In this paper, we investigate the use of proxemics and dynamics for automatically identifying conversing groups, or so-called F-formations. More formally, we aim to automatically determine whether wearable sensor data from two people is indicative of F-formation membership. We also explore the problem of jointly detecting membership and more descriptive information about the pair relating to the role each takes in the conversation (i.e. speaker or listener). We jointly model the concepts of proxemics and dynamics using binary proximity and acceleration obtained through a single wearable sensor per person. We test our approaches on the publicly available MatchNMingle dataset, which was collected during real-life mingling events. We find that fusing these two modalities performs significantly better than using either modality alone, yielding an AUC of 0.975 when data from 30-second windows are used. Furthermore, our investigation into role detection shows that each role pair requires a different time resolution for accurate detection.
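The 30-second windowing mentioned in the abstract could be sketched as follows. This is an illustrative assumption, not the authors' pipeline: the sampling rate, the non-overlapping window scheme, and the per-window statistics (mean, standard deviation, energy) are all hypothetical choices for demonstration.

```python
import numpy as np

def window_features(signal, fs, window_s=30.0):
    """Split a 1-D sensor stream into non-overlapping windows and
    extract simple per-window statistics (mean, std, energy).
    `fs` is the sampling rate in Hz; a trailing partial window is dropped.
    Illustrative only -- not the feature set used in the paper."""
    win = int(window_s * fs)
    n = len(signal) // win
    windows = np.asarray(signal[: n * win]).reshape(n, win)
    return np.column_stack([
        windows.mean(axis=1),        # mean acceleration per window
        windows.std(axis=1),         # variability per window
        (windows ** 2).mean(axis=1), # mean signal energy per window
    ])

# Example: 5 minutes of a (synthetic) 20 Hz accelerometer magnitude stream
rng = np.random.default_rng(0)
acc = rng.normal(size=20 * 300)
feats = window_features(acc, fs=20)
print(feats.shape)  # 10 windows x 3 features
```

Feature vectors like these, computed per person and concatenated per pair, are one plausible input representation for the membership classifier described in the abstract.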

Original language: English
Title of host publication: 2019 8th International Conference on Affective Computing and Intelligent Interaction Workshops and Demos (ACIIW)
Publisher: IEEE
Pages: 147-153
Number of pages: 7
ISBN (Electronic): 978-1-7281-3891-6
ISBN (Print): 978-1-7281-3892-3
DOIs
Publication status: Published - 2019
Event: 8th International Conference on Affective Computing and Intelligent Interaction Workshops and Demos, ACIIW 2019 - Cambridge, United Kingdom
Duration: 3 Sep 2019 - 6 Sep 2019

Conference

Conference: 8th International Conference on Affective Computing and Intelligent Interaction Workshops and Demos, ACIIW 2019
Country: United Kingdom
City: Cambridge
Period: 3/09/19 - 6/09/19

Research areas

• conversing groups, F-formation detection, recurrent neural networks, role identification, wearable sensing
