Abstract
In this paper, we present the first attempt to analyse differing levels of social involvement in free-standing conversing groups (so-called F-formations) from static images. In addition, we enrich state-of-the-art F-formation modelling by learning a frustum of attention that accounts for spatial context, since F-formation configurations vary with the arrangement of furniture and the non-uniform crowdedness of the space during mingling scenarios. The majority of prior works have treated the labelling of conversing groups as an objective task, requiring only a single annotator. However, we show that by embracing the subjectivity of social involvement, we not only generate a richer model of the social interactions in a scene but also significantly improve F-formation detection. We carry out extensive experimental validation of our proposed approach by collecting a novel set of multi-annotator involvement labels on the publicly available Idiap Poster Data, making it the only multi-annotator labelled database of free-standing conversing groups currently available.
Original language | English |
---|---|
Title of host publication | Proceedings - 29th IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2016 |
Editors | Lisa O'Conner |
Place of Publication | Los Alamitos, CA |
Publisher | IEEE |
Pages | 1086-1095 |
Number of pages | 10 |
ISBN (Electronic) | 978-1-4673-8851-1 |
DOIs | |
Publication status | Published - 2016 |
Event | CVPR 2016: 29th IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, United States, 26 Jun 2016 → 1 Jul 2016 |
Conference
Conference | CVPR 2016 |
---|---|
Country/Territory | United States |
City | Las Vegas |
Period | 26/06/16 → 1/07/16 |
Keywords
- Psychology
- Semantics
- Image edge detection
- Context
- Visualization
- Surveillance
- Bridges