From early infancy, humans' ability to interact with each other is substantially strengthened by vision, with several visual processes tuned to support prosocial behaviour. For instance, a natural predisposition to look at human faces and to detect biological motion is present at birth. More refined abilities, such as understanding and anticipating others' actions and intentions, develop progressively with age, leading within a few years to a full capability for interaction based on mutual understanding, joint coordination and collaboration.
A key challenge of robotics research today is to endow artificial agents with similarly advanced visual perception skills, with the ultimate goal of designing machines able to recognise and interpret both explicit and implicit communication cues embedded in human behaviour. These achievements would pave the way for the large-scale use of Human-Robot Interaction applications in a variety of contexts, ranging from the design of personal robots to physical, social and cognitive rehabilitation.

Topics of interest include:
* Computational Models of Visual Perception for Interaction
* Perception of Intentions and Actions
* Vision for Robotics and Artificial Intelligence in Social Contexts
* Neuroscientific Bases of Interaction
* Development of Social Cognition in Humans
* Social Signal Recognition and Analysis
* Human-Robot Interaction
* Emotion Recognition for Interaction
* Machine Learning for Visual Perception