embodied AI, embodied audition
New EU project EARS on Embodied Audition for RobotS
29/12/13 15:20 Filed under: projects
The European project EARS, starting on January 1, 2014 and running for three years, will address the fundamental problem of how to model and implement robot audition. The Cognitive Robotics group at Humboldt-Universität zu Berlin (Prof. Verena Hafner) will be one of six partners, alongside the University of Erlangen-Nuremberg, Germany (Prof. Walter Kellermann, project coordinator), Imperial College London, UK (Prof. Patrick Naylor), Ben-Gurion University of the Negev, Israel (Prof. Boaz Rafaely), INRIA, France (Prof. Radu Horaud), and Aldebaran Robotics, France (Dr. Rodolphe Gelin).
Summary. The success of future natural, intuitive human-robot interaction (HRI) will critically depend on how responsive the robot is to all forms of human expression and how well it is aware of its environment. With acoustic signals distinctively characterizing physical environments, and speech being the most effective means of communication among humans, truly humanoid robots must be able to fully extract the rich auditory information from their environment and to use voice communication as much as humans do. While vision-based HRI is well developed, current limitations in robot audition do not allow for such effective, natural acoustic human-robot communication in real-world environments, mainly because the desired acoustic signals are severely degraded by noise, interference and reverberation when captured by the robot’s microphones. To overcome these limitations, EARS will provide intelligent “ears” with close-to-human auditory capabilities and use them for HRI in complex real-world environments. Novel microphone arrays and powerful signal processing algorithms shall be able to localize and track multiple sound sources of interest and to extract and recognize the desired signals.
After fusion with robot vision, embodied robot cognition will then derive HRI actions and knowledge about the entire scene, and feed this back to the acoustic interface for further auditory scene analysis. As a prototypical application, EARS will consider a welcoming robot in a hotel lobby, a scenario that presents all of the above challenges. Since this scenario represents a large class of generic applications, it is of key interest to industry; accordingly, a leading European robot manufacturer will integrate EARS’s results into a robot platform for the consumer market and validate it. In addition, the provision of open-source software and an advisory board with key players from the relevant robot industry should help to make EARS a turnkey project for promoting audition in the robotics world.
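To give a flavor of the kind of array processing the summary alludes to, here is a minimal sketch of localizing a sound source by estimating the time difference of arrival (TDOA) between two microphones with the classic GCC-PHAT method. This is a generic textbook technique shown for illustration only, not the EARS project's actual software; the function `gcc_phat` and all signal parameters below are hypothetical.

```python
import numpy as np

def gcc_phat(x, y, fs, max_tau=None, interp=1):
    """Estimate the TDOA between two microphone signals x and y using
    generalized cross-correlation with phase transform (GCC-PHAT).
    Hypothetical helper for illustration."""
    n = x.shape[0] + y.shape[0]          # zero-pad to avoid circular wrap-around
    X = np.fft.rfft(x, n=n)
    Y = np.fft.rfft(y, n=n)
    R = X * np.conj(Y)
    R /= np.abs(R) + 1e-15               # PHAT weighting: keep only phase information
    cc = np.fft.irfft(R, n=interp * n)   # back to the lag domain
    max_shift = interp * n // 2
    if max_tau is not None:
        max_shift = min(int(interp * fs * max_tau), max_shift)
    # Re-center so lags run from -max_shift to +max_shift
    cc = np.concatenate((cc[-max_shift:], cc[:max_shift + 1]))
    shift = np.argmax(np.abs(cc)) - max_shift
    return shift / float(interp * fs)    # TDOA in seconds (negative: y lags x)

# Toy usage: a broadband source reaching the second microphone 0.5 ms later
fs = 16000
rng = np.random.default_rng(0)
src = rng.standard_normal(1600)          # 0.1 s of white noise at 16 kHz
d = 8                                    # 8 samples = 0.5 ms delay
x = np.concatenate((src, np.zeros(d)))
y = np.concatenate((np.zeros(d), src))
print(f"estimated TDOA: {gcc_phat(x, y, fs, max_tau=0.001) * 1000:.2f} ms")
# prints approximately -0.50 ms, i.e. y lags x by half a millisecond
```

From such pairwise delays across a microphone array, the direction of a speaker can be triangulated; tracking multiple speakers in a reverberant lobby, as targeted by EARS, requires considerably more sophisticated processing.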
Update 03/2014: The EARS webpage is now online.