Speech to Head Gesture Mapping in Multimodal Human-Robot Interaction

Abstract: In human-human interaction, para-verbal and non-verbal communication are naturally aligned and synchronized. Coordinating speech and head gestures raises several difficulties: the meaning conveyed, how the gesture is performed with respect to speech characteristics, their relative temporal arrangement, and their coordinated organization within the phrasal structure of the utterance. In this research, we focus on the mechanism of mapping head gestures to speech prosodic characteristics in natural human-robot interaction. Prosody patterns and head gestures are aligned separately as a parallel multi-stream HMM model. The mapping between speech and head gestures is based on Coupled Hidden Markov Models (CHMMs), which can be seen as a collection of HMMs, one for the video stream and one for the audio stream. Experimental results with the Nao robot are reported.
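As a rough illustration of the coupled two-stream structure the abstract describes, the sketch below (Python/NumPy, not the authors' implementation) runs a forward pass over a discrete-observation coupled HMM in which each chain's transition conditions on both chains' previous states. The factored transition P(a_t | a_{t-1}, v_{t-1}) * P(v_t | a_{t-1}, v_{t-1}), all parameter names, and the discrete emission model are illustrative assumptions.

import numpy as np

def chmm_forward(pi_a, pi_v, A_a, A_v, B_a, B_v, obs_a, obs_v):
    # Hypothetical coupled-HMM forward pass over two chains
    # (audio states Na, video states Nv).
    #   pi_a (Na,), pi_v (Nv,)      : initial state distributions
    #   A_a (Na, Nv, Na)            : P(a_t | a_{t-1}, v_{t-1})
    #   A_v (Na, Nv, Nv)            : P(v_t | a_{t-1}, v_{t-1})
    #   B_a (Na, Ka), B_v (Nv, Kv)  : discrete emission probabilities
    #   obs_a, obs_v                : integer sequences of equal length T
    # Returns the joint log-likelihood of the paired sequences.
    alpha = np.outer(pi_a * B_a[:, obs_a[0]], pi_v * B_v[:, obs_v[0]])
    scale = alpha.sum()
    alpha /= scale
    log_lik = np.log(scale)
    for t in range(1, len(obs_a)):
        # Propagate the joint belief through the factored coupled transition.
        pred = np.einsum('ij,ijk,ijl->kl', alpha, A_a, A_v)
        alpha = pred * np.outer(B_a[:, obs_a[t]], B_v[:, obs_v[t]])
        scale = alpha.sum()  # rescale to avoid numerical underflow
        alpha /= scale
        log_lik += np.log(scale)
    return log_lik

# Tiny usage example with random, properly normalized parameters.
rng = np.random.default_rng(0)
Na, Nv, Ka, Kv, T = 3, 4, 5, 6, 20
norm = lambda m: m / m.sum(axis=-1, keepdims=True)
pi_a, pi_v = norm(rng.random(Na)), norm(rng.random(Nv))
A_a, A_v = norm(rng.random((Na, Nv, Na))), norm(rng.random((Na, Nv, Nv)))
B_a, B_v = norm(rng.random((Na, Ka))), norm(rng.random((Nv, Kv)))
obs_a, obs_v = rng.integers(0, Ka, T), rng.integers(0, Kv, T)
print(chmm_forward(pi_a, pi_v, A_a, A_v, B_a, B_v, obs_a, obs_v))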
Document type:
Conference paper
The European Conference on Mobile Robots (ECMR), Sep 2011, Örebro, Sweden. 2012. DOI: 10.1007/978-3-642-27449-7_14

Cited literature: 25 references

https://hal-ensta.archives-ouvertes.fr/hal-01169983
Contributor: Amir Aly
Submitted on: Tuesday, October 13, 2015 - 19:22:14
Last modified on: Friday, December 8, 2017 - 14:42:16
Archived on: Thursday, January 14, 2016 - 18:20:56

File

Aly_ECMR2011.pdf
Files produced by the author(s)

License

Public domain

Citation

Amir Aly, Adriana Tapus. Speech to Head Gesture Mapping in Multimodal Human-Robot Interaction. The European Conference on Mobile Robots (ECMR), Sep 2011, Örebro, Sweden. 2012. DOI: 10.1007/978-3-642-27449-7_14. hal-01169983v2
