A visual bag of words method for interactive qualitative localization and mapping

Abstract: Localization for low-cost humanoid or animal-like personal robots has to rely on cheap sensors and must be robust to user manipulations of the robot. We present a visual localization and map-learning system that relies on vision only and is able to incrementally learn to recognize the different rooms of an apartment from any robot position. The system is inspired by the visual categorization algorithms known as bag-of-words methods, which we modified to be fully incremental and to allow user-interactive training. After a short training time, our system reliably recognizes the room in which the robot is located, and it remains stable over long-term use. Empirical validations on a real robot and on an image database acquired in real environments are presented.
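To make the abstract's idea concrete, here is a minimal, hypothetical sketch of incremental bag-of-words room recognition: each image is reduced to a set of visual-word indices, word-to-room statistics are updated online when the user labels the current room, and localization is a normalized vote over the words seen. This is illustrative only; the class name, voting scheme, and integer word IDs are assumptions, and the paper's actual pipeline quantizes local image features against an incrementally built visual vocabulary.

```python
from collections import defaultdict


class IncrementalBoWLocalizer:
    """Toy sketch (not the paper's implementation) of incremental
    bag-of-words room recognition with user-interactive training."""

    def __init__(self):
        # word -> {room: count}, built fully incrementally
        self.word_room_counts = defaultdict(lambda: defaultdict(int))

    def train(self, words, room):
        """User-interactive training: the user names the current room,
        and the statistics of the observed visual words are updated."""
        for w in words:
            self.word_room_counts[w][room] += 1

    def localize(self, words):
        """Vote for rooms using the visual words of the current image;
        each word's vote is normalized by how often it was seen."""
        votes = defaultdict(float)
        for w in words:
            counts = self.word_room_counts.get(w)
            if not counts:
                continue  # unknown word: no vote
            total = sum(counts.values())
            for room, c in counts.items():
                votes[room] += c / total
        return max(votes, key=votes.get) if votes else None


# Usage: train on a few labeled images, then localize a new one.
loc = IncrementalBoWLocalizer()
loc.train([1, 2, 3], "kitchen")   # words from a kitchen image
loc.train([4, 5], "living room")  # words from a living-room image
print(loc.localize([1, 2]))       # → kitchen
```

In the real system the word sets come from quantized local image features, and recognition is integrated over successive images rather than decided from a single vote.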
Document type:
Conference paper
International Conference on Robotics and Automation, 2007, Italy. pp. 3921-3926, 2007, 〈10.1109/ROBOT.2007.364080〉

Cited literature: 25 references

https://hal-ensta.archives-ouvertes.fr/hal-00640996
Contributor: David Filliat
Submitted on: Tuesday, November 15, 2011 - 11:01:26
Last modified on: Wednesday, November 29, 2017 - 15:50:47
Long-term archiving on: Friday, November 16, 2012 - 10:57:09

File

Filliat_ICRA07.pdf
Publisher files allowed on an open archive

Citation

David Filliat. A visual bag of words method for interactive qualitative localization and mapping. International Conference on Robotics and Automation, 2007, Italy. pp.3921 - 3926, 2007, 〈10.1109/ROBOT.2007.364080〉. 〈hal-00640996〉

Metrics

Record views: 220
File downloads: 818