Isabelle Ferrané
Assistant Professor, University of Toulouse III, IRIT
Cited by
Two-handed gesture recognition and fusion with speech to command a robot
B Burger, I Ferrané, F Lerasle, G Infantes
Autonomous Robots 32, 129-147, 2012
Rackham: An interactive robot-guide
A Clodic, S Fleury, R Alami, R Chatila, G Bailly, L Brethes, M Cottret, ...
ROMAN 2006-The 15th IEEE International Symposium on Robot and Human …, 2006
Blip10000: A social video dataset containing SPUG content for tagging and retrieval
S Schmiedeke, P Xu, I Ferrané, M Eskevich, C Kofler, MA Larson, ...
Proceedings of the 4th ACM Multimedia Systems Conference, 96-101, 2013
Automatic speech recognition predicts speech intelligibility and comprehension for listeners with simulated age-related hearing loss
L Fontan, I Ferrané, J Farinas, J Pinquier, J Tardieu, C Magnen, P Gaillard, ...
Journal of Speech, Language, and Hearing Research 60 (9), 2394-2405, 2017
Multimodal interaction abilities for a robot companion
B Burger, I Ferrané, F Lerasle
Computer Vision Systems: 6th International Conference, ICVS 2008 Santorini …, 2008
A multi-modal perception based assistive robotic system for the elderly
C Mollaret, AA Mekonnen, F Lerasle, I Ferrané, J Pinquier, B Boudet, ...
Computer Vision and Image Understanding 149, 78-97, 2016
Using phonologically weighted Levenshtein distances for the prediction of microscopic intelligibility
L Fontan, I Ferrané, J Farinas, J Pinquier, X Aumont
Interspeech 2016, 650-654, 2016
Speaker role recognition to help spontaneous conversational speech detection
B Bigot, I Ferrané, J Pinquier, R André-Obrecht
Proceedings of the 2010 international workshop on Searching spontaneous …, 2010
Looking for relevant features for speaker role recognition
B Bigot, J Pinquier, I Ferrané, R André-Obrecht
Interspeech, 1057-1060, 2010
Perceiving user's intention-for-interaction: A probabilistic multimodal data fusion scheme
C Mollaret, AA Mekonnen, I Ferrané, J Pinquier, F Lerasle
2015 IEEE International Conference on Multimedia and Expo (ICME), 1-6, 2015
Overview of MediaEval 2012 genre tagging task
S Schmiedeke, C Kofler, I Ferrané
MediaEval 2012 Workshop, Pisa, Italy, 2012
Mutual assistance between speech and vision for human-robot interaction
B Burger, F Lerasle, I Ferrané, A Clodic
2008 IEEE/RSJ International Conference on Intelligent Robots and Systems …, 2008
Le Projet BDLEX de base de données lexicales du français écrit et parlé
G Pérennou, M de Calmès, I Ferrané, JM Pécatte
Séminaire lexique, Toulouse, 41-56, 1992
A similarity-based approach for audiovisual document classification using temporal relation analysis
ZAA Ibrahim, I Ferrané, P Joly
EURASIP Journal on Image and Video Processing 2011, 1-19, 2011
KLIMT: Intermediations Technologies and Multimedia Indexing
V Conan, I Ferrané, P Joly, C Vasserot
Third International Workshop on Content-Based Multimedia Indexing (CBMI'03 …, 2003
Exploiting speaker segmentations for automatic role detection. An application to broadcast news documents
B Bigot, I Ferrané, J Pinquier
2010 International Workshop on Content Based Multimedia Indexing (CBMI), 1-6, 2010
A multi-modal perception based architecture for a non-intrusive domestic assistant robot
C Mollaret, AA Mekonnen, J Pinquier, F Lerasle, I Ferrané
2016 11th ACM/IEEE International Conference on Human-Robot Interaction (HRI …, 2016
Detecting individual role using features extracted from speaker diarization results
B Bigot, I Ferrané, J Pinquier, R André-Obrecht
Multimedia Tools and Applications 60, 347-369, 2012
Benchmark for kitchen20, a daily life dataset for audio-based human action recognition
M Moreaux, MG Ortiz, I Ferrané, F Lerasle
2019 International Conference on Content-Based Multimedia Indexing (CBMI), 1-6, 2019
Temporal relation analysis in audiovisual documents for complementary descriptive information
ZAA Ibrahim, I Ferrané, P Joly
International Workshop on Adaptive Multimedia Retrieval, 141-154, 2005