Note: this page is not yet up to date.

M. Zbakh, Z. Haddad & J. Lopez Krahe, 2015. A cognitive approach for signs classification: Development of an online French Sign Language dictionary. Pattern Recognition Letters. Volume 67, pp 28--38. PDF
S. Fajardo Flores & D. Archambault, 2014. Evaluation of a prototype of a multimodal interface to work with mathematics. AMSE Modeling C. Volume 75, pp 106--118.
Y. Chen, Z. Haddad & J. Lopez Krahe, 2014. A new approach for automatic transformation of pedagogic images into tactile images. AMSE Modeling C. Volume 75, pp 43--54.
M. Zbakh, H. Daassi-Gnaba & J. Lopez Krahe, 2011. Talking Head Generating French Cued Speech for Deaf and Hard of Hearing People. AMSE-journals. Volume 71, Issue 3, pp 117--127. PDF
M. Zbakh, H. Daassi-Gnaba & J. Lopez Krahe, 2010. Combinaison de reconnaissance de la parole, reconnaissance des émotions et tête parlante codeuse en LPC pour les personnes sourdes et malentendantes. Sciences et Technologies pour le Handicap. Volume 3, pp 239--253. PDF
Zehira Haddad, Yong Chen & Jaime Lopez Krahe, 2015. A pattern recognition approach to make accessible the geographic images for blind and visually impaired. In Image Processing (ICIP), 2015 IEEE International Conference on, pp 3205--3209.
Mohammed Zbakh, Zehira Haddad & Jaime Lopez Krahe, 2014. Toward a reversed dictionary of French Sign Language (FSL) on the Web. In Computers Helping People with Special Needs, Springer pp 423--430.
Yong Chen, Zehira Haddad & Jaime Lopez Krahe, 2014. Contribution to the Automation of the Tactile Images Transcription Process. In Computers Helping People with Special Needs, Springer pp 642--649.
Benoît Lathière & Dominique Archambault, 2014. Improving Accessibility of Lectures for Deaf and Hard-of-Hearing Students Using a Speech Recognition System and a Real-Time Collaborative Editor. In Computers Helping People with Special Needs, Springer pp 490--497.
Silvia Fajardo Flores & Dominique Archambault, 2014. Multimodal Interface for Working with Algebra: Interaction between the Sighted and the Non Sighted. In Computers Helping People with Special Needs, Springer pp 606--613.
Godefroy Clair, Lassana Sangare, Pierre Hamant, Gérard Uzan, Jérôme Dupire & Dominique Archambault, 2013. Technological Mediation for Visually Impaired People in Exhibition Context. In AAATE, pp 4.
G. Uzan, S. M'Ballo, P. Wagstaff & M. Dejeammes, 2011. SOLID: A Model of the information requirements in transport systems for sensory impaired people. Invited presentation, 18th World Congress on Intelligent Transport Systems, Orlando.
G. Uzan, S. M'Ballo, M. Dejeammes & C. Rennesson, 2010. Travel visually impaired in urban areas: analysis of security needs, location and orientation, and possible developments. In Proceedings 12th international conference on mobility and transport for elderly and disabled persons.
Dominique Archambault, 2010. Entertainment software accessibility: introduction to the special thematic session. Springer
Dominique Archambault, 2009. Non visual access to mathematical contents: State of the art and prospective. In Proceedings of the WEIMS Conference, pp 43--52.
Hela Daassi-Gnaba & Jaime Lopez Krahe, 2009. Universal combined system: speech recognition, emotion recognition and talking head for deaf and hard of hearing people. In Conférence-AAATE, pp 503--508.
Dominique Archambault & Vincent Spiewak, 2009. Odt2dtbook - OpenOffice.org Save-as-Daisy Extension. Assistive Technology from Adapted Equipment to Inclusive Environments: AAATE 2009. Volume 25, IOS Press pp 212.
Hela Daassi-Gnaba & Jaime Lopez Krahe, 2009. Combination of speech recognition, emotion recognition and talking head for deaf and hard of hearing people. In Conférence-DRT4ALL.
G. Thomas, S. Natkin & D. Archambault, 2009. Pyvox 2: an audio game accessible to visually impaired people playable without visual nor verbal instruction.
Philippe Alexia, Jaime Lopez Krahe & Jean-Paul Mazeau, 2008. Clavier virtuel multimodal. Applications à des situations de handicap moteur. In Conférence-Handicap 08.
Toshihiro Kanahori, Dominique Archambault & Masakazu Suzuki, 2008. Universal Authoring System for Braille Materials by Collaboration of UMCL and Infty. In Lecture Notes in Computer Science, Springer pp 883--887.
P. Foucher, C. Moreau, C. Robalo, C. Tranchant, N. Zouba & J. Lopez Krahe, 2007. Online French Sign Language (LSF) classification system: from LSF to French. In Human07: International Conference on Human Machine Interaction-IEEE, pp 66--70.
P. Foucher, G. Uzan & Jaime Lopez Krahe, 2007. Mobile Interface to recognize pedestrian traffic lights: Needs and propositions. In Human07: International Conference on Human Machine Interaction-IEEE, pp 356--361.
J. Lopez Krahe, 2006. L'innovation numérique au service des handicapés dans l'université numérique. In Colloque international L'université à l'ère du numérique, pp 22--44.
Jérôme Dupire et al., 2006. A Toolbox For Movable Books Digitization. In VAST'08. 9th Int. Symposium on Virtual Reality, Archaeology and Cultural Heritage, Nicosia, Cyprus, pp 61--63.
Pierre Cubaud, Jérôme Dupire & Alexandre Topol, 2007. Fluid interaction for the document in context. In Proceedings of the 7th ACM/IEEE-CS joint conference on Digital libraries, pp 504--504.
G. Uzan & A. Teixeira, 2003. Interactions vocales pour non-voyants : de l'évaluation de services à celle d'un modèle d'interaction. In Actes IHM.
P. Foucher, D. Moreno Eddowes & J. Lopez Krahe, 2005. Traffic Light Silhouettes Recognition using Fourier Descriptors. In Visualization, Imaging, And Image Processing: Fifth IASTED International Conference Proceedings.
G. Baudoin, O. Venard, G. Uzan, A. Rousseau, Y. Benabou, A. Paumier & J. Cesbron, 2005. The RAMPE Project: Interactive, Auditive Information System for the Mobility of Blind People in Public Transports. In Proc. of the 5th International Conference on Intelligent Transportation Systems Telecommunications, ITST 2005.
G. Baudoin, O. Venard, G. Uzan, A. Rousseau, Y. Benabou, A. Paumier & J. Cesbron, 2005. How can blinds get information in public transports using PDA? The RAMPE auditive man machine interface. In Proc. 8th AAATE Conference, Citeseer pp 304--316.
A. Rojbi, J-C. Schmitt & G. Alquie, 2004. Optimisation de la segmentation vidéo en fonction de la nature et de la complexité du flux vidéo. In International Symposium on Image Video Communications over fixed and mobile networks.
A. Rojbi, 2005. Système de segmentation adaptatif. In Méthodologies et Heuristiques pour l’Optimisation des Systèmes Industriels.
Elisabetta Bevacqua, Dirk Heylen, Catherine Pelachaud & Marion Tellier, 2007. Facial feedback signals for ECAs. In AISB, Issue 7, pp 328--334.
Elisabetta Bevacqua, Maurizio Mancini, Radoslaw Niewiadomski & Catherine Pelachaud, 2007. An expressive ECA showing complex emotions. In Proceedings of the AISB annual convention, Newcastle, UK, pp 208--216.
Elisabetta Bevacqua, Amaryllis Raouzaiou, Christopher Peters, George Caridakis, Kostas Karpouzis, Catherine Pelachaud & Maurizio Mancini, 2006. Multimodal sensing, interpretation and copying of movements by a virtual agent. In Perception and Interactive Technologies, Springer pp 164--174.
Elisabetta Bevacqua & Catherine Pelachaud, 2004. Expressive audio-visual speech. Computer Animation and Virtual Worlds. Volume 15, Issue 3-4, Wiley Online Library pp 297--304.
E. Bevacqua, M. Mancini & C. Pelachaud, 2004. Speaking with emotions. In AISB 2004 Convention, pp 58.
Elisabetta Bevacqua & Catherine Pelachaud, 2003. Triphone-based Coarticulation Model. In AVSP 2003-International Conference on Audio-Visual Speech Processing.
Stéphanie Buisine, Sarkis Abrilian, Radoslaw Niewiadomski, Jean-Claude Martin, Laurence Devillers & Catherine Pelachaud, 2006. Perception d'émotions mélangées: Du corpus vidéo à l'agent expressif. In WACA'06 second Workshop francophone sur les Agents Conversationnels Animés, pp 83--91.
Stéphanie Buisine, Sarkis Abrilian, Radoslaw Niewiadomski, Jean-Claude Martin, Laurence Devillers & Catherine Pelachaud, 2006. Perception of blended emotions: From video corpus to expressive agent. In Intelligent virtual agents, pp 93--106.
Ginevra Castellano & Maurizio Mancini, 2007. Analysis of emotional gestures from videos for the generation of expressive behaviour in an ECA. In Proceedings of GW2007 - 7th International Workshop on Gesture in Human-Computer Interaction and Simulation 2007, poster session, pp 14.
Fred Charles, Samuel Lemercier, Thurid Vogt, Nikolaus Bee, Maurizio Mancini, Jérôme Urbain, Marc Price & Elisabeth André, 2007. Affective interactive narrative in the CALLAS project. In Virtual Storytelling. Using Virtual Reality Technologies for Storytelling, Springer pp 210--213.
NE. Chafai, C. Pelachaud & D. Pelé, 2007. A semantic description of gesture in BML. In Proceedings of AISB’07 Annual Convention Workshop on Language, Speech and Gesture for Expressive Characters, Newcastle, England.
NE. Chafai, C. Pelachaud & D. Pelé, 2007. From a typology to the relevant physical dimensions of gestures. In International Society for Gesture Studies Conference 2007: Integrating Gestures, Chicago.
Nicolas Ech Chafai, Catherine Pelachaud & Dan Pelé, 2006. Gesture expressivity modulations in an ECA application. In Intelligent Virtual Agents, pp 181--192.
Björn Hartmann, Maurizio Mancini, Stéphanie Buisine & Catherine Pelachaud, 2005. Design and evaluation of expressive gesture synthesis for embodied conversational agents. In Proceedings of the fourth international joint conference on Autonomous agents and multiagent systems, pp 1095--1096.
S. Kopp, B. Krenn, S. Marsella, A. Marshall, C. Pelachaud, H. Pirker, K. Thorisson & H. Vilhjálmsson, 2006. Towards a common framework for multimodal generation in embodied conversation agents: a behavior markup language. In Intelligent Virtual Agents.
Myriam Lamolle, Maurizio Mancini, Catherine Pelachaud, Sarkis Abrilian, Jean-Claude Martin & Laurence Devillers, 2005. Contextual factors and adaptative multimodal human-computer interaction: multi-level specification of emotion and expressivity in embodied conversational agents. In Modeling and Using Context, Springer pp 225--239. Paris.
Maurizio Mancini, Ginevra Castellano, Elisabetta Bevacqua & Christopher Peters, 2007. Copying Behaviour of Expressive Motion. In Computer Vision/Computer Graphics Collaboration Techniques, Springer pp 180--191. Rocquencourt, France.
Maurizio Mancini & Catherine Pelachaud, 2007. Implementing distinctive behavior for conversational agents. In Gesture-Based Human-Computer Interaction and Simulation, Springer pp 163--174. Lisbon, Portugal.
Maurizio Mancini & Catherine Pelachaud, 2007. Dynamic behavior qualifiers for conversational agents. In Intelligent Virtual Agents, pp 112--124. Paris, France.
Jean-Claude Martin, Sarkis Abrilian, Laurence Devillers, Myriam Lamolle, Maurizio Mancini & Catherine Pelachaud, 2005. Levels of Representation in the Annotation of Emotion for the Specification of Expressivity in ECAs. In Intelligent Virtual Agents, pp 405--417. Greece.
Vincent Maya, Myriam Lamolle & Catherine Pelachaud, 2004. Influences and Embodied Conversational Agents: Tools for automatic processing of effects. In Proceedings of the Third International Joint Conference on Autonomous Agents and Multiagent Systems-Volume 3, pp 1306--1307. New York, USA.
Vincent Maya, Myriam Lamolle & Catherine Pelachaud, 2004. Embodied Conversational Agents and Influences. In ECAI, Volume 16, pp 1057. Valencia, Spain.
Vincent Maya, Myriam Lamolle & Catherine Pelachaud, 2004. Influenceable Embodied Conversational Agents and Influences. In ECAI, Volume 16, pp 1057. Leeds, England.
Radoslaw Niewiadomski & Catherine Pelachaud, 2007. Fuzzy similarity of facial expressions of embodied agents. In Intelligent Virtual Agents, pp 86--98. Paris, France.
Radoslaw Niewiadomski & Catherine Pelachaud, 2007. Model of facial expressions management for an embodied conversational agent. In Affective Computing and Intelligent Interaction, Springer pp 12--23. Lisbon, Portugal.
Magalie Ochs, Catherine Pelachaud & David Sadek, 2007. An empathic rational dialog agent. In Affective Computing and Intelligent Interaction, Springer pp 338--349. Lisbon, Portugal.
Magalie Ochs, Radoslaw Niewiadomski, Catherine Pelachaud & David Sadek, 2005. Intelligent expressions of emotions. In Affective computing and intelligent interaction, Springer pp 707--714. China.
Magalie Ochs, Catherine Pelachaud & David Sadek, 2007. Emotion elicitation in an empathic virtual dialog agent. In Proceedings of the Second European Cognitive Science Conference (EuroCogSci), Delphi, Greece.
Magalie Ochs, Catherine Pelachaud & David Sadek, 2006. A Coding Scheme for Designing: Computational Model of Emotion Elicitation. In Workshop on Corpora for Research on Emotion and Affect, 23 May 2006, pp 11. Genoa.
Magalie Ochs, Karl Devooght, David Sadek & Catherine Pelachaud, 2006. A computational model of capability-based emotion elicitation for rational agent. In Proceedings of the 1st workshop on Emotion and Computing-Current Research and Future Impact, Bremen, Germany.
Christopher Peters, 2007. Designing an emotional and attentive virtual infant. In Affective Computing and Intelligent Interaction, Springer pp 386--397. Lisbon, Portugal.
Christopher Peters, Catherine Pelachaud, Elisabetta Bevacqua, Magalie Ochs, Nicolas Ech Chafai & Maurizio Mancini, 2006. Social capabilities for autonomous virtual characters. In International Digital Games Conference, Games Congress 2006, pp 37--48. Portalegre, Portugal.
Christopher Peters, Catherine Pelachaud, Elisabetta Bevacqua, Maurizio Mancini & Isabella Poggi, 2005. A model of attention and interest using gaze behavior. In Intelligent virtual agents, pp 229--240. Greece.
Catherine Pelachaud & Maurizio Mancini, 2007. Studies on Behavior Expressivity for an Embodied Conversational Agent. Citeseer, Chicago.
C. Pelachaud, C. Peters & E. Bevacqua, 2007. Model of gaze behaviors in conversation settings. In International Society for Gesture Studies Conference 2007: Integrating Gestures, Chicago.
Catherine Pelachaud & Massimo Bilvi, 2003. Modelling gaze behavior for conversational agents. In Intelligent Virtual Agents, pp 93--100. Germany.
Isabella Poggi, Catherine Pelachaud & E. Magno Caldognetto, 2003. Gestural mind markers in ECAs. In Gesture-Based Communication in Human-Computer Interaction, Springer pp 338--349. Melbourne.
Nasser Rezzoug, Philippe Gorce, Alexis Heloir, Sylvie Gibet, Nicolas Courty, Jean-François Kamp, Franck Multon & Catherine Pelachaud, 2006. Virtual humanoids endowed with expressive communication gestures: the HuGEx project. In IEEE International Conference on Systems, Man and Cybernetics, 2006 (SMC'06), Volume 5, pp 4445--4450. Taiwan.
Thomas Rist, Markus Schmitt, Catherine Pelachaud & Massimo Bilvi, 2003. Towards a simulation of conversations with expressive embodied speakers and listeners. In Computer Animation and Social Agents, 2003. 16th International Conference on, pp 5--10. New Brunswick.
Marc Schröder, Laurence Devillers, Kostas Karpouzis, Jean-Claude Martin, Catherine Pelachaud, Christian Peter, Hannes Pirker, Björn Schuller, Jianhua Tao & Ian Wilson, 2007. What should a generic emotion markup language be able to represent?. In Affective Computing and Intelligent Interaction, Springer pp 440--451. Lisbon.
P. Wallis & C. Pelachaud, 2005. Social Presence Cues for Virtual Humanoids. In AISB 2005, University of Hertfordshire.
L. Jacob, Y. Chen & G. Uzan, 2015. Correction vocalisée et mise en accessibilité de documents numérisés : l'exemple du projet "Correct". In Colloque Jeunes Chercheuses Jeunes Chercheurs - Handicap, Vieillissement, Indépendance, Technologies, pp 73--80.
Y. Chen & Z. Haddad, 2015. Apport dans la reconnaissance des symboles pour l'accès haptique aux images. In Colloque Jeunes Chercheuses Jeunes Chercheurs - Handicap, Vieillissement, Indépendance, Technologies, pp 30--39.
S. Jebali & G. Clair, 2015. Conception d'interfaces multimodales pour personnes handicapées hospitalisées. In Colloque Jeunes Chercheuses Jeunes Chercheurs - Handicap, Vieillissement, Indépendance, Technologies, pp 15--22.
Y. Chen, Z. Haddad & J. Lopez Krahe, 2014. Vers une transformation automatique des images pédagogiques en images tactiles. In IFRATH 8ème Conférence Handicap 2014, ISBN 978-2-9536899-4-5.
S. Jebali, A. Lassoued, O. Fettis & Z. Haddad, 2014. Allzheimer : une application Android pour accompagner les patients atteints d'Alzheimer. In IFRATH 8ème Conférence Handicap 2014, ISBN 978-2-9536899-4-5.
Godefroy Clair, Lassana Sangare, Pierre Hamant, Gérard Uzan, Jérôme Dupire & Dominique Archambault, 2013. Accessibilité pour les aveugles dans le cadre de la visite d'un musée. In Colloque Jeunes Chercheuses Jeunes Chercheurs-Handicap, Vieillissement, Indépendance, Technologies, pp 4.
M. Zbakh, I. Lopez Fontana, K. Ahnache, A. Mortera & Jaime Lopez Krahe, 2010. Pictokids: un logiciel de communication pictographique avec sortie textuelle ou vocale. In Conférence-HANDICAP 2010, pp 155--160.
ZN. Belhabib & A. Rojbi, 2010. Conception d'un dispositif de pointage-navigation accessible et adaptatif pour plusieurs cas d'handicap moteur. In Conférence-HANDICAP 2010.
M. Zbakh, H. Daassi-Gnaba & J. Lopez Krahe, 2010. Tête parlante codeuse en LPC pour les sourds et les malentendants. In Conférence-HANDICAP 2010, pp 16--21.
Nicolas Ech Chafai, Magalie Ochs, Christopher Peters, Maurizio Mancini, Elisabetta Bevacqua & Catherine Pelachaud, 2007. Des agents virtuels sociaux et émotionnels pour l'interaction humain-machine. In Proceedings of the 19th International Conference of the Association Francophone d'Interaction Homme-Machine, pp 207--214.
Ala Goyé, 2003. Interfaces multimodales pour un assistant au voyage. Caen.
Dirk Heylen, Elisabetta Bevacqua, Marion Tellier & Catherine Pelachaud, 2007. Searching for prototypical facial feedback signals. In Intelligent Virtual Agents, pp 147--153. Paris, France.
Magalie Ochs, David Sadek & Catherine Pelachaud, 2007. Vers un modèle formel des émotions d'un agent rationnel dialoguant empathique. Paris, France.
Magalie Ochs, Catherine Pelachaud & David Sadek, 2006. Les conditions de déclenchement des émotions d’un agent conversationnel empathique. WACA’2006. Toulouse.
Jean-Hugues Réty, Jean-Claude Martin, Catherine Pelachaud & Nelly Bensimon, 1998. Coopération entre un hypermédia adaptatif éducatif et un agent pédagogique. Actes de H2PTM. Volume 3, Paris.
Philippe Gorce, Nasser Rezzoug, Alexis Heloir, Sylvie Gibet, Nicolas Courty, Jean-François Kamp, Franck Multon & Catherine Pelachaud, 2006. Agent virtuel signeur - Aide à la communication pour personnes sourdes. In 4ème conférence pour l'essor des technologies d'assistance, Handicap 2006, pp 1. Paris.
Hannes Vilhjálmsson, Nathan Cantelmo, Justine Cassell, Nicolas E Chafai, Michael Kipp, Stefan Kopp, Maurizio Mancini, Stacy Marsella, Andrew N Marshall, Catherine Pelachaud et al., 2007. The behavior markup language: Recent developments and challenges. In Intelligent virtual agents, pp 99--111. Paris.
Yong Chen, 2015. Analyse et interprétation d'images à l'usage des personnes non-voyantes - Application à la génération automatique d'images en relief à partir d'équipements banalisés. Paris 8, supervised by Jaime Lopez Krahe. PDF
Mohammed Zbakh, 2014. Apports du numérique dans les outils de communication des personnes handicapées : développement d'un dictionnaire inversé : Langue des Signes Française -> Français. Paris 8, supervised by Jaime Lopez Krahe. PDF
Silvia Fajardo Flores, 2014. Modélisation des interactions non visuelles dans un environnement de travail mathématique visuel et non visuel synchronisé. Paris 8, supervised by Dominique Archambault.
G. Uzan, M. Mbodj, C. Megard & L. Brunet, 2011. Besoins des voyageurs dans les transports collectifs : perceptions, interactions, et place du haptique dans la prise d'information. Projet TICTACT.
G. Uzan, E. Parvanova & Mballo Seck, 2011. Plateforme et serious game sur le recrutement ou le maintien dans l'emploi des personnes handicapées : ergonomie, spécifications et cahiers des charges. pp 77.