{"id":845,"date":"2020-11-12T18:44:25","date_gmt":"2020-11-12T18:44:25","guid":{"rendered":"http:\/\/humaaine-chaireia.fr\/?page_id=845"},"modified":"2025-03-04T15:08:36","modified_gmt":"2025-03-04T15:08:36","slug":"publications","status":"publish","type":"page","link":"https:\/\/humaaine-chaireia.fr\/index.php\/publications\/","title":{"rendered":"PUBLICATIONS"},"content":{"rendered":"\n<ul class=\"wp-block-list\"><\/ul>\n\n\n\n<p><br><\/p>\n\n\n\n<h2 class=\"wp-block-heading\"><strong>Laurence Devillers<\/strong><\/h2>\n\n\n\n<p>Laurence Devillers chairs the HUMAAINE AI chair. <br>She is a professor of computer science applied to the social sciences at Sorbonne Universit\u00e9, and a researcher at the Laboratoire Interdisciplinaire des Sciences du Num\u00e9rique (LISN) of the CNRS.<br><br><\/p>\n\n\n\n<h2 class=\"wp-block-heading\"><strong>Books<\/strong><\/h2>\n\n\n\n<p><\/p>\n\n\n\n<h2 class=\"wp-block-heading\"><strong>Articles<\/strong><\/h2>\n\n\n\n<div class=\"wp-block-columns is-layout-flex wp-container-core-columns-is-layout-1 wp-block-columns-is-layout-flex\">\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\">\n<p><strong>2024<\/strong><br><br>Devillers L., Deschamps-Berger T., Lamel L.<br><strong>Emotions &rsquo;In the Wild&rsquo; of Emergency Call Center Callers: Towards a Speech Emotion Recognition System<\/strong><br>Langages 234 (2), p. 117-134<br><br>Popescu A., Lamel L., Vasilescu I., Devillers L.<br><strong>An investigation of syllable position \/l\/ allophony in L2 English learners using Word Error Rate as an index of phonetic proficiency<\/strong><br>13th International Seminar on Speech Production (ISSP2024)<br><br>Kalashnikova N., Vasilescu I., Devillers L.<br><strong>Linguistic nudges and verbal interaction with robots, smart-speakers, and humans<\/strong><br>Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024), p. 
10555-10564<br><br>Dengel A., Devillers L., Vargo A.W.<br><strong>The Future of Education with AI<\/strong><br>NII Shonan Meeting Report, 27 p.<br><br>Devillers L.<br><strong>Le langage non responsable des syst\u00e8mes d&rsquo;intelligence artificielle (IA) g\u00e9n\u00e9rative<\/strong><br>Champ lacanien 28 (1), p. 133-138<br><br>Popescu A., Lamel L., Vasilescu I., Devillers L.<br><strong>Automatic Speech Recognition with parallel L1 and L2 acoustic phone models to evaluate \/l\/ allophony in L2 English speech production<\/strong><br>Proc. Interspeech 2024, p. 1015-1019<br><br>Courtier-Orgogozo V., Devillers L.<br><strong>La sociedad ante los avances de las ciencias y de las t\u00e9cnicas<\/strong><br>Futuribles 458 (1), p. 25-44<br><br>Devillers L., Deschamps-Berger T., Lamel L.<br><strong>Les \u00e9motions <em>in the wild<\/em><\/strong> <strong>des appelants d&rsquo;un centre d&rsquo;appels d&rsquo;urgence : vers un syst\u00e8me de d\u00e9tection des \u00e9motions dans la voix<\/strong><br>Langages 234 (2), p. 117-134<br><br>Courtier-Orgogozo V., Devillers L.<br><strong>Society and the challenge of scientific and technological advances<\/strong><br>Futuribles 458 (1), p. 25-44<br><br><\/p>\n<\/div>\n\n\n\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\">\n<p><strong>2023<\/strong><br><br>Aym\u00e9 S., Choquet R., Devillers L., Gilard M., Kelly-Irving M., Livartowski A., Lukacs B., Polton D.<br><strong>B\u00e9n\u00e9fices et risques de l&rsquo;utilisation des donn\u00e9es de sant\u00e9 \u00e0 des fins de recherche<\/strong><br>Conseil scientifique consultatif du Health Data Hub, rapport &#8211; Octobre 2023<br><br>Kalashnikova N., Hutin M., Vasilescu I., Devillers L.<br><strong>Do We Speak to Robots Looking Like Humans As We Speak to Humans? 
A Study of Pitch in French Human-Machine and Human-Human Interactions<\/strong><br>Companion Publication of the 25th International Conference on Multimodal Interaction, p. 141-145<br><br>Ispas A.R., Deschamps-Berger T., Devillers L.<br><strong>A multi-task, multi-modal approach for predicting categorical and dimensional emotions<\/strong><br>Companion Publication of the 25th International Conference on Multimodal Interaction, p. 311-317<br><br>Deschamps-Berger T., Lamel L., Devillers L.<br><strong>Multiscale contextual learning for speech emotion recognition in emergency call center conversations<\/strong><br>Companion Publication of the 25th International Conference on Multimodal Interaction, p. 337-343<br><br>Devillers L., Cowie R.<br><strong>Ethical considerations on affective computing: an overview<\/strong><br>Proceedings of the IEEE 111 (10), p. 1445-1458<br><br>Feng Y., Devillers L.<br><strong>End-to-end continuous speech emotion recognition in real-life customer service call center conversations<\/strong><br>11th International Conference on Affective Computing and Intelligent Interaction Workshops and Demos (ACIIW), p. 1-8<br><br>Vargo A., Tag B., Hutin M., Abou-Khalil V., Ishimaru S., Augereau O., Dingler T., Iwata M., Kise K., Devillers L., Dengel A.<br><strong>Intelligence augmentation: future directions and ethical implications in HCI<\/strong><br>IFIP Conference on Human-Computer Interaction, p. 644-649<br><br>Kalashnikova N., Hutin M., Vasilescu I., Devillers L.<br><strong>The effect of human-likeliness in French robot-directed speech: A study of speech rate and fluency<\/strong><br>International Conference on Text, Speech, and Dialogue, p. 249-257<br><br>Grinbaum A., Chatila R., Devillers L., Martin C., Kirchner C., Perrin J., Tessier C.<br><strong>Syst\u00e8mes d&rsquo;intelligence artificielle g\u00e9n\u00e9rative : enjeux d&rsquo;\u00e9thique<\/strong><br>Comit\u00e9 national pilote d&rsquo;\u00e9thique du num\u00e9rique, Avis 7<br><br>Kobylyanskaya S., 
Vasilescu I., Devillers L., Augereau O.<br><strong>Vers la compr\u00e9hension des difficult\u00e9s de lecture en L2 \u00e0 travers des param\u00e8tres acoustiques et de mouvement des yeux<\/strong><br>Environnements Informatiques pour l&rsquo;Apprentissage Humain (EIAH), 7 p.<br><br>Deschamps-Berger T., Lamel L., Devillers L.<br><strong>Exploring attention mechanisms for multimodal emotion recognition in an emergency call center corpus<\/strong><br>ICASSP 2023 - 2023 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), p. 1-5<br><br>Devillers L.<br><strong>Les syst\u00e8mes d&rsquo;Intelligence Artificielle et le langage. Enjeux d&rsquo;\u00e9thique<\/strong><br>Raison pr\u00e9sente 228 (4), p. 65-72<br><br>Kalashnikova N., Hutin M., Vasilescu I., Devillers L.<br><strong>Effet de l&rsquo;anthropomorphisme des machines sur le fran\u00e7ais adress\u00e9 aux robots : \u00c9tude du d\u00e9bit de parole et de la fluence<\/strong><br>ATALA, p. 92-100<\/p>\n<\/div>\n<\/div>\n\n\n\n<div class=\"wp-block-columns is-layout-flex wp-container-core-columns-is-layout-2 wp-block-columns-is-layout-flex\">\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:50%\">\n<p><strong>2022<\/strong><br><br>Deschamps-Berger T., Lamel L., Devillers L.<br><strong>Investigating Transformer Encoders and Fusion Strategies for Speech Emotion Recognition in Emergency Call Center Conversations<\/strong><br>Companion Publication of the 2022 International Conference on Multimodal Interaction, p. 144-153<br><br>Devillers L.<br><strong>Social and emotional robots in healthcare<\/strong><br>Bulletin de l&rsquo;Acad\u00e9mie Nationale de M\u00e9decine 206 (8), 
p. 1122-1123<br><br>Hutin M., Kobylyanskaya S., Devillers L.<br><strong>Nudges in Technology-Mediated Knowledge Transfer: Two Experimental Designs<\/strong><br>Adjunct Proceedings of the 2022 ACM International Joint Conference on Pervasive and Ubiquitous Computing and the 2022 ACM International Symposium on Wearable Computers, p. 267-273<br><br>Vargo A., Abou Khalil V., Ishimaru S., Tag B., Hutin M., Dengel A., Devillers L., Kise K.<br><strong>Delivering sensing technologies for education and learning<\/strong><br>Adjunct Proceedings of the 2022 ACM International Joint Conference on Pervasive and Ubiquitous Computing and the 2022 ACM International Symposium on Wearable Computers, p. 263-266<br><br>Vargo A., Iwata M., Hutin M., Kobylyanskaya S., Vasilescu I., Augereau O., Watanabe K., Ishimaru S., Tag B., Dingler T., Kise K., Devillers L., Dengel A.<br><strong>Learning cyclotron: An ecosystem of knowledge circulation<\/strong><br>Adjunct Proceedings of the 2022 ACM International Joint Conference on Pervasive and Ubiquitous Computing and the 2022 ACM International Symposium on Wearable Computers, p. 308-312<br><br>Kalashnikova N., Pajak S., Le Guel F., Vasilescu I., Serrano G., Devillers L.<br><strong>Corpus design for studying linguistic nudges in human-computer spoken interactions<\/strong><br>Thirteenth Language Resources and Evaluation Conference (LREC 2022), p. 4079-4087<br><br>El Baha M., Augereau O., Kobylyanskaya S., Vasilescu I., Devillers L.<br><strong>Eye got it: a system for automatic calculation of the eye-voice span<\/strong><br>International Workshop on Document Analysis Systems, p. 713-725<br><br>Devillers L., Blandin-Obernesser A., Gentina E., Le Guel F., Robert M., Chardel P.A.<br><strong>Portrait(s) de France(s) : Num\u00e9rique, quels enjeux pour la soci\u00e9t\u00e9 ?<\/strong><br>The Conversation France<br><br>Devillers L., Grinbaum A.<br><strong>La parole des agents artificiels : questions 
\u00e9thiques<\/strong><br><em>Pour une \u00e9thique du num\u00e9rique<\/em>, Presses Universitaires de France, p. 185-198<br><br>Chatila R., Devillers L., Dognin-Sauze K., Ganascia J.G., Gornet M., Pronesti A., Tessier C.<br><strong>Pourquoi la reconnaissance faciale, posturale et comportementale soul\u00e8ve-t-elle des questionnements \u00e9thiques ?<\/strong><br><em>Pour une \u00e9thique du num\u00e9rique,<\/em> Presses Universitaires de France, p. 209-222<br><br>Devillers L.<br><strong>Les robots sociaux et affectifs en sant\u00e9<\/strong><br>Bulletin de l&rsquo;Acad\u00e9mie Nationale de M\u00e9decine 206 (8), p. 1122-1123<br><br><\/p>\n<\/div>\n\n\n\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:50%\">\n<p><strong>2021<\/strong><\/p>\n<\/div>\n<\/div>\n\n\n\n<h2 class=\"wp-block-heading\"><strong>Ioana Vasilescu<\/strong><\/h2>\n\n\n\n<p>Ioana Vasilescu is a senior researcher (directrice de recherche) at the Laboratoire Interdisciplinaire des Sciences du Num\u00e9rique (LISN) of the CNRS.<\/p>\n\n\n\n<p><br><\/p>\n\n\n\n<h2 class=\"wp-block-heading\"><strong>Gilles Adda<\/strong><\/h2>\n\n\n\n<p>Gilles Adda is a senior researcher (directeur de recherche) at the Laboratoire Interdisciplinaire des Sciences du Num\u00e9rique (LISN) of the CNRS.<\/p>\n\n\n\n<p><br><\/p>\n\n\n\n<h2 class=\"wp-block-heading\"><strong>Hugues Ali Mehenni<\/strong><\/h2>\n\n\n\n<p>Hugues Ali Mehenni is a machine learning researcher who completed his PhD at the Laboratoire Interdisciplinaire des Sciences du Num\u00e9rique (LISN) of the CNRS.<\/p>\n\n\n\n<p><br><\/p>\n\n\n\n<h2 class=\"wp-block-heading\"><strong>Eric Bilinski<\/strong><\/h2>\n\n\n\n<p>Eric Bilinski is a software engineer at the Laboratoire Interdisciplinaire des Sciences du Num\u00e9rique (LISN) of the CNRS.<\/p>\n\n\n\n<p><br><\/p>\n\n\n\n<h2 class=\"wp-block-heading\"><strong>Serge Pajak<\/strong><\/h2>\n\n\n\n<p>Serge Pajak is an economist at the R\u00e9seaux, Innovation, Territoires et Mondialisation (RITM) laboratory of Universit\u00e9 Paris-Saclay.<\/p>\n\n\n\n<p><br><\/p>\n\n\n\n<h2 class=\"wp-block-heading\"><strong>Fabrice Le Guel<\/strong><\/h2>\n\n\n\n<p>Fabrice Le Guel is a researcher at the R\u00e9seaux, Innovation, Territoires et Mondialisation (RITM) laboratory and an associate professor (ma\u00eetre de conf\u00e9rences) of economics at Universit\u00e9 Paris-Saclay.<\/p>\n\n\n\n<p><br><\/p>\n\n\n\n<h2 class=\"wp-block-heading\"><strong>Th\u00e9o Marquis<\/strong><\/h2>\n\n\n\n<p>Th\u00e9o Marquis is an economist who completed his PhD at the R\u00e9seaux, Innovation, Territoires et Mondialisation (RITM) laboratory of Universit\u00e9 Paris-Saclay.<\/p>\n\n\n\n<p><br><\/p>\n\n\n\n<h2 class=\"wp-block-heading\"><strong>Sofiya Kobylyanskaya<\/strong><\/h2>\n\n\n\n<p>Sofiya Kobylyanskaya is a researcher who completed her PhD at the Laboratoire Interdisciplinaire des Sciences du Num\u00e9rique (LISN) of the CNRS.<\/p>\n\n\n\n<p><br><\/p>\n","protected":false},"excerpt":{"rendered":"<p>Laurence Devillers Laurence Devillers chairs the HUMAAINE AI chair. 
She is a professor of computer science applied to the social sciences at Sorbonne Universit\u00e9, and&hellip;<\/p>\n","protected":false},"author":2,"featured_media":0,"parent":0,"menu_order":0,"comment_status":"closed","ping_status":"closed","template":"","meta":{"footnotes":""},"class_list":["post-845","page","type-page","status-publish","hentry"],"_links":{"self":[{"href":"https:\/\/humaaine-chaireia.fr\/index.php\/wp-json\/wp\/v2\/pages\/845"}],"collection":[{"href":"https:\/\/humaaine-chaireia.fr\/index.php\/wp-json\/wp\/v2\/pages"}],"about":[{"href":"https:\/\/humaaine-chaireia.fr\/index.php\/wp-json\/wp\/v2\/types\/page"}],"author":[{"embeddable":true,"href":"https:\/\/humaaine-chaireia.fr\/index.php\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/humaaine-chaireia.fr\/index.php\/wp-json\/wp\/v2\/comments?post=845"}],"version-history":[{"count":4,"href":"https:\/\/humaaine-chaireia.fr\/index.php\/wp-json\/wp\/v2\/pages\/845\/revisions"}],"predecessor-version":[{"id":1585,"href":"https:\/\/humaaine-chaireia.fr\/index.php\/wp-json\/wp\/v2\/pages\/845\/revisions\/1585"}],"wp:attachment":[{"href":"https:\/\/humaaine-chaireia.fr\/index.php\/wp-json\/wp\/v2\/media?parent=845"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}