Author: Dagenais, Mylène

DIC-ISC-CRIA Seminar - November 27, 2025, by Chloe CLAVEL

Chloe CLAVEL - November 27, 2025, at 10:30 a.m., room PK-5115 (201 President-Kennedy Ave., 5th floor)

TITLE: Affective Computing and Emotional Understanding: Beyond the Cold Logic of the Turing Test

ABSTRACT

This talk examines how affective computing can transcend the traditional boundaries of the Turing Test by incorporating emotional understanding and socio-affective intelligence into AI systems. While the classic Turing Test evaluates a machine's ability to exhibit intelligent behavior indistinguishable from humans through purely linguistic exchanges, I will argue for expanding this paradigm to include emotional and social competencies. Drawing from recent advances in multimodal emotion recognition, social signal processing, and human-agent interaction, I will present computational models that capture not just what people say, but how they feel and the social context of their interactions. The discussion will cover our work on modeling socio-emotional behaviors including trust, engagement, and social stances in conversational AI systems, demonstrating how machines can be designed to recognize, understand, and appropriately respond to human emotions. I will address the challenges of moving beyond "cold logic" to develop socially intelligent systems that can navigate the nuanced landscape of human emotional expression, ultimately arguing that true artificial intelligence must incorporate affective understanding to be genuinely useful and acceptable to human users.
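
As a toy illustration of one common design in multimodal emotion recognition (a sketch only, not Clavel's models; the feature extractors are stubbed with random vectors), the following Python fragment late-fuses per-utterance text and audio features into a single emotion classifier:

    # Late-fusion sketch: combine "what was said" with "how it was said".
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n_samples, text_dim, audio_dim = 200, 32, 16

    X_text = rng.normal(size=(n_samples, text_dim))    # stand-in for text embeddings
    X_audio = rng.normal(size=(n_samples, audio_dim))  # stand-in for prosodic features
    y = rng.integers(4, size=n_samples)                # 4 toy emotion classes

    # Late fusion: concatenate modality features, then classify.
    X = np.concatenate([X_text, X_audio], axis=1)
    clf = LogisticRegression(max_iter=1000).fit(X[:150], y[:150])
    print("held-out accuracy:", clf.score(X[150:], y[150:]))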

BIOGRAPHY

Chloe CLAVEL is a Senior Researcher (Directrice de recherche) at INRIA Paris in the ALMAnaCH team, focusing on Affective Computing and Artificial Intelligence. Until October 2023, she was Professor of Affective Computing at LTCI, Telecom-Paris, Institut Polytechnique de Paris, where she coordinated the Social Computing team. Her research lies at the intersection of multiple disciplines, including speech and natural language processing, machine learning, and social robotics. Clavel studies computational models of socio-emotional behaviors (sentiments, social stances, engagement, trust) in both human-human and human-agent interactions. Her work spans multimodal emotion recognition, opinion analysis, social signal processing, and conversational AI systems. She is motivated by applications in health and education where affective computing can empower people and improve quality of life. Clavel has contributed to numerous European and national collaborative projects and serves as a program chair for major AI conferences.

REFERENCES

Chenain, L., Bachoud-Lévi, A.-C., & Clavel, C. (2024). Acoustic Characterization of Huntington's Disease Emotional Expression: An Explainable AI Approach. ACIIW 2024.

Clavel, C., Labeau, M., & Cassell, J. (2022). Socio-conversational systems: Three challenges at the crossroads of fields. Frontiers in Robotics and AI, 9, 737173.

Guo, Y., Suchanek, F., & Clavel, C. (2024). The Curious Decline of Linguistic Diversity: Training Language Models on Synthetic Text. Findings of NAACL.

Guibon, G., Labeau, M., Flamein, H., Lefeuvre, L., & Clavel, C. (2021). Few-Shot Emotion Recognition in Conversation with Sequential Prototypical Networks. EMNLP.

DIC-ISC-CRIA Seminar - November 20, 2025, by Ari HOLTZMAN

Ari HOLTZMAN - November 20, 2025, at 10:30 a.m., room PK-5115 (201 President-Kennedy Ave., 5th floor)

TITLE: Articulating the Ineffable: The Analytic Turn in Generative AI

ABSTRACT

Generative AI has taken an analytic turn: we now cultivate models from objectives and data, then try to understand what we’ve grown. Current approaches to studying LLMs, focused on engineering progress or on mechanistic explanations at the implementation level, are insufficient for grasping their emergent behaviors. I will discuss what it means for interpretability approaches to be predictive rather than mechanistic, the changing landscape of machine communication, and efforts to identify fundamental laws that govern LLM behavior. I will argue that developing a precise behavioral vocabulary and conceptual frameworks is the only way to turn the ‘fieldwork’ of finding surface regularities in LLMs into a science of LLMs. The guiding questions are basic, empirical, and exploratory: what do models consistently do, what do they reliably miss, and how do they incorporate and store new information? Along the way we’ll discover that AI has been given a new mandate: to articulate the ineffable, by describing aspects of communication and computation that we previously had no words for because they were stuck too deep inside human cognition to be easily referenced.

BIOGRAPHY

Ari HOLTZMAN is Assistant Professor of Computer Science and Data Science at the University of Chicago, where he directs the Conceptualization Lab. His research develops new conceptual frameworks for understanding generative models, treating them as complex systems rather than as traditional engineering artifacts. He introduced nucleus sampling, a text generation algorithm used in deployed systems including the OpenAI API.
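
Since nucleus sampling is mentioned above, here is a minimal NumPy sketch of the idea from the ICLR paper cited below: sample only from the smallest set of tokens whose cumulative probability exceeds a threshold p (illustrative, not a production decoder):

    import numpy as np

    def nucleus_sample(logits, p=0.9, rng=None):
        rng = rng or np.random.default_rng()
        probs = np.exp(logits - logits.max())            # softmax, numerically stable
        probs /= probs.sum()
        order = np.argsort(probs)[::-1]                  # tokens by descending probability
        cumulative = np.cumsum(probs[order])
        cutoff = np.searchsorted(cumulative, p) + 1      # smallest nucleus covering mass p
        nucleus = order[:cutoff]
        weights = probs[nucleus] / probs[nucleus].sum()  # renormalize inside the nucleus
        return int(rng.choice(nucleus, p=weights))

    logits = np.array([2.0, 1.0, 0.5, -1.0, -3.0])
    print(nucleus_sample(logits, p=0.9, rng=np.random.default_rng(0)))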

REFERENCES

Holtzman, A., et al. (2023). Generative Models as a Complex Systems Science. arXiv:2308.00189.

Holtzman, A., Buys, J., Du, L., Forbes, M., & Choi, Y. (2019). The curious case of neural text degeneration. International Conference on Learning Representations (ICLR).

West, P., Holtzman, A., Hessel, J., Chandu, K., & Choi, Y. (2021). Symbolic knowledge distillation: from general language models to commonsense models. Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics.

Holtzman, A., West, P., Shwartz, V., Choi, Y., & Zettlemoyer, L. (2021). Surface form competition: Why the highest probability answer isn't always right. Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing.

DIC-ISC-CRIA Seminar - November 13, 2025, by Rufin VANRULLEN

Rufin VANRULLEN - November 13, 2025, at 10:30 a.m., room PK-5115 (201 President-Kennedy Ave., 5th floor)

TITLE: Global Workspace Theory Meets Deep Learning: Consciousness as Computational Architecture or Biological Phenomenon?

ABSTRACT

This talk examines the convergence of Global Workspace Theory and deep learning architectures in the quest to understand and potentially implement consciousness in artificial systems. I will present our recent work on Global Latent Workspace (GLW) models that bridge computational implementations of consciousness theories with state-of-the-art machine learning. The discussion will explore how these architectures integrate multimodal information processing through a central latent hub, enabling cross-modal translation and globally accessible representations. I will address the fundamental question of whether consciousness emerges from specific computational architectures or requires biological substrates, drawing on evidence from our implementations that combine global workspace dynamics with sensorimotor contingency theory. The talk will also examine the implications for AI consciousness assessment, discussing indicator properties derived from neuroscientific theories and their application to current AI systems. Finally, I will consider the ethical dimensions of potentially conscious AI and the importance of rigorous empirical approaches to machine consciousness research.
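
To make the "central latent hub" idea concrete, here is a toy PyTorch sketch (an illustration under assumed dimensions and module names, not the GLW implementation): modality-specific encoders project into one shared latent space and decoders map back out, so cross-modal translation is simply a route through the hub:

    import torch
    import torch.nn as nn

    LATENT = 16  # dimensionality of the shared workspace (arbitrary here)

    class Workspace(nn.Module):
        def __init__(self, dim_vision=64, dim_language=32):
            super().__init__()
            self.enc = nn.ModuleDict({
                "vision": nn.Linear(dim_vision, LATENT),
                "language": nn.Linear(dim_language, LATENT),
            })
            self.dec = nn.ModuleDict({
                "vision": nn.Linear(LATENT, dim_vision),
                "language": nn.Linear(LATENT, dim_language),
            })

        def translate(self, x, src, dst):
            # Encode from the source modality, decode into the target modality.
            return self.dec[dst](torch.tanh(self.enc[src](x)))

    model = Workspace()
    image_features = torch.randn(1, 64)
    print(model.translate(image_features, "vision", "language").shape)  # (1, 32)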

BIOGRAPHY

Rufin VANRULLEN is CNRS Research Director in neuroscience and artificial intelligence at the Centre de Recherche Cerveau et Cognition (CerCo) and holds a research chair at the Artificial and Natural Intelligence Toulouse Institute (ANITI). His research focuses on brain-inspired AI architectures, visual perception, attention, and consciousness. After studying mathematics and computer science, he completed his PhD in cognitive science with Simon Thorpe, then conducted postdoctoral research at Caltech with Christof Koch on visual attention mechanisms. He received the CNRS Bronze Medal in 2007 and was awarded a 2022 ERC Advanced Grant for his project “GLoW – The Global Latent Workspace.” VanRullen has authored over 200 scientific papers and is a leading researcher in computational approaches to consciousness and neural oscillations in perception.

REFERENCES

Kuske, N., & VanRullen, R. (2024). Consciousness in Artificial Systems: Bridging Global Workspace and Sensorimotor Theory in In-Silico Models. arXiv preprint.

Devillers, B., Maytié, L., & VanRullen, R. (2024). Semi-Supervised Multimodal Representation Learning Through a Global Workspace. IEEE Transactions on Neural Networks and Learning Systems.

Butlin, P., Long, R., Elmoznino, E., et al. [including VanRullen, R.] (2023). Consciousness in Artificial Intelligence: Insights from the Science of Consciousness. arXiv preprint.

VanRullen, R., & Kanai, R. (2021). Deep learning and the global workspace theory. Trends in Neurosciences, 44(9), 692-704.

DIC-ISC-CRIA Seminar - November 6, 2025, by Cameron JONES

Cameron JONES - November 6, 2025, at 10:30 a.m., room PK-5115 (201 President-Kennedy Ave., 5th floor)

TITLE: Do LLMs pass the Turing test? And what does it mean if they do?

ABSTRACT

Large Language Models (LLMs) seem well designed for the Turing test in that they can produce fluid, naturalistic text. Many have suggested that they would pass the test or implicitly already have. We addressed this question empirically by evaluating several LLMs in a standard three-party, five-minute Turing test. Two models, when prompted to adopt a humanlike persona, achieved a pass rate of 50%, suggesting that interrogators were no better than chance at distinguishing between humans and LLMs. One of these models (GPT-4.5) was judged to be human 73% of the time, significantly more often than the real humans it was being compared to. These results suggest that LLMs pass the Turing test, but what does that mean? I will discuss potential interpretations of these results, including whether they suggest that LLMs are intelligent, produce humanlike behaviour, or are merely exploiting superficial cues.
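
As a sketch of how chance-level discrimination can be tested (the counts below are made up for illustration; the real trial numbers are in the references), one can run a binomial test against the 50% baseline:

    from scipy.stats import binomtest

    # Hypothetical data: in 100 trials, interrogators picked the AI as "human" 50 times.
    result = binomtest(k=50, n=100, p=0.5, alternative="two-sided")
    print(f"pass rate = {result.statistic:.2f}, p-value = {result.pvalue:.3f}")
    # A large p-value means chance-level discrimination cannot be rejected.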

BIOGRAPHY

Cameron JONES is an Assistant Professor in the Psychology Department at Stony Brook University. His research focuses on the intersection between psychology and AI: using paradigms from psychology to compare human and AI behaviour, using AI to understand how people interact with each other, and investigating the impact that AI might have on our psychology in the longer term. His recent work has focused on evaluating social intelligence in LLMs (including theory of mind and more interactive social tasks), investigating the extent to which AI systems can manipulate and deceive people (as well as the role that trust and rapport play in those interactions), and evaluating LLMs in the Turing test.

REFERENCES

Jones, C. R., & Bergen, B. K. (2025). Large language models pass the Turing test. arXiv preprint arXiv:2503.23674.

Jones, C. R., Rathi, I., Taylor, S., & Bergen, B. K. (2025). People cannot distinguish GPT-4 from a human in a Turing test. In Proceedings of the 2025 ACM Conference on Fairness, Accountability, and Transparency (pp. 1615-1639).

DIC-ISC-CRIA Seminar - October 30, 2025, by Yonatan BISK

Yonatan BISK - October 30, 2025, at 10:30 a.m., room PK-5115 (201 President-Kennedy Ave., 5th floor)

TITLE: Embodied Language: Evaluating LLMs in the Real World

ABSTRACT

This talk examines the critical challenge of evaluating Large Language Models in interactive, embodied settings where language must connect to physical actions and environmental understanding. Drawing from recent research in embodied AI and language grounding, I will explore how current LLMs perform when tasked with interpreting language instructions that require spatial reasoning, object manipulation, and social interaction. The discussion will cover methodological frameworks for assessing language-to-action capabilities, including benchmarks that move beyond traditional text-based evaluation to encompass multimodal environments where language commands must be translated into executable actions. The talk will address fundamental questions about what it means for AI systems to truly understand language in the context of physical agency, examining both the successes and systematic failures of LLMs in interactive settings that require grounded communication and sensorimotor integration.

BIOGRAPHY

Yonatan BISK is Assistant Professor at Carnegie Mellon University's Language Technologies Institute and Robotics Institute, where he founded the REAL Center (Robotics, Embodied AI, and Learning). His research focuses on grounded and embodied natural language processing, exploring how language interacts with vision, action, and reasoning in physical environments. Bisk earned his PhD from the University of Illinois at Urbana-Champaign with a dissertation on unsupervised grammar induction, and held postdoctoral positions at USC's Information Sciences Institute, the University of Washington, and the Allen Institute for AI. He has been a visiting researcher at Microsoft Research and Meta AI. He teaches courses on "Talking to Robots" and "Multimodal Machine Learning."

REFERENCES

Mecattaf, M. G., Slater, B., Tešić, M., Prunty, J., Voudouris, K., & Cheke, L. G. (2024). A little less conversation, a little more action, please: Investigating the physical common-sense of LLMs in a 3D embodied environment. arXiv.

Wu, Y., Min, S. Y., Bisk, Y., Salakhutdinov, R., Azaria, A., Li, Y., Mitchell, T., & Prabhumoye, S. (2023). Plan, Eliminate, and Track – Language Models are Good Teachers for Embodied Agents. arXiv.

Bisk, Y., Zellers, R., Gao, J., & Choi, Y. (2020). PIQA: Reasoning about Physical Commonsense in Natural Language. In Proceedings of the AAAI Conference on Artificial Intelligence, 34, 7432–7439.

Shridhar, M., Thomason, J., Gordon, D., Bisk, Y., Han, W., Mottaghi, R., Zettlemoyer, L., & Fox, D. (2020). ALFRED: A Benchmark for Interpreting Grounded Instructions for Everyday Tasks. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).

Bisk, Y., Holtzman, A., Thomason, J., et al. (2020). Experience Grounds Language. EMNLP.

DIC-ISC-CRIA Seminar - October 23, 2025, by Terry SEJNOWSKI

Terry SEJNOWSKI - October 23, 2025, at 10:30 a.m., room PK-5115 (201 President-Kennedy Ave., 5th floor)

TITLE: The Convergence of Neuroscience and Artificial Intelligence

ABSTRACT

This talk explores the revolutionary convergence of neuroscience and artificial intelligence in the emerging field of NeuroAI. Drawing from recent breakthroughs in large language models like ChatGPT, I will examine how computational principles derived from brain function are informing next-generation AI systems, while simultaneously showing how AI tools are advancing our understanding of neural computation. The discussion will cover the bidirectional flow of insights between transformer architectures and cortical traveling waves, demonstrating how self-attention mechanisms in AI parallel the brain's encoding of temporal context. I will present evidence from our recent work on predictive sequence learning in the hippocampus and how neural prediction errors mirror computational processes in modern AI. The talk will address fundamental questions about the embodied Turing test and whether AI systems can achieve the sensorimotor intelligence that evolved over 500 million years. Finally, I will discuss the implications of this convergence for understanding consciousness, memory consolidation during sleep, and the future of human-AI collaboration in scientific discovery.
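
For the transformer side of this parallel, here is a minimal NumPy sketch of scaled dot-product self-attention, the mechanism that pools context across a sequence (learned query/key/value projections are omitted for brevity; this is an illustration, not the talk's models):

    import numpy as np

    def self_attention(X):
        d = X.shape[-1]
        scores = X @ X.T / np.sqrt(d)                   # pairwise query-key similarity
        weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
        weights /= weights.sum(axis=-1, keepdims=True)  # softmax over the context
        return weights @ X                              # context-weighted mixture

    X = np.random.default_rng(0).normal(size=(5, 8))    # 5 tokens, 8 dimensions
    print(self_attention(X).shape)                      # (5, 8)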

BIOGRAPHY

Terry SEJNOWSKI is Francis Crick Professor at the Salk Institute for Biological Studies and Distinguished Professor at UC San Diego, where he co-directs the Institute for Neural Computation. A computational neuroscientist, he co-invented the Boltzmann machine with Geoffrey Hinton in the 1980s. Sejnowski is President of the Neural Information Processing Systems (NeurIPS) Foundation and founding editor-in-chief of Neural Computation (MIT Press). He has authored over 500 scientific papers and 12 books, including "The Deep Learning Revolution" (2018) and "ChatGPT and the Future of AI" (2024). Recent honors include the 2024 Brain Prize for computational neuroscience, the 2022 Gruber Neuroscience Prize, and election to all four U.S. National Academies. He contributed to the NIH BRAIN Initiative and co-created the online course "Learning How to Learn."

REFERENCES

Sejnowski, T. J. (2025). Thinking About Thinking: AI offers theoretical insights into human memory. The Transmitter.

Muller, L., Churchland, P. S., & Sejnowski, T. J. (2024). Transformers and cortical waves: encoders for pulling in context across time. Trends in Neurosciences.

Chen, Y., Zhang, H., Cameron, M., & Sejnowski, T. (2024). Predictive sequence learning in the hippocampal formation. Neuron.

Zador, A., Escola, S., Richards, B., et al. [including Sejnowski, T.] (2023). Catalyzing next-generation Artificial Intelligence through NeuroAI. Nature Communications, 14, 1597.

DIC-ISC-CRIA Seminar - October 16, 2025, by Jean-Baptiste MOURET

Jean-Baptiste MOURET - October 16, 2025, at 10:30 a.m., room PK-5115 (201 President-Kennedy Ave., 5th floor)

TITLE: Embodied Adaptive Agents: Implications for Grounding

ABSTRACT

This talk explores how embodied adaptive agents can inform our understanding of grounding: the fundamental connection between symbols and their meanings in the physical world. Drawing on recent advances in quality-diversity algorithms and behavioral repertoires, I will examine how robots capable of adapting rapidly to physical constraints and environmental changes offer insight into the embodied nature of intelligence. The discussion will focus on how the MAP-Elites algorithm and intelligent trial-and-error learning allow robots to discover diverse behavioral solutions and to adapt within minutes to unforeseen circumstances, including physical damage. I will present evidence from hexapod robots, humanoid platforms, and robotic manipulators demonstrating how embodied adaptation creates meaningful symbol-world mappings through direct sensorimotor experience. The implications extend beyond robotics to cognitive-science questions about how physical embodiment constrains and enables symbol grounding, offering new perspectives on the relationship between adaptive behavior, environmental interaction, and the emergence of meaningful representations in artificial and biological systems.
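
A compact sketch of the MAP-Elites loop on a toy problem (the genotype, behavior descriptor, and fitness below are illustrative assumptions, not the robotics setup): the archive keeps, for each behavioral niche, the best solution found so far.

    import random

    N_NICHES, ITERATIONS = 20, 5000

    def evaluate(x):
        """Return (behavior descriptor, fitness) for a 2-D genotype."""
        niche = min(N_NICHES - 1, int((abs(x[0]) % 1.0) * N_NICHES))
        fitness = -(x[0] ** 2 + x[1] ** 2)  # toy objective: stay near the origin
        return niche, fitness

    archive = {}  # niche index -> (genotype, fitness)
    for i in range(ITERATIONS):
        if i < 200 or not archive:   # random initialization phase
            child = [random.uniform(-2, 2) for _ in range(2)]
        else:                        # mutate a randomly chosen elite
            parent, _ = random.choice(list(archive.values()))
            child = [g + random.gauss(0, 0.2) for g in parent]
        niche, fit = evaluate(child)
        if niche not in archive or fit > archive[niche][1]:
            archive[niche] = (child, fit)  # elite replacement within the niche

    print(f"{len(archive)}/{N_NICHES} niches filled")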

BIOGRAPHY

Jean-Baptiste MOURET is a Research Director at Inria Nancy - Grand Est and a member of the LARSEN team (Lifelong Autonomy and interaction skills for Robots in a Sensing ENvironment). He is also affiliated with the CNRS Loria laboratory. Recipient of an ERC Starting Grant in 2014, he works on machine learning and evolutionary computation for designing adaptive robots that can learn and adapt through trial and error. Mouret developed quality-diversity algorithms, notably MAP-Elites, which allows robots to adapt rapidly to damage and to new situations. His work "Robots that can adapt like animals", published as a cover article in Nature in 2015, demonstrated how robots can recover from damage in under two minutes using behavioral repertoires. He has held visiting positions at Cornell University, the University of Vermont, and the Technical University of Darmstadt.

REFERENCES

Anne, T., & Mouret, J.-B. (2024). Parametric-Task MAP-Elites. Proc. of GECCO. ACM.

Zhong, J., Weistroffer, V., Mouret, J.-B., Colas, F., & Maurice, P. (2023). Workstation Suitability Maps: Generating Ergonomic Behaviors on a Population of Virtual Humans with Multi-task Optimization. IEEE Robotics and Automation Letters.

Kaushik, R., Desreumaux, P., & Mouret, J.-B. (2020). Adaptive Prior Selection for Repertoire-based Online Learning in Robotics. Frontiers in Robotics and AI.

Cully, A., Clune, J., Tarapore, D., & Mouret, J.-B. (2015). Robots that can adapt like animals. Nature, 521(7553), 503-507.

DIC-ISC-CRIA Seminar - October 9, 2025, by Sean TROTT

Sean TROTT - October 9, 2025, at 10:30 a.m., room PK-5115 (201 President-Kennedy Ave., 5th floor)

TITLE: Epistemological challenges in the study of “Theory of Mind” in LLMs and humans

ABSTRACT

Humans reason about others’ beliefs—a key aspect of Theory of Mind. Can this emerge from language statistics alone? I present evidence that large language models show some sensitivity to implied belief states in text, though consistently below human levels. This suggests distributional learning is partly but not fully sufficient for Theory of Mind. I then examine epistemological challenges in treating LLMs as “model organisms,” including construct validity and distinguishing genuine generalization from pattern-matching. I argue that addressing these challenges opens opportunities for methodological innovation in both Cognitive Science and AI.
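
In the spirit of the probability-based evaluations cited below (the vignette and scoring here are simplified assumptions, not the published protocol), one can compare a model's probability of belief-consistent versus belief-inconsistent continuations of a false-belief story:

    import torch
    from transformers import GPT2LMHeadModel, GPT2TokenizerFast

    tok = GPT2TokenizerFast.from_pretrained("gpt2")
    model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

    story = ("Sam puts the chocolate in the drawer and leaves. While Sam is away, "
             "Anna moves it to the cupboard. Sam returns and looks in the")

    def continuation_logprob(prefix, continuation):
        ids = tok(prefix + continuation, return_tensors="pt").input_ids
        n_prefix = tok(prefix, return_tensors="pt").input_ids.shape[1]
        with torch.no_grad():
            logprobs = torch.log_softmax(model(ids).logits[0, :-1], dim=-1)
        targets = ids[0, 1:]
        rows = torch.arange(n_prefix - 1, ids.shape[1] - 1)
        return logprobs[rows, targets[rows]].sum().item()  # log P(continuation | prefix)

    # A belief-sensitive model should favor where Sam *thinks* the chocolate is.
    print("drawer  :", continuation_logprob(story, " drawer"))
    print("cupboard:", continuation_logprob(story, " cupboard"))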

BIOGRAPHY

Sean TROTT, Assistant Teaching Professor in the Department of Cognitive Science and the Computational Social Science program at UC San Diego, uses Large Language Models (LLMs) as "model organisms" to study human language and cognition ("LLM-ology"). He investigates how computational models of language can cast light on meaning representation, Theory of Mind, pragmatic inference, and lexical ambiguity. He combines behavioral experiments, computational modeling, and corpus analysis to study how humans process language and how LLMs can serve as cognitive models. Trott is also the founder of "The Counterfactual," a newsletter exploring the intersection of Cognitive Science, AI, and methodology.

REFERENCES

Jones, C. R., Trott, S., & Bergen, B. (2024). Comparing humans and large language models on an Experimental Protocol Inventory for Theory of Mind Evaluation (EPITOME). Transactions of the Association for Computational Linguistics, 12, 803-819.

Trott, S., Jones, C., Chang, T., Michaelov, J., & Bergen, B. (2023). Do large language models know what humans know? Cognitive Science, 47(7), e13309.

DIC-ISC-CRIA Seminar - September 25, 2025, by Chris POTTS

Chris POTTS - September 25, 2025, at 10:30 a.m., room PK-5115 (201 President-Kennedy Ave., 5th floor)

TITLE: Meaning in Large Language Models: Bridging Formal Semantics, Pragmatics, and Learned Representations

ABSTRACT

In its modern form, semantics (the study of the conventionalized aspects of linguistic meaning) is firmly rooted in symbolic logic. Such logics are also a cornerstone of pragmatics (the study of how people create meaning together in interaction). We can trace this methodological orientation to the roots of these fields in mathematical logic and the philosophy of language. This origin story has profoundly shaped both semantics and pragmatics at every level. How would these fields have looked had they instead been rooted in connectionism? They would have been radically different: the distinction between semantics and pragmatics would fall away, the range of relevant empirical phenomena would expand, and the theories themselves would have greater predictive force. This is not to say that there would be no role for symbolic logic in this hypothetical connectionist “semprag.” Large language models do learn solutions that reflect existing symbolic theories of meaning, and this is key to their success. This points to a future in which the fields of semantics and pragmatics embrace much more of what is happening in AI without, however, giving up their roots in symbolic logic.

BIOGRAPHY

Christopher POTTS is Professor of Linguistics and, by courtesy, of Computer Science at Stanford, and a faculty member in the Stanford NLP Group and the Stanford AI Lab. His research group uses computational methods to explore topics in context-dependent language use, systematicity and compositionality, model interpretability, information retrieval, and foundation model programming. This research combines methods from linguistics, cognitive psychology, and computer science, in the service of both scientific discovery and technology development. Chris is also Co-Founder and Chief Scientist at Bigspin AI, a start-up focused on collaborative development of AI systems.

REFERENCES

Arora, A., Jurafsky, D., & Potts, C. (2024). CausalGym: Benchmarking causal interpretability methods on linguistic tasks. Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics, 14638-14663.

Kallini, J., Papadimitriou, I., Futrell, R., Mahowald, K., & Potts, C. (2024). Mission: Impossible Language Models. Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics.

Huang, J., Wu, Z., Potts, C., Geva, M., & Geiger, A. (2024). RAVEL: Evaluating interpretability methods on disentangling language model representations. Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics, 8669-8687.

DIC-ISC-CRIA Seminar - September 18, 2025, by Roger LEVY

Roger LEVY - September 18, 2025, at 10:30 a.m., room PK-5115 (201 President-Kennedy Ave., 5th floor)

TITLE: Behavioral evaluation of language models as models of human sentence processing

ABSTRACT

This talk examines how large language models can serve as computational models of human sentence processing, focusing on behavioral evaluation methods that compare model predictions with human psycholinguistic data. I will discuss recent work showing that direct probability measurements from language models often provide better insights into linguistic knowledge than prompting-based evaluations. The talk will cover methodological considerations for using surprisal theory and other information-theoretic measures to validate LLMs as cognitive models, examining both the promises and limitations of current neural language models in capturing human sentence processing mechanisms. I will present evidence on how model-derived measures of processing difficulty align with human reading time data and discuss implications for both cognitive science and natural language processing.
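
As a minimal example of the direct probability measurements discussed above, the following sketch computes per-token surprisal, -log2 P(token | context), from GPT-2 for a classic garden-path sentence (GPT-2 is a stand-in here; the cited studies use a range of models):

    import math
    import torch
    from transformers import GPT2LMHeadModel, GPT2TokenizerFast

    tok = GPT2TokenizerFast.from_pretrained("gpt2")
    model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

    sentence = "The horse raced past the barn fell."
    ids = tok(sentence, return_tensors="pt").input_ids
    with torch.no_grad():
        logprobs = torch.log_softmax(model(ids).logits[0, :-1], dim=-1)

    # Surprisal of each token given its left context (the first token has none).
    for pos, token_id in enumerate(ids[0, 1:].tolist(), start=1):
        surprisal = -logprobs[pos - 1, token_id].item() / math.log(2)
        print(f"{tok.decode(token_id)!r:>12}  {surprisal:6.2f} bits")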

BIOGRAPHY

Roger LEVY is Professor of Brain and Cognitive Sciences at MIT, where he heads the Computational Psycholinguistics Laboratory. His research focuses on theoretical and applied questions in the processing and acquisition of natural language, investigating how linguistic communication resolves uncertainty over potentially unbounded signals and meanings. He combines computational modeling, psycholinguistic experimentation, and analysis of large naturalistic language datasets to understand cognitive underpinnings of language processing and to help design better machine language processing systems. Before joining MIT in 2016, he founded a Computational Psycholinguistics Laboratory at UC San Diego. He currently serves as President of the Cognitive Science Society (2024-2025).

REFERENCES

Hu, J., & Levy, R. (2023). Prompting is not a substitute for probability measurements in large language models. Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, 5040-5060.

Shain, C., Meister, C., Pimentel, T., Cotterell, R., & Levy, R. P. (2024). Large-scale evidence for logarithmic effects of word predictability on reading time. Proceedings of the National Academy of Sciences, 121(10), e2307876121.

Wilcox, E. G., Futrell, R., & Levy, R. (2023). Using Computational Models to Test Syntactic Learnability. Linguistic Inquiry, 1-44.

Futrell, R., Gibson, E., & Levy, R. P. (2020). Lossy-context surprisal: An information-theoretic model of memory effects in sentence processing. Cognitive Science, 44, 1-54.
