DIC-ISC-CRIA Seminar - October 9, 2025, by Sean TROTT

Sean TROTT - October 9, 2025, at 10:30 a.m., room PK-5115 (201 President-Kennedy Ave., 5th floor)

TITRE : Epistemological challenges in the study of “Theory of Mind” in LLMs and humans

ABSTRACT

Humans reason about others’ beliefs—a key aspect of Theory of Mind. Can this emerge from language statistics alone? I present evidence that large language models show some sensitivity to implied belief states in text, though consistently below human levels. This suggests distributional learning is partly but not fully sufficient for Theory of Mind. I then examine epistemological challenges in treating LLMs as “model organisms,” including construct validity and distinguishing genuine generalization from pattern-matching. I argue that addressing these challenges opens opportunities for methodological innovation in both Cognitive Science and AI.

BIOGRAPHY

Sean TROTT, Assistant Teaching Professor in the Department of Cognitive Science and the Computational Social Science program at UC San Diego, uses Large Language Models (LLMs) as "model organisms" to study human language and cognition ("LLM-ology"). He investigates how computational models of language can cast light on meaning representation, Theory of Mind, pragmatic inference, and lexical ambiguity. He combines behavioral experiments, computational modeling, and corpus analysis to study how humans process language and how LLMs can serve as cognitive models. Trott is also the founder of "The Counterfactual," a newsletter exploring the intersection of Cognitive Science, AI, and methodology.

REFERENCES

Jones, C. R., Trott, S., & Bergen, B. (2024). Comparing humans and large language models on an Experimental Protocol Inventory for Theory of Mind Evaluation (EPITOME). Transactions of the Association for Computational Linguistics, 12, 803-819.

Trott, S., Jones, C., Chang, T., Michaelov, J., & Bergen, B. (2023). Do large language models know what humans know? Cognitive Science, 47(7), e13309.