DIC-ISC-CRIA Seminar – February 29, 2024 – Alessandro LENCI

TITLE: The Grounding Problem in Language Models is not only about Grounding

ABSTRACT:

The Grounding Problem is typically assumed to concern the lack of referential competence of AI models. Language Models (LMs) that are trained only on text, without direct access to the external world, are indeed rightly regarded as affected by this limitation: they are ungrounded. Multimodal LMs, on the other hand, do have extralinguistic training data and show important abilities to link language with the visual world. In my talk, I will argue that incorporating multimodal data is a necessary but not sufficient condition to properly address the Grounding Problem. When applied to statistical models based on distributional co-occurrences, such as LMs, the Grounding Problem should be reformulated in a broader way, one that poses an even greater challenge for current data-driven AI models.

BIOGRAPHY:

Alessandro LENCI is Professor of Linguistics and director of the Computational Linguistics Laboratory (CoLing Lab) at the University of Pisa. His main research interests are computational linguistics, natural language processing, semantics, and cognitive science.

Lenci, A., & Sahlgren, M. (2023). Distributional Semantics. Cambridge: Cambridge University Press.

Lenci, A. (2018). Distributional models of word meaning. Annual Review of Linguistics, 4, 151-171.

Lenci, A. (2023). Understanding natural language understanding systems: A critical analysis. Sistemi Intelligenti. arXiv preprint arXiv:2303.04229.

Lenci, A., & Padó, S. (2022). Perspectives for natural language processing between AI, linguistics and cognitive science. Frontiers in Artificial Intelligence, 5, 1059998.
