DIC Seminar

DIC-ISC-CRIA Seminar – April 4, 2024, by Piek VOSSEN


Title: Referential Grounding

Abstract:

LLMs, or “foundation models”, are good at generalizing from observations, but are they also good at individuation, reference and remembering? Grounding is often interpreted as an association across modalities. Multimodal models learn through fusion and co-attention from paired signals such as images and textual descriptions. But if the representation of each modality is a generalization, what does that tell us about the referential grounding of individual people and objects in specific situations? Explicit extensional individuation of things and situations is a fundamental problem for LLMs because their representations are continuous, not discrete. In my research, I focus on identity, reference and perspective, both by analyzing the different ways in which texts frame the same referentially grounded events and by developing embodied conversational AI models that build an extensional memory through observation and communication in real-world environments.

Biography:

Piek Vossen is Professor of Computational Lexicology at the Vrije Universiteit Amsterdam, where he directs the Computational Linguistics and Text Mining Lab. His research focuses on modeling understanding of language by machines. Within the Hybrid Intelligence program, he currently investigates how human and AI memories can be aligned through communication and their differences can be leveraged for collaborative tasks.

References:

L. Remijnse, P. Vossen, A. Fokkens, and S. Titarsolej, “Introducing Frege to Fillmore: a FrameNet dataset that captures both sense and reference,” in Proceedings of the 13th Language Resources and Evaluation Conference (LREC 2022), 2022, pp. 39–50.

P. Vossen, F. Ilievski, M. Postma, A. Fokkens, G. Minnema, and L. Remijnse, “Large-scale cross-lingual language resources for referencing and framing,” in Proceedings of the 12th Language Resources and Evaluation Conference (LREC 2020), 2020, pp. 3162–3171.

S. B. Santamaría, T. Baier, T. Kim, L. Krause, J. Kruijt, and P. Vossen, “EMISSOR: A platform for capturing multimodal interactions as episodic memories and interpretations with situated scenario-based ontological references,” in Proceedings of the First Workshop Beyond Language: Multimodal Semantic Representations, held in conjunction with IWCS, 2021.

P. Vossen, L. Bajčetić, S. Báez Santamaría, S. Basić, and B. Kraaijeveld, “Modelling context awareness for a situated semantic agent,” in Proceedings of the 11th International and Interdisciplinary Conference on Modeling and Using Context (CONTEXT 2019), 2019.

DIC-ISC-CRIA Seminar – March 28, 2024, by Matt FREDRIKSON


Title: Transferable Attacks on Aligned Language Models

Abstract:

Large language models (LLMs) undergo extensive fine-tuning to avoid producing content that contradicts the intent of their developers. Several studies have demonstrated so-called “jailbreaks”: special queries that can still induce unintended responses. However, such queries require significant manual effort to design and are often easy to patch. In this talk, I will present recent research on generating these queries automatically. Through a combination of gradient-based and discrete optimization, we show that it is possible to generate an unlimited number of attack queries for open-source LLMs. Surprisingly, these attacks often transfer directly to closed-source, proprietary models that are available only through APIs (e.g., ChatGPT, Bard, Claude), despite substantial differences in model size, architecture, and training. These findings raise serious concerns about the safety of using LLMs in many settings, especially as they become more widely used in autonomous applications.
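The discrete half of this procedure can be illustrated with a toy greedy coordinate search: repeatedly try every candidate token at each position of an attack suffix and keep any swap that lowers a loss. Everything below is illustrative, not the actual attack; `toy_loss` stands in for the model's loss on a target completion, and the real method additionally uses gradients to shortlist candidate tokens.

```python
import random

# Hidden "ideal" suffix standing in for the tokens that would minimize
# the model's loss on a target completion; purely illustrative.
TARGET = (3, 1, 4, 1)

def toy_loss(suffix):
    # Surrogate loss: number of positions disagreeing with TARGET.
    return sum(1 for s, t in zip(suffix, TARGET) if s != t)

def greedy_coordinate_search(vocab_size=10, length=4, sweeps=3, seed=0):
    rng = random.Random(seed)
    suffix = [rng.randrange(vocab_size) for _ in range(length)]
    best = toy_loss(suffix)
    for _ in range(sweeps):
        for pos in range(length):
            # Try every candidate token at this position and keep
            # the substitution that lowers the loss, if any.
            for tok in range(vocab_size):
                cand = list(suffix)
                cand[pos] = tok
                loss = toy_loss(cand)
                if loss < best:
                    best, suffix = loss, cand
    return suffix, best
```

Because this surrogate loss is separable across positions, one full sweep already reaches the optimum; against a real model the loss is not separable, which is why many sweeps and gradient guidance are needed.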

Biography:

Matt Fredrikson’s research aims to enable systems that make secure, fair, and reliable use of machine learning. His group focuses on finding ways to understand the unique risks and vulnerabilities that arise from learned components, and on developing methods to mitigate them, often with provable guarantees.

References:

Zou, A., Wang, Z., Kolter, J. Z., & Fredrikson, M. (2023). Universal and transferable adversarial attacks on aligned language models. arXiv preprint arXiv:2307.15043.

Papernot, N., McDaniel, P., Jha, S., Fredrikson, M., Celik, Z. B., & Swami, A. (2016, March). The limitations of deep learning in adversarial settings. In 2016 IEEE European symposium on security and privacy (EuroS&P) (pp. 372-387). IEEE.

DIC-ISC-CRIA Seminar – March 21, 2024, by Pierre-Yves OUDEYER


Title: Autotelic Agents that Use and Ground Large Language Models

Abstract:

Developmental AI aims to design and study artificial agents capable of open-ended learning. I will discuss two fundamental ingredients: (1) curiosity-driven exploration mechanisms, especially mechanisms that enable agents to invent and sample their own goals (such agents are called “autotelic”); (2) language and culture, which enable agents to learn from others’ discoveries through the internalization of cognitive tools. I will discuss the main challenges in designing autotelic agents (e.g., how can they be creative in choosing their own goals?) and how some of these challenges require language and culture to be addressed. I will also discuss using LLMs as proxies for human culture in autotelic agents, and how autotelic agents can leverage LLMs to learn faster, but also to align and ground them in the dynamics of the environment they interact with. Finally, I will address some of the main current limitations of LLMs.
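One common ingredient of curiosity-driven goal sampling can be sketched as selecting goals in proportion to absolute learning progress: goals whose competence is changing fastest look most interesting. This is an illustrative toy, not the implementation of any specific system discussed in the talk; the class and method names are invented.

```python
import random

class AutotelicGoalSampler:
    """Toy goal selection based on absolute learning progress (LP)."""

    def __init__(self, n_goals, seed=0):
        self.rng = random.Random(seed)
        self.history = [[] for _ in range(n_goals)]  # competence per goal

    def record(self, goal, competence):
        self.history[goal].append(competence)

    def learning_progress(self, goal):
        h = self.history[goal]
        if len(h) < 2:
            return 1.0  # optimistic: unexplored goals look interesting
        # LP = |mean of recent competence - mean of older competence|
        mid = len(h) // 2
        recent = sum(h[mid:]) / len(h[mid:])
        older = sum(h[:mid]) / mid
        return abs(recent - older)

    def sample_goal(self):
        lps = [self.learning_progress(g) for g in range(len(self.history))]
        total = sum(lps)
        if total == 0:
            return self.rng.randrange(len(self.history))
        # Sample a goal with probability proportional to its LP.
        r = self.rng.uniform(0, total)
        acc = 0.0
        for g, lp in enumerate(lps):
            acc += lp
            if r <= acc:
                return g
        return len(self.history) - 1
```

A goal whose competence is flat (already mastered, or hopeless) gets zero learning progress and is sampled rarely, while a goal where competence is improving is sampled often; this is the sense in which such agents "self-organize their learning program."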

Biography:

Pierre-Yves OUDEYER and his team at INRIA Bordeaux study open-ended lifelong learning and the self-organization of behavioral, cognitive and language structures, at the frontiers of AI and cognitive science. In the field of developmental AI, they use machines as tools to better understand how children learn, and study how machines could learn autonomously as children do and integrate into human cultures. They study models of curiosity-driven autotelic learning that enable humans and machines to set their own goals and self-organize their learning program. They also work on applications in education and assisted scientific discovery, using AI techniques to serve humans and encourage learning, curiosity, exploration and creativity.

Karch, T., Moulin-Frier, C., & Oudeyer, P.-Y. (2022). Language and culture internalisation for human-like autotelic AI. Nature Machine Intelligence, 4(12), 1068–1076. https://arxiv.org/abs/2206.01134

Carta, T., Romac, C., Wolf, T., Lamprier, S., Sigaud, O., & Oudeyer, P. Y. (2023). Grounding large language models in interactive environments with online reinforcement learning. ICML https://arxiv.org/abs/2302.02662

Colas, C., Teodorescu, L., Oudeyer, P. Y., Yuan, X., & Côté, M. A. (2023). Augmenting Autotelic Agents with Large Language Models. arXiv preprint arXiv:2305.12487. https://arxiv.org/abs/2305.12487

DIC-ISC-CRIA Seminar – March 14, 2024, by Andy LÜCKING


Title: Gesture Semantics: Deictic Reference, Deferred Reference and Iconic Co-Speech

Abstract:

Language use is situated in manifold ways, including the exploitation of the visual context and the use of manual gestures (multimodal communication). I will survey recent theoretical advances concerning the semantics and the semantic contribution of co-verbal deictic and iconic gestures. Multimodal communication challenges traditional notions of reference and meaning developed in formal semantics. Computationally tractable models of deictic and deferred reference and iconic gestures are proposed instead. These models specify language/perception interfaces for two concrete phenomena that are central to situated language. Inasmuch as LLMs lack perception and embodiment, these phenomena are currently, but not in principle, out of reach. I will conclude by pointing out *what* is needed for an LLM to be capable of deferred reference and iconic gestures.

Biography:

Andy LÜCKING is Privatdozent at Goethe University Frankfurt. His work contributes to theoretical linguistics and computational semantics, in particular to a linguistic theory of human communication, that is, face-to-face interaction within and beyond single sentences. Besides publishing on deixis and iconicity in manual gesture, Andy is the main author of Referential Transparency Theory, the current semantic theory of plurality and quantification. His work on the perception of iconic gestures received an IEEE best paper award.

Andy Lücking, Alexander Henlein, and Alexander Mehler (2024). Iconic Gesture Semantics. In review. Manuscript available on request.

Andy Lücking and Jonathan Ginzburg (2023). Leading voices: Dialogue semantics, cognitive science, and the polyphonic structure of multimodal interaction. Language and Cognition, 15(1). 148–172.

Andy Lücking, Thies Pfeiffer and Hannes Rieser (2015). Pointing and Reference Reconsidered. In: Journal of Pragmatics 77: 56–79. DOI: 10.1016/j.pragma.2014.12.013.

DIC-ISC-CRIA Seminar – February 29, 2024, by Alessandro LENCI


Title: The Grounding Problem in Language Models is not only about Grounding

Abstract:

The Grounding Problem is typically assumed to concern the lack of referential competence of AI models. Language models (LMs) trained only on text, without direct access to the external world, are indeed rightly regarded as affected by this limitation: they are ungrounded. Multimodal LMs, on the other hand, do have extralinguistic training data and show important abilities to link language with the visual world. In my talk, I will argue that incorporating multimodal data is a necessary but not sufficient condition to properly address the Grounding Problem. When applied to statistical models based on distributional co-occurrences, such as LMs, the Grounding Problem should be reformulated in a more extensive way, which sets an even higher challenge for current data-driven AI models.
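The distributional co-occurrence statistics at issue can be sketched in a few lines: count each word's neighbors within a context window and compare the resulting vectors with cosine similarity. This is a minimal illustrative toy of the distributional approach, not any particular model from the talk or the references.

```python
from collections import Counter
from math import sqrt

def cooccurrence_vectors(corpus, window=2):
    """Build count-based word vectors from co-occurrence in a window."""
    vectors = {}
    for sent in corpus:
        for i, w in enumerate(sent):
            # Neighbors up to `window` positions to the left and right.
            ctx = sent[max(0, i - window):i] + sent[i + 1:i + 1 + window]
            vectors.setdefault(w, Counter()).update(ctx)
    return vectors

def cosine(u, v):
    keys = set(u) | set(v)
    dot = sum(u[k] * v[k] for k in keys)  # Counter returns 0 for missing keys
    nu = sqrt(sum(x * x for x in u.values()))
    nv = sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0
```

Words that occur in the same contexts end up with similar vectors, which captures distributional similarity but says nothing by itself about what the words refer to in the world; that is precisely the gap the abstract addresses.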

Biography:

Alessandro LENCI is Professor of linguistics and director of the Computational Linguistics Laboratory (CoLing Lab), University of Pisa. His main research interests are computational linguistics, natural language processing, semantics and cognitive science.

Lenci, A., & Sahlgren, M. (2023). Distributional Semantics. Cambridge: Cambridge University Press.

Lenci, A. (2018). Distributional models of word meaning. Annual review of Linguistics, 4, 151-171.

Lenci, A. (2023). Understanding natural language understanding systems: A critical analysis. Sistemi Intelligenti. arXiv preprint arXiv:2303.04229.

Lenci, A., & Padó, S. (2022). Perspectives for natural language processing between AI, linguistics and cognitive science. Frontiers in Artificial Intelligence, 5, 1059998.

DIC-ISC-CRIA Seminar – February 22, 2024, by Gary LUPYAN


Title: What counts as understanding?

Abstract:

The question of what it means to understand has taken on added urgency with the recent leaps in capabilities of generative AI such as large language models (LLMs). Can we really tell, from observing the behavior of LLMs, whether some notion of understanding underlies that behavior? What kinds of successes are most indicative of understanding, and what kinds of failures are most indicative of a failure to understand? If we applied the same standards to our own behavior, what might we conclude about the relationship between understanding, knowing and doing?

Biography:

Gary Lupyan is Professor of Psychology at the University of Wisconsin-Madison. His work focuses on how natural language scaffolds and augments human cognition, attempting to answer the question of what the human mind would be like without language. He also studies the evolution of language and the ways language adapts to the needs of its learners and users.

Liu, E., & Lupyan, G. (2023). Cross-domain semantic alignment: Concrete concepts are more abstract than you think. Philosophical Transactions of the Royal Society B. DOI: 10.1098/rstb.2021.0372

Duan, Y., & Lupyan, G. (2023). Divergence in word meanings and its consequence for communication. In Proceedings of the Annual Meeting of the Cognitive Science Society (Vol. 45).

van Dijk, B. M. A., Kouwenhoven, T., Spruit, M. R., & van Duijn, M. J. (2023). Large Language Models: The Need for Nuance in Current Debates and a Pragmatic Perspective on Understanding (arXiv:2310.19671). arXiv.

Aguera y Arcas, B. (2022). Do large language models understand us? Medium.

Titus, L. M. (2024). Does ChatGPT have semantic understanding? A problem with the statistics-of-occurrence strategy. Cognitive Systems Research, 83.

Pezzulo, G., Parr, T., Cisek, P., Clark, A., & Friston, K. (2024). Generating meaning: Active inference and the scope and limits of passive AI. Trends in Cognitive Sciences, 28(2), 97–112.

DIC-ISC-CRIA Seminar – February 1, 2024, by Robert GOLDSTONE


Title: Learning Categories by Creating New Descriptions

Abstract:

In Bongard problems, problem-solvers must come up with a rule for distinguishing visual scenes that fall into two categories. Only a handful of examples of each category are presented. This requires the open-ended creation of new descriptions. Physical Bongard Problems (PBPs) require perceiving and predicting the spatial dynamics of the scenes. We compare the performance of a new computational model (PATHS) to human performance. During continual perception of new scene descriptions over the course of category learning, hypotheses are constructed by combining descriptions into rules for distinguishing the categories. Spatially or temporally juxtaposing similar scenes promotes category learning when the scenes belong to different categories but hinders learning when the similar scenes belong to the same category.
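The hypothesis-construction step, combining scene descriptions into a rule that separates the two categories, can be caricatured as a search over conjunctions of feature tests. The sketch below is a deliberately simple stand-in for illustration, not the PATHS model; the feature names are invented.

```python
from itertools import combinations

def find_rule(cat_a, cat_b, max_terms=2):
    """Find the smallest conjunction of feature tests that holds for
    every scene in cat_a and for no scene in cat_b, or None."""
    def holds(scene, terms):
        return all(scene.get(f) == v for f, v in terms)

    # Candidate tests are drawn from the features observed in cat_a.
    features = sorted({(f, v) for scene in cat_a for f, v in scene.items()})
    for size in range(1, max_terms + 1):
        for terms in combinations(features, size):
            if all(holds(s, terms) for s in cat_a) and \
               not any(holds(s, terms) for s in cat_b):
                return terms
    return None
```

Even this toy shows why similar scenes in different categories help: they narrow the search to the few features that actually differ, whereas similar scenes within a category make many spurious conjunctions look viable.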

Biography:

Robert GOLDSTONE is a Distinguished Professor in the Department of Psychological and Brain Sciences and Program in Cognitive Science at Indiana University. His research interests include concept learning and representation, perceptual learning, educational applications of cognitive science, and collective behavior.

Goldstone, R. L., Dubova, M., Aiyappa, R., & Edinger, A. (2023). The spread of beliefs in partially modularized communities. Perspectives on Psychological Science, 0(0). https://doi.org/10.1177/17456916231198238

Goldstone, R. L., Andrade-Lotero, E., Hawkins, R. D., & Roberts, M. E. (2023). The emergence of specialized roles within groups. Topics in Cognitive Science, DOI: 10.1111/tops.12644.

Weitnauer, E., Goldstone, R. L., & Ritter, H. (2023). Perception and simulation during concept learning. Psychological Review, https://doi.org/10.1037/rev0000433.

DIC-ISC-CRIA Seminar – January 25, 2024, by Stevan HARNAD


Title: Language Writ Large: LLMs, ChatGPT, Meaning and Understanding

Abstract:

Apart from what (little) OpenAI may be concealing from us, we all know (roughly) how ChatGPT works (its huge text database, its statistics, its vector representations and their huge number of parameters, its next-word training, etc.). But none of us can say, hand on heart, that we are not surprised by what ChatGPT has proved able to do with these resources. It has even driven some of us to conclude that it actually understands. It is not true that it understands. But it is also not true that we understand how it can do what it can do. I will suggest some hunches about benign “biases”: convergent constraints that emerge at LLM scale and that may be helping ChatGPT do so much better than we would have expected. These biases are inherent in the nature of language itself, at LLM scale, and they are closely linked to what ChatGPT lacks: direct sensorimotor grounding to connect its words to their referents and its propositions to their meanings. These benign biases are related to (1) the parasitism of indirect verbal grounding on direct sensorimotor grounding, (2) the circularity of verbal definition, (3) the “mirroring” of language production and comprehension, (4) iconicity in propositions at LLM scale, (5) computational counterparts of human “categorical perception” in category learning by neural nets, and perhaps also (6) a conjecture by Chomsky about the laws of thought.
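Points (1) and (2), the parasitism of indirect verbal grounding and the circularity of verbal definition, can be illustrated with a toy dictionary closure: a word is learnable from definitions alone only once every word in its definition is already known, so circular clusters outside the directly grounded seed set remain unlearnable. This is a sketch in the spirit of the dictionary analyses cited among the references, not their actual algorithm; the example dictionary is invented.

```python
def learnable_closure(dictionary, grounded):
    """Return the set of words learnable purely from definitions,
    starting from a directly grounded seed set."""
    known = set(grounded)
    changed = True
    while changed:
        changed = False
        for word, defn in dictionary.items():
            # A word is learned once its whole definition is known.
            if word not in known and set(defn) <= known:
                known.add(word)
                changed = True
    return known
```

In the test below, "horse", "mare" and "stallion" are learnable from the grounded seed, while the mutually defined pair "yin"/"yang" never enters the closure, which is the circularity problem in miniature.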

Biography:

Stevan HARNAD is Professor of Psychology and Cognitive Science at UQÀM. His research is on category learning, symbol grounding, language evolution, and Turing-testing.

Bonnasse-Gahot, L., & Nadal, J. P. (2022). Categorical perception: a groundwork for deep learning. Neural Computation, 34(2), 437-475.

Harnad, S. (2012). From sensorimotor categories and pantomime to grounded symbols and propositions. In: Gibson, K. R. & Tallerman, M. (eds.) The Oxford Handbook of Language Evolution, 387–392.

Harnad, S. (2008) The Annotation Game: On Turing (1950) on Computing, Machinery, and Intelligence. In: Epstein, R, Roberts, Gary & Beber, G. (eds.) Parsing the Turing Test: Philosophical and Methodological Issues in the Quest for the Thinking Computer. Springer, pp. 23-66.

Thériault, C., Pérez-Gay, F., Rivas, D., & Harnad, S. (2018). Learning-induced categorical perception in a neural network model. arXiv preprint arXiv:1805.04567.

Vincent‐Lamarre, P; Blondin-Massé, A; Lopes, M; Lord, M; Marcotte, O; & Harnad, S (2016). The latent structure of dictionaries. Topics in Cognitive Science 8(3): 625-659.

Pérez-Gay Juárez, F., Sicotte, T., Thériault, C., & Harnad, S. (2019). Category learning can alter perception and its neural correlates. PloS one, 14(12), e0226000.

DIC-ISC-CRIA Seminar – January 18, 2024, by Ben GOERTZEL


Title: Toward AGI via Embodied Neural-Symbolic-Evolutionary Cognition

Abstract:

A concrete path toward AGI with capability at the human level and beyond is outlined, centered on a common mathematical meta-representation capable of integrating neural, symbolic, evolutionary and autopoietic aspects of intelligence. The instantiation of these ideas in the OpenCog Hyperon software framework is discussed. An in-progress research programme is reviewed, in which this sort of integrative AGI system is induced to ground its natural language dialogue in its experience, via embodiment in physical robots and virtual-world avatars.

Biography:

Ben Goertzel is a cross-disciplinary scientist, entrepreneur and author. He leads the SingularityNET Foundation, the OpenCog Foundation, and the AGI Society, which runs the annual Artificial General Intelligence conference. His research spans artificial general intelligence, natural language processing, cognitive science, machine learning, computational finance, bioinformatics, virtual worlds, gaming, parapsychology, theoretical physics and more. He has published 25+ scientific books and roughly 150 technical papers, along with numerous journalistic articles, and has spoken at events around the globe.

Goertzel, B. (2023). Generative AI vs. AGI: The Cognitive Strengths and Weaknesses of Modern LLMs. arXiv preprint arXiv:2309.10371.

Rodionov, S., Goertzel, Z. A., & Goertzel, B. (2023). An Evaluation of GPT-4 on the ETHICS Dataset. arXiv preprint arXiv:2309.10492.

Huang, K., Wang, Y., Goertzel, B., & Saliba, T. (2023). ChatGPT and Web3 Applications. In Beyond AI: ChatGPT, Web3, and the Business Landscape of Tomorrow (pp. 69-95). Cham: Springer Nature Switzerland.

DIC-ISC-CRIA Seminar – January 11, 2024, by Raphaël MILLIÈRE


Title: Mechanistic Explanation in Deep Learning

Abstract:

Deep neural networks such as large language models (LLMs) have achieved impressive performance across almost every domain of natural language processing, but there remains substantial debate about which cognitive capabilities can be ascribed to these models. Drawing inspiration from mechanistic explanations in life sciences, the nascent field of "mechanistic interpretability" seeks to reverse-engineer human-interpretable features to explain how LLMs process information. This raises some questions: (1) Are causal claims about neural network components, based on coarse intervention methods (such as “activation patching”), genuine mechanistic explanations? (2) Does the focus on human-interpretable features risk imposing anthropomorphic assumptions? My answer will be "yes" to (1) and "no" to (2), closing with a discussion of some ongoing challenges.
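The "activation patching" mentioned in (1) can be illustrated with a toy feedforward network: cache an internal activation from a clean run, splice it into a run on a corrupted input, and see how much of the clean output is restored. This is a minimal sketch of the intervention idea, assuming a tiny two-layer network rather than a real transformer; all names are illustrative.

```python
import numpy as np

def forward(x, W1, W2, patch_hidden=None):
    """Tiny two-layer network; optionally replace ("patch") the hidden
    activation with one cached from another run."""
    h = np.tanh(W1 @ x)
    if patch_hidden is not None:
        h = patch_hidden
    return W2 @ h

rng = np.random.default_rng(0)
W1, W2 = rng.normal(size=(4, 3)), rng.normal(size=(2, 4))
clean = np.array([1.0, 0.0, 0.0])
corrupted = np.array([0.0, 1.0, 0.0])

# Cache the clean run's hidden activation, then patch it into the
# corrupted run. Here the output is fully restored, which is the kind
# of evidence taken to show a component carries the causally relevant
# information; the philosophical question is whether such coarse
# interventions amount to genuine mechanistic explanation.
h_clean = np.tanh(W1 @ clean)
y_clean = forward(clean, W1, W2)
y_patched = forward(corrupted, W1, W2, patch_hidden=h_clean)
```

In a real model one patches a single head or layer at a time and measures how much of the clean behavior returns, so restoration is typically partial rather than total as in this toy.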

Biography:

Raphael Millière is Lecturer in Philosophy of Artificial Intelligence at Macquarie University in Sydney, Australia. His interests are in the philosophy of artificial intelligence, cognitive science, and mind, particularly in understanding artificial neural networks based on deep learning architectures such as Large Language Models. He has investigated syntactic knowledge, semantic competence, compositionality, variable binding, and grounding.

Elhage, N., et al. (2021). A mathematical framework for transformer circuits. Transformer Circuits Thread.

Machamer, P., Darden, L., & Craver, C. F. (2000). Thinking about mechanisms. Philosophy of Science, 67(1), 1–25.

Millière, R. (2023). The Alignment Problem in Context. arXiv preprint arXiv:2311.02147.

Mollo, D. C., & Millière, R. (2023). The vector grounding problem. arXiv preprint arXiv:2304.01481.

Yousefi, S., et al. (2023). In-context learning in large language models: A neuroscience-inspired analysis of representations. arXiv preprint arXiv:2310.00313.
