DIC-ISC-CRIA Seminar – September 28, 2023 – Dave CHALMERS

Title: From the History of Philosophy to AI: Does Thinking Require Sensing?

Abstract:

There has recently been widespread discussion of whether large language models might be sentient or conscious. Should we take this idea seriously? I will discuss the underlying issue and will break down the strongest reasons for and against. I suggest that given mainstream assumptions in the science of consciousness, there are significant obstacles to consciousness in current models: for example, their lack of recurrent processing, a global workspace, and unified agency. At the same time, it is quite possible that these obstacles will be overcome in the next decade or so. I conclude that while it is somewhat unlikely that current large language models are conscious, we should take seriously the possibility that extensions and successors to large language models may be conscious in the not-too-distant future.

Biography:

David Chalmers is University Professor of Philosophy and Neural Science and co-director of the Center for Mind, Brain, and Consciousness at New York University. He is the author of The Conscious Mind (1996), Constructing the World (2012), and Reality+: Virtual Worlds and the Problems of Philosophy (2022). He is known for formulating the “hard problem” of consciousness, and (with Andy Clark) for the idea of the “extended mind,” according to which the tools we use can become parts of our minds.

References

Chalmers, D. J. (2023). Could a large language model be conscious? arXiv preprint arXiv:2303.07103.

Chalmers, D. J. (2022). Reality+: Virtual Worlds and the Problems of Philosophy. Penguin.

Chalmers, D. J. (1995). Facing up to the problem of consciousness. Journal of Consciousness Studies, 2(3), 200-219.

Clark, A., & Chalmers, D. (1998). The extended mind. Analysis, 58(1), 7-19.
