Psych 711: Cognitive Science of Large Language Models. Fall 2025

Course Description

Welcome to Psych 711: Cognitive Science of Large Language Models. We’ll be meeting on Wednesdays 9am-11:30am in Brogden 634.

Instructor: Prof. Gary Lupyan [email] [lab site] - Professor of Psychology at the University of Wisconsin-Madison

The development of large language models (LLMs) — artificial neural networks trained on vast amounts of natural language — is arguably the biggest thing to happen to Cognitive Science in decades. Beyond whatever uses and misuses these models come to have in our society, their very existence acts as a stress test of many theories and frameworks that have been developed to explain the human mind.

For example, language was long thought to be unlearnable from data alone. Strong embodiment theories have argued that much of our semantic knowledge is represented in terms of bodily states. What does it mean, then, that LLMs do learn language from data and come to possess rich semantic knowledge despite having no body and no direct sensory input?

We will also touch on the emerging field of mechanistic interpretability. How useful are methods developed to study the human mind for studying LLMs? Can studying LLMs inspire new methods for studying human cognition?

Course Learning Outcomes

Students will…

  1. become familiar with the intellectual history that led to large language models, their basic operating principles, and how they relate to classic connectionist ideas of distributed representations and error-driven, predictive learning (see the toy sketch after this list).

  2. consider competing claims about what it means for a system to “understand”, and how behavioral evidence can be used to adjudicate between them.

  3. explore the consequences LLMs have for core issues in cognitive science, including the poverty of the stimulus, concept learning, the role of context, and “general” intelligence.

  4. complete a final project: either a paper that stress-tests an existing cognitive theory/framework using existing data, or an empirical project, with the option of using an LLM as a model organism to help understand how a specific cognitive problem may be solved.
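
To make item 1's notion of error-driven, predictive learning concrete, here is a toy sketch (an illustration, not part of the course materials): a bigram next-word predictor trained with the standard softmax/cross-entropy gradient, so that each weight update is driven by the difference between the predicted and observed next word. The corpus, learning rate, and other settings are arbitrary assumptions.

    # Toy error-driven predictive learner: a bigram next-word model.
    # All choices below (corpus, learning rate, epochs) are illustrative.
    import numpy as np

    corpus = "the cat sat on the mat the dog sat on the rug".split()
    vocab = sorted(set(corpus))
    idx = {w: i for i, w in enumerate(vocab)}
    V = len(vocab)

    rng = np.random.default_rng(0)
    W = rng.normal(0.0, 0.1, size=(V, V))  # row i = logits for the word after word i

    lr = 0.5
    for epoch in range(200):
        for cur, nxt in zip(corpus, corpus[1:]):
            i, j = idx[cur], idx[nxt]
            logits = W[i]
            probs = np.exp(logits - logits.max())
            probs /= probs.sum()              # predicted distribution over next words
            grad = probs.copy()
            grad[j] -= 1.0                    # prediction error: predicted minus observed
            W[i] -= lr * grad                 # error-driven weight update

    # After training, "the" is followed by cat/mat/dog/rug with roughly equal probability.
    logits = W[idx["the"]]
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    print({w: round(float(probs[idx[w]]), 2) for w in vocab})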

Expectations

Students are expected to read the 2-5 readings assigned for each class and to engage critically with each one. Be on the lookout for ideas that challenge your intuitions and provoke you into thinking differently. To help you engage with the readings, each student will be randomly assigned to a 2-3 person group each week. After reading the papers individually, the group should meet to discuss the readings and, working together, fill out a Response Sheet (see Template). Each group should upload the completed template to Canvas by 8pm on Monday. I will grade each response sheet on a 1-5 scale and will use your responses to help organize the discussion for the following class.

Please note that some weeks have more (or longer) readings than others; plan your schedule accordingly. Final projects will be completed in groups of two. Each group may choose to write either a review/synthesis paper or an empirical paper. Review/synthesis papers should select a cognitive theory/framework and stress-test it using previously published evidence involving LLMs. Empirical papers can involve collecting your own data (in the lab, online, or by prompting an LLM, as appropriate) and/or using an LLM as a model organism to help understand how people may solve some problem or, more abstractly, the computational principles by which a certain problem can be solved.
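
For groups considering the prompting route, the sketch below shows one minimal way to collect ratings from an LLM programmatically. It assumes the openai Python package (v1+) with an API key set in the environment; the model name, stimuli, and typicality-rating prompt are placeholders, not course requirements.

    # Minimal sketch: collecting typicality ratings by prompting an LLM.
    # Assumes `pip install openai` and OPENAI_API_KEY in the environment.
    from openai import OpenAI

    client = OpenAI()

    items = ["robin", "penguin", "ostrich"]  # placeholder stimuli
    ratings = {}
    for item in items:
        completion = client.chat.completions.create(
            model="gpt-4o-mini",  # assumed model name; substitute as needed
            temperature=0,        # near-deterministic responses for repeatability
            messages=[{
                "role": "user",
                "content": (f"On a scale of 1-7, how typical is a {item} "
                            f"of the category BIRD? Answer with a number only."),
            }],
        )
        ratings[item] = completion.choices[0].message.content.strip()

    print(ratings)

In a real project you would of course run many items, repeat each prompt, and save raw responses for analysis, much as you would with human participants.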