ChatGPT/LLM Discussion Resources

Emily M. Bender, Timnit Gebru, Angelina McMillan-Major, and Shmargaret Shmitchell. 2021. On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? 🦜. In Conference on Fairness, Accountability, and Transparency (FAccT ’21), March 3–10, 2021, Virtual Event, Canada. ACM, New York, NY, USA, 14 pages.

Full text. This is required reading in my upper-level courses and might become required reading for all of my courses. It is an incisive, careful deconstruction of the myths and hype surrounding large language models, written by premier experts in the field as early as 2021. It is highly prescient.

Erz, Hendrik. 2022. “I get your excitement about ChatGPT, but …”. 7 Dec 2022.

A very clear, tech-leaning layman’s read that concisely deconstructs some of the big problems with ChatGPT in its current form. It hinges a little heavily on the idea that ChatGPT is “not a breakthrough”, which may leave the reader feeling that all the problems addressed would be solved if it were a “breakthrough”, but it is still a worthwhile read.

Chirag Shah and Emily M. Bender. 2022. Situating Search. In Proceedings of the 2022 ACM SIGIR Conference on Human Information Interaction and Retrieval (CHIIR ’22), March 14–18, 2022, Regensburg, Germany. ACM, New York, NY, USA, 12 pages.

Full text. Addresses the question, “Isn’t it okay as a method of information retrieval that is easier to use than a search engine?” with a resounding “no, and a search engine is bad, too.” A framework-shifting kind of read; recommended as another angle on the limited scope of LLMs, even in the areas where their designers often purport them to be useful.