Search Results for author: Mikaela Fudolig

Found 1 paper, 0 papers with code

A blind spot for large language models: Supradiegetic linguistic information

no code implementations • 11 Jun 2023 • Julia Witte Zimmerman, Denis Hudon, Kathryn Cramer, Jonathan St. Onge, Mikaela Fudolig, Milo Z. Trujillo, Christopher M. Danforth, Peter Sheridan Dodds

We propose that considering what it is like to be an LLM like ChatGPT, as Nagel might have put it, can help us gain insight into its capabilities in general. In particular, its exposure to linguistic training data can be productively reframed as exposure to the diegetic information encoded in language, and its deficits can be reframed as ignorance of extradiegetic information, including supradiegetic linguistic information.
