no code implementations • 28 May 2024 • Suraj Anand, Michael A. Lepori, Jack Merullo, Ellie Pavlick
Hence, we study $\textbf{structural in-context learning}$, which we define as the ability of a model to execute in-context learning on arbitrary tokens -- so called because the model must generalize on the basis of, e.g., sentence structure or task structure, rather than semantic content encoded in token embeddings.
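To make the definition concrete, here is a minimal sketch (an illustration of the definition, not code from the paper) of the kind of probe it implies: few-shot prompts built from arbitrary nonce tokens, so that the in-context mapping can only be inferred from the structure of the examples, never from token semantics. The token list and copy task below are hypothetical choices.

```python
import random

# Hypothetical probe for structural in-context learning: the few-shot
# mapping (here, simple token copying) can only be inferred from the
# structure of the demonstrations, because the tokens are arbitrary
# nonce strings with no semantic content.
NONCE_TOKENS = ["wug", "blick", "dax", "fep", "toma", "zup"]

def make_structural_icl_prompt(n_shots: int = 3) -> tuple[str, str]:
    """Return (prompt, expected_answer) for a copy task over nonce tokens."""
    tokens = random.sample(NONCE_TOKENS, n_shots + 1)
    demos = "\n".join(f"Input: {t} -> Output: {t}" for t in tokens[:-1])
    query = tokens[-1]
    prompt = f"{demos}\nInput: {query} -> Output:"
    return prompt, query

prompt, answer = make_structural_icl_prompt()
print(prompt)  # a model that has learned the task *structure* should emit `answer`
```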
no code implementations • 28 May 2024 • Suraj Anand, David Getzen
Numerous algorithms have been proposed to $\textit{align}$ language models in order to remove undesirable behaviors.
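As one concrete instance of such an alignment algorithm (illustrative only; the paper may study different methods), a sketch of the Direct Preference Optimization (DPO) objective, which trains the policy to prefer human-chosen responses over rejected ones relative to a frozen reference model:

```python
import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen_logps: torch.Tensor,
             policy_rejected_logps: torch.Tensor,
             ref_chosen_logps: torch.Tensor,
             ref_rejected_logps: torch.Tensor,
             beta: float = 0.1) -> torch.Tensor:
    """DPO objective: each argument is the summed log-probability of a
    preferred ('chosen') or dispreferred ('rejected') response under the
    trainable policy or the frozen reference model."""
    chosen_rewards = beta * (policy_chosen_logps - ref_chosen_logps)
    rejected_rewards = beta * (policy_rejected_logps - ref_rejected_logps)
    # Maximize the margin by which the policy prefers chosen over rejected.
    return -F.logsigmoid(chosen_rewards - rejected_rewards).mean()
```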
no code implementations • 12 Feb 2024 • Louis Castricato, Nathan Lile, Suraj Anand, Hailey Schoelkopf, Siddharth Verma, Stella Biderman
Existing methods for controlling language models, such as RLHF and Constitutional AI, involve determining which LLM behaviors are desirable and training them into a language model.
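To illustrate the pattern these methods share (a rough sketch of the Constitutional-AI-style loop, not this paper's method; the principle text and `generate` callable are hypothetical): elicit a response, critique it against a written principle, and revise, with the revised responses becoming training data that bakes the desired behavior into the model.

```python
# Illustrative critique-and-revise loop in the Constitutional AI style.
PRINCIPLE = "The assistant's response should be harmless and non-deceptive."

def critique_and_revise(generate, prompt: str) -> str:
    """`generate` is any text-completion function, e.g. an LLM API call."""
    response = generate(prompt)
    critique = generate(
        f"Response: {response}\nCritique this response against the "
        f"principle: {PRINCIPLE}"
    )
    revised = generate(
        f"Response: {response}\nCritique: {critique}\n"
        f"Rewrite the response to satisfy the principle."
    )
    return revised  # collected (prompt, revised) pairs form the training set
```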