Search Results for author: Jack Merullo

Found 9 papers, 6 papers with code

Transformer Mechanisms Mimic Frontostriatal Gating Operations When Trained on Human Working Memory Tasks

no code implementations · 13 Feb 2024 · Aaron Traylor, Jack Merullo, Michael J. Frank, Ellie Pavlick

Models based on the Transformer neural network architecture have seen success on a wide variety of tasks that appear to require complex "cognitive branching" -- or the ability to maintain pursuit of one goal while accomplishing others.

Characterizing Mechanisms for Factual Recall in Language Models

no code implementations · 24 Oct 2023 · Qinan Yu, Jack Merullo, Ellie Pavlick

By scaling up or down the value vector of these heads, we can control the likelihood of using the in-context answer on new data.

Counterfactual

Circuit Component Reuse Across Tasks in Transformer Language Models

no code implementations · 12 Oct 2023 · Jack Merullo, Carsten Eickhoff, Ellie Pavlick

… that it is mostly reused to solve a seemingly different task: Colored Objects (Ippolito & Callison-Burch, 2023).

Does CLIP Bind Concepts? Probing Compositionality in Large Image Models

1 code implementation · 20 Dec 2022 · Martha Lewis, Nihal V. Nayak, Peilin Yu, Qinan Yu, Jack Merullo, Stephen H. Bach, Ellie Pavlick

In this work, we focus on the ability of a large pretrained vision and language model (CLIP) to encode compositional concepts and to bind variables in a structure-sensitive way (e.g., differentiating "cube behind sphere" from "sphere behind cube").

Language Modelling · Open-Ended Question Answering

Linearly Mapping from Image to Text Space

1 code implementation · 30 Sep 2022 · Jack Merullo, Louis Castricato, Carsten Eickhoff, Ellie Pavlick

Prior work has shown that pretrained LMs can be taught to caption images when a vision model's parameters are optimized to encode images in the language space.

Image Captioning · Language Modelling · +2

Pretraining on Interactions for Learning Grounded Affordance Representations

1 code implementation · *SEM (NAACL) 2022 · Jack Merullo, Dylan Ebert, Carsten Eickhoff, Ellie Pavlick

Lexical semantics and cognitive science point to affordances (i.e., the actions that objects support) as critical for understanding and representing nouns and verbs.

Grounded language learning

Investigating Sports Commentator Bias within a Large Corpus of American Football Broadcasts

1 code implementation · IJCNLP 2019 · Jack Merullo, Luke Yeh, Abram Handler, Alvin Grissom II, Brendan O'Connor, Mohit Iyyer

Sports broadcasters inject drama into play-by-play commentary by building team and player narratives through subjective analyses and anecdotes.
