Search Results for author: Michael A. Lepori

Found 9 papers, 7 papers with code

Uncovering Intermediate Variables in Transformers using Circuit Probing

1 code implementation • 7 Nov 2023 • Michael A. Lepori, Thomas Serre, Ellie Pavlick

We apply this method to models trained on simple arithmetic tasks, demonstrating its effectiveness at (1) deciphering the algorithms that models have learned, (2) revealing modular structure within a model, and (3) tracking the development of circuits over training.

Language Modelling • Sentence
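
A minimal sketch of the general idea of testing for an intermediate variable in a trained model, using a standard linear probe on hidden states rather than the paper's circuit-probing method. The arithmetic setting, the carry-bit variable, and the synthetic activations below are illustrative placeholders, not the paper's setup.

```python
# Linear-probe sketch: is a hypothesized intermediate variable (e.g. a carry
# bit in an addition task) linearly decodable from a model's hidden states?
# The activations here are synthetic stand-ins; in practice they would be
# extracted from a trained transformer.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

n_examples, hidden_dim = 2000, 64
hidden_states = rng.normal(size=(n_examples, hidden_dim))  # stand-in activations
carry_bit = rng.integers(0, 2, size=n_examples)            # hypothesized intermediate variable

# Inject a weak signal so the probe has something to find in this toy example.
direction = rng.normal(size=hidden_dim)
hidden_states += np.outer(carry_bit - 0.5, direction)

X_train, X_test, y_train, y_test = train_test_split(
    hidden_states, carry_bit, test_size=0.25, random_state=0
)

probe = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(f"held-out probe accuracy: {probe.score(X_test, y_test):.3f}")
```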

Instilling Inductive Biases with Subnetworks

1 code implementation • 17 Oct 2023 • Enyan Zhang, Michael A. Lepori, Ellie Pavlick

Our method discovers a functional subnetwork that implements a particular subtask within a trained model and uses it to instill inductive biases towards solutions utilizing that subtask.

Image Classification
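
A minimal sketch of the general subnetwork-discovery idea: freeze a trained layer and learn a sigmoid-gated mask over its weights so that the masked layer alone handles a chosen subtask. This is a generic continuous-mask formulation, not necessarily the paper's exact procedure; the layer, data, and subtask below are placeholders.

```python
# Subnetwork-discovery sketch: optimize a soft mask over a frozen layer's
# weights to isolate the weights that implement a chosen subtask.
import torch
import torch.nn as nn

torch.manual_seed(0)

frozen = nn.Linear(32, 2)                 # stand-in for a layer of a trained model
for p in frozen.parameters():
    p.requires_grad_(False)

mask_logits = nn.Parameter(torch.zeros_like(frozen.weight))
optimizer = torch.optim.Adam([mask_logits], lr=1e-2)

x = torch.randn(256, 32)                  # placeholder subtask inputs
y = (x[:, 0] > 0).long()                  # placeholder subtask labels

for step in range(200):
    mask = torch.sigmoid(mask_logits)     # soft mask in (0, 1)
    logits = nn.functional.linear(x, frozen.weight * mask, frozen.bias)
    # subtask loss plus a sparsity penalty pushing mask entries toward 0
    loss = nn.functional.cross_entropy(logits, y) + 1e-3 * mask.sum()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

binary_mask = (torch.sigmoid(mask_logits) > 0.5).float()
print(f"kept {int(binary_mask.sum())} of {binary_mask.numel()} weights")
```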

Deep Neural Networks Can Learn Generalizable Same-Different Visual Relations

no code implementations • 14 Oct 2023 • Alexa R. Tartaglini, Sheridan Feucht, Michael A. Lepori, Wai Keen Vong, Charles Lovering, Brenden M. Lake, Ellie Pavlick

Much of this prior work focuses on training convolutional neural networks to classify images of two same or two different abstract shapes, testing generalization on within-distribution stimuli.

Object Recognition • Out-of-Distribution Generalization
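
For context, a minimal sketch of what same/different stimuli look like: two abstract shapes rendered on a blank canvas and labeled by whether they match. The shapes, sizes, and positions are illustrative and do not reproduce the paper's stimulus sets.

```python
# Generate a toy same/different example: two shapes on a white canvas,
# labeled 1 if they are the same shape and 0 otherwise.
import random
from PIL import Image, ImageDraw

SHAPES = ["circle", "square", "triangle"]

def draw_shape(draw, shape, x, y, size=40):
    box = [x, y, x + size, y + size]
    if shape == "circle":
        draw.ellipse(box, fill="black")
    elif shape == "square":
        draw.rectangle(box, fill="black")
    else:  # triangle
        draw.polygon([(x + size // 2, y), (x, y + size), (x + size, y + size)], fill="black")

def make_example(same: bool, canvas_size=128):
    img = Image.new("RGB", (canvas_size, canvas_size), "white")
    draw = ImageDraw.Draw(img)
    first = random.choice(SHAPES)
    second = first if same else random.choice([s for s in SHAPES if s != first])
    draw_shape(draw, first, 10, 40)
    draw_shape(draw, second, 75, 40)
    return img, int(same)

img, label = make_example(same=random.random() < 0.5)
img.save(f"pair_label{label}.png")
```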

NeuroSurgeon: A Toolkit for Subnetwork Analysis

1 code implementation • 1 Sep 2023 • Michael A. Lepori, Ellie Pavlick, Thomas Serre

Despite recent advances in the field of explainability, much remains unknown about the algorithms that neural networks learn to represent.

Break It Down: Evidence for Structural Compositionality in Neural Networks

1 code implementation • NeurIPS 2023 • Michael A. Lepori, Thomas Serre, Ellie Pavlick

Though modern neural networks have achieved impressive performance in both vision and language tasks, we know little about the functions that they implement.

Unequal Representations: Analyzing Intersectional Biases in Word Embeddings Using Representational Similarity Analysis

1 code implementation • 24 Nov 2020 • Michael A. Lepori

We present a new approach for detecting human-like social biases in word embeddings using representational similarity analysis.

Word Embeddings
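
A minimal sketch of representational similarity analysis as it is commonly applied: build a pairwise dissimilarity matrix from the embeddings, build a hypothesis matrix encoding a presumed grouping, and correlate their upper triangles. The word list, random embeddings, and grouping below are placeholders, not the paper's stimuli or bias tests.

```python
# RSA sketch: compare the similarity structure of word embeddings against a
# hypothesized grouping by correlating their dissimilarity matrices.
import numpy as np
from scipy.spatial.distance import pdist, squareform
from scipy.stats import spearmanr

rng = np.random.default_rng(0)

words = ["doctor", "nurse", "engineer", "teacher", "man", "woman"]
embeddings = rng.normal(size=(len(words), 50))   # stand-in for real word vectors

# Representational dissimilarity matrix (RDM) from the embeddings.
rdm_embeddings = squareform(pdist(embeddings, metric="cosine"))

# Hypothesis RDM: 0 if two words share a hypothesized grouping, 1 otherwise.
# The grouping here is purely illustrative.
groups = [0, 1, 0, 1, 0, 1]
rdm_hypothesis = np.array([[float(a != b) for b in groups] for a in groups])

# Compare only the upper triangles (excluding the diagonal).
iu = np.triu_indices(len(words), k=1)
rho, p = spearmanr(rdm_embeddings[iu], rdm_hypothesis[iu])
print(f"Spearman rho = {rho:.3f} (p = {p:.3f})")
```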

Representations of Syntax [MASK] Useful: Effects of Constituency and Dependency Structure in Recursive LSTMs

1 code implementation • ACL 2020 • Michael A. Lepori, Tal Linzen, R. Thomas McCoy

Sequence-based neural networks show significant sensitivity to syntactic structure, but they still perform less well on syntactic tasks than tree-based networks.

Data Augmentation

Can you hear me now? Sensitive comparisons of human and machine perception

no code implementations • 27 Mar 2020 • Michael A. Lepori, Chaz Firestone

The rise of machine-learning systems that process sensory input has brought with it a rise in comparisons between human and machine perception.

Math • speech-recognition • +2
