Search Results for author: David Sussillo

Found 22 papers, 7 papers with code

Are task representations gated in macaque prefrontal cortex?

no code implementations • 29 Jun 2023 • Timo Flesch, Valerio Mante, William Newsome, Andrew Saxe, Christopher Summerfield, David Sussillo

A recent paper (Flesch et al., 2022) describes behavioural and neural data suggesting that task representations are gated in the prefrontal cortex in both humans and macaques.

Analyzing Populations of Neural Networks via Dynamical Model Embedding

no code implementations • 27 Feb 2023 • Jordan Cotler, Kai Sheng Tai, Felipe Hernández, Blake Elias, David Sussillo

The specific model to be emulated is determined by a model embedding vector that the meta-model takes as input; these model embedding vectors constitute a manifold corresponding to the given population of models.
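
The conditioning mechanism this snippet describes can be illustrated with a minimal sketch: a single shared update rule that takes a per-model embedding vector as input, so that varying the embedding while holding the shared weights fixed traces out different emulated dynamics. The shapes and names below (meta_step, Wz, z_model_a) are illustrative assumptions, not the paper's architecture.

```python
import numpy as np

rng = np.random.default_rng(0)
N, Z = 16, 4                                    # state and embedding dimensions (toy sizes)
Wh = rng.normal(0.0, 1.0 / np.sqrt(N), (N, N))  # shared recurrent weights
Wz = rng.normal(0.0, 1.0 / np.sqrt(Z), (N, Z))  # how the model embedding enters the update

def meta_step(h, z):
    """One shared update; the embedding z selects which model's dynamics to emulate."""
    return np.tanh(Wh @ h + Wz @ z)

z_model_a = rng.normal(0.0, 1.0, Z)  # embedding vector for one model in the population
z_model_b = rng.normal(0.0, 1.0, Z)  # embedding vector for another

h0 = np.zeros(N)
print("step under embedding a:", meta_step(h0, z_model_a)[:4])
print("step under embedding b:", meta_step(h0, z_model_b)[:4])
```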

Reverse engineering recurrent neural networks with Jacobian switching linear dynamical systems

1 code implementation • NeurIPS 2021 • Jimmy T. H. Smith, Scott W. Linderman, David Sussillo

The results are a trained SLDS variant that closely approximates the RNN, an auxiliary function that can produce a fixed point for each point in state-space, and a trained nonlinear RNN whose dynamics have been regularized such that its first-order terms perform the computation, if possible.

Time Series Analysis
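
Several entries on this page reverse engineer RNNs via numerical fixed-point finding: minimize q(h) = 0.5 * ||F(h) - h||^2 over the hidden state, then linearize the dynamics around the optimum. Below is a minimal sketch of that procedure on a toy randomly initialized vanilla RNN standing in for a trained network; the sizes, learning rate, and names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 32                                         # toy state dimension
W = rng.normal(0.0, 0.9 / np.sqrt(N), (N, N))  # stand-in for trained recurrent weights
b = rng.normal(0.0, 0.1, N)

def step(h):
    """One input-free RNN update: h -> tanh(W h + b)."""
    return np.tanh(W @ h + b)

def find_fixed_point(h0, lr=0.2, iters=5000):
    """Gradient descent on q(h) = 0.5 * ||step(h) - h||^2."""
    h = h0.copy()
    for _ in range(iters):
        f = step(h)
        r = f - h                                     # residual; zero at a fixed point
        h = h - lr * (W.T @ ((1.0 - f**2) * r) - r)   # exact gradient of q
    r = step(h) - h
    return h, 0.5 * float(r @ r)

h_star, q_val = find_fixed_point(rng.normal(0.0, 1.0, N))
print(f"q(h*) = {q_val:.2e}")                  # ~0 means an approximate fixed point

# Linearize around h*: the Jacobian's eigenvalues describe the local dynamics.
J = (1.0 - step(h_star) ** 2)[:, None] * W
print("largest |eigenvalue| of J:", max(abs(np.linalg.eigvals(J))))
```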

The geometry of integration in text classification RNNs

1 code implementation • ICLR 2021 • Kyle Aitken, Vinay V. Ramasesh, Ankush Garg, Yuan Cao, David Sussillo, Niru Maheswaranathan

Using tools from dynamical systems analysis, we study recurrent networks trained on a battery of both natural and synthetic text classification tasks.

General Classification • text-classification • +1

How recurrent networks implement contextual processing in sentiment analysis

1 code implementation • ICML 2020 • Niru Maheswaranathan, David Sussillo

Here, we propose general methods for reverse engineering recurrent neural networks (RNNs) to identify and elucidate contextual processing.

Negation • Sentiment Analysis • +1

Universality and individuality in neural dynamics across large populations of recurrent networks

no code implementations • NeurIPS 2019 • Niru Maheswaranathan, Alex H. Williams, Matthew D. Golub, Surya Ganguli, David Sussillo

To address these foundational questions, we study populations of thousands of networks, with commonly used RNN architectures, trained to solve neuroscientifically motivated tasks and characterize their nonlinear dynamics.

Reverse engineering recurrent networks for sentiment classification reveals line attractor dynamics

no code implementations • NeurIPS 2019 • Niru Maheswaranathan, Alex Williams, Matthew D. Golub, Surya Ganguli, David Sussillo

In this work, we use tools from dynamical systems analysis to reverse engineer recurrent networks trained to perform sentiment classification, a foundational natural language processing task.

General Classification • Sentiment Analysis • +1
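
The line attractor dynamics the paper reports can be caricatured in two dimensions: a linear RNN whose recurrent matrix has one eigenvalue exactly at 1 integrates input along that eigenvector, while everything orthogonal decays. The sketch below is a hand-built toy, not the trained networks analyzed in the paper.

```python
import numpy as np

# h_{t+1} = A h_t + B x_t with eigenvalues 1 (integration axis) and 0.5 (decay).
# Every point on the eigenvalue-1 axis is a fixed point of the input-free
# dynamics: a line attractor that accumulates token valence.
A = np.array([[1.0, 0.0],
              [0.0, 0.5]])
B = np.array([1.0, 1.0])       # how scalar "valence" input enters the state

h = np.zeros(2)
tokens = [+1.0, +1.0, -1.0, 0.0, 0.0, +1.0]   # toy per-token sentiment valence
for x in tokens:
    h = A @ h + B * x
    print(f"x={x:+.0f}  h=({h[0]:+.3f}, {h[1]:+.3f})")

# The first coordinate is the running sum of inputs (the integrated sentiment);
# the second decays toward zero between nonzero inputs.
```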

Hallucinations in Neural Machine Translation

no code implementations • 27 Sep 2018 • Katherine Lee, Orhan Firat, Ashish Agarwal, Clara Fannjiang, David Sussillo

Neural machine translation (NMT) systems have reached state-of-the-art performance in translating text and are in wide deployment.

Data Augmentation • Hallucination • +3

Task-Driven Convolutional Recurrent Models of the Visual System

1 code implementation • NeurIPS 2018 • Aran Nayebi, Daniel Bear, Jonas Kubilius, Kohitij Kar, Surya Ganguli, David Sussillo, James J. DiCarlo, Daniel L. K. Yamins

Feed-forward convolutional neural networks (CNNs) are currently state-of-the-art for object classification tasks such as ImageNet.

General Classification • Object Recognition

A Dataset and Architecture for Visual Reasoning with a Working Memory

2 code implementations • ECCV 2018 • Guangyu Robert Yang, Igor Ganichev, Xiao-Jing Wang, Jonathon Shlens, David Sussillo

COG is much simpler than the general problem of video analysis, yet it addresses many of the problems relating to visual and logical reasoning and memory: problems that remain challenging for modern deep learning architectures.

Logical Reasoning • Visual Question Answering (VQA) • +1

Recurrent Segmentation for Variable Computational Budgets

no code implementations • 28 Nov 2017 • Lane McIntosh, Niru Maheswaranathan, David Sussillo, Jonathon Shlens

Importantly, the RNN may be deployed across a range of computational budgets by merely running the model for a variable number of iterations.

Image Segmentation • Segmentation • +3
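
The budget-scaling idea is essentially a deployment pattern: one set of weights, run for however many recurrent iterations the compute budget allows. The sketch below uses a placeholder smoothing step where the paper would have a trained convolutional RNN cell; `refine` and `segment` are illustrative names, not the paper's code.

```python
import numpy as np

rng = np.random.default_rng(1)

def refine(state, image):
    """Placeholder recurrent refinement step (a trained conv-RNN cell in the paper):
    smooth the current estimate and fold the input back in."""
    smoothed = np.convolve(state, np.ones(3) / 3.0, mode="same")
    return 0.7 * smoothed + 0.3 * image

def segment(image, budget):
    """Same model, variable compute: more iterations yield a more refined
    prediction, with no retraining or architectural change."""
    state = np.zeros_like(image)
    for _ in range(budget):
        state = refine(state, image)
    return state

# A noisy 1-D "image" with a step edge, as a stand-in for a real input.
image = (np.linspace(0, 1, 64) > 0.5).astype(float) + rng.normal(0, 0.2, 64)
for budget in (1, 4, 16):
    pred = segment(image, budget)
    print(f"budget={budget:2d}  prediction range [{pred.min():+.2f}, {pred.max():+.2f}]")
```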

An Online Sequence-to-Sequence Model Using Partial Conditioning

1 code implementation • NeurIPS 2016 • Navdeep Jaitly, Quoc V. Le, Oriol Vinyals, Ilya Sutskever, David Sussillo, Samy Bengio

However, they are unsuitable for tasks that require incremental predictions to be made as more data arrives or tasks that have long input sequences and output sequences.

Capacity and Trainability in Recurrent Neural Networks

1 code implementation • 29 Nov 2016 • Jasmine Collins, Jascha Sohl-Dickstein, David Sussillo

They can store an amount of task information that is linear in the number of parameters, approximately 5 bits per parameter.
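
Taken at face value, the roughly 5-bits-per-parameter figure supports quick back-of-envelope capacity estimates. The parameter-count formulas below are the standard ones for common cells; the capacity constant is the paper's empirical fit and should be treated as approximate.

```python
BITS_PER_PARAM = 5  # empirical estimate from the paper; approximate

def rnn_param_count(n_in, n_hidden, cell="vanilla"):
    """Rough parameter counts (recurrent + input weights + biases) per cell type."""
    gates = {"vanilla": 1, "gru": 3, "lstm": 4}[cell]
    return gates * n_hidden * (n_hidden + n_in + 1)

for cell in ("vanilla", "gru", "lstm"):
    p = rnn_param_count(n_in=64, n_hidden=256, cell=cell)
    kib = p * BITS_PER_PARAM / 8 / 1024
    print(f"{cell:>7}: {p:7,d} params  ->  ~{kib:.0f} KiB of storable task information")
```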

Making brain-machine interfaces robust to future neural variability

no code implementations • 19 Oct 2016 • David Sussillo, Sergey D. Stavisky, Jonathan C. Kao, Stephen I. Ryu, Krishna V. Shenoy

A major hurdle to clinical translation of brain-machine interfaces (BMIs) is that current decoders, which are trained from a small quantity of recent data, become ineffective when neural recording conditions subsequently change.

Translation

LFADS - Latent Factor Analysis via Dynamical Systems

no code implementations • 22 Aug 2016 • David Sussillo, Rafal Jozefowicz, L. F. Abbott, Chethan Pandarinath

Neuroscience is experiencing a data revolution in which many hundreds or thousands of neurons are recorded simultaneously.

A Neural Transducer

no code implementations • 16 Nov 2015 • Navdeep Jaitly, David Sussillo, Quoc V. Le, Oriol Vinyals, Ilya Sutskever, Samy Bengio

However, they are unsuitable for tasks that require incremental predictions to be made as more data arrives or tasks that have long input sequences and output sequences.

Random Walk Initialization for Training Very Deep Feedforward Networks

no code implementations • 19 Dec 2014 • David Sussillo, L. F. Abbott

We show that the successive application of correctly scaled random matrices to an initial vector results in a random walk of the log of the norm of the resulting vectors, and we compute the scaling that makes this walk unbiased.
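
The linear-network version of this result is easy to check numerically: stack random N(0, g^2/N) matrices, track log ||h|| across layers, and look at the mean per-layer drift. Below is a simulation sketch under those assumptions (the deep-tanh case analyzed in the paper needs a further correction).

```python
import numpy as np

rng = np.random.default_rng(0)
N, depth, trials = 128, 200, 10

def log_norm_drift(g):
    """Mean per-layer change of log ||h|| for a stack of N(0, g^2/N) linear layers."""
    drifts = []
    for _ in range(trials):
        h = rng.normal(0.0, 1.0, N)
        h /= np.linalg.norm(h)                     # start from a unit-norm vector
        for _ in range(depth):
            W = rng.normal(0.0, g / np.sqrt(N), (N, N))
            h = W @ h
        drifts.append(np.log(np.linalg.norm(h)) / depth)
    return float(np.mean(drifts))

for g in (0.95, 1.00, 1.01, 1.05):
    print(f"g={g:.2f}  mean d(log||h||)/layer = {log_norm_drift(g):+.4f}")

# The drift crosses zero for g slightly above 1 (for this N): at that scaling
# the log-norm performs an approximately unbiased random walk, which is the
# criterion the paper uses to derive its initialization.
```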
