Search Results for author: Ila Fiete

Found 15 papers, 8 papers with code

Do Diffusion Models Learn Semantically Meaningful and Efficient Representations?

no code implementations · 5 Feb 2024 · Qiyao Liang, Ziming Liu, Ila Fiete

Corresponding to each of these phases, we identify qualitatively different generation behaviors: 1) multiple bumps are generated, 2) one bump is generated but at inaccurate $x$ and $y$ locations, 3) a bump is generated at the correct $x$ and $y$ location.

Image Generation

Neuro-Inspired Fragmentation and Recall to Overcome Catastrophic Forgetting in Curiosity

1 code implementation · 26 Oct 2023 · Jaedong Hwang, Zhang-Wei Hong, Eric Chen, Akhilan Boopathy, Pulkit Agrawal, Ila Fiete

Deep reinforcement learning methods exhibit impressive performance on a range of tasks but still struggle on hard exploration tasks in large environments with sparse rewards.

Grid Cell-Inspired Fragmentation and Recall for Efficient Map Building

1 code implementation · 11 Jul 2023 · Jaedong Hwang, Zhang-Wei Hong, Eric Chen, Akhilan Boopathy, Pulkit Agrawal, Ila Fiete

Agents build and use a local map to predict their observations; high surprisal leads to a "fragmentation event" that truncates the local map.

Clustering · Navigate · +1
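The fragmentation rule described above can be sketched as a surprisal test: the agent predicts its next observation, and when prediction error crosses a threshold it closes the current local map and opens a new one. A minimal toy version, assuming a Gaussian predictive model with the previous observation as the prediction; the paper's agent predicts observations from a learned local map and also supports recall of stored fragments.

```python
import numpy as np

def surprisal(obs, pred, sigma=1.0):
    """Gaussian negative log-likelihood of obs given pred (constants dropped)."""
    return 0.5 * np.sum((obs - pred) ** 2) / sigma**2

def fragment_stream(observations, threshold=5.0):
    """Split an observation stream into local-map fragments at surprisal spikes.

    Toy sketch: the 'prediction' is simply the previous observation.
    """
    fragments, current = [], [observations[0]]
    for obs in observations[1:]:
        if surprisal(obs, current[-1]) > threshold:
            fragments.append(np.array(current))  # fragmentation event: store fragment
            current = []                          # start a fresh local map
        current.append(obs)
    fragments.append(np.array(current))
    return fragments

# two "rooms" separated by an abrupt change in sensory statistics
stream = np.concatenate([np.zeros((5, 2)), 10 * np.ones((5, 2))])
fragments = fragment_stream(stream)
print([len(f) for f in fragments])  # [5, 5]: one fragment per room
```

The single threshold hyperparameter trades map granularity against fragmentation frequency; the abrupt statistical change between the two "rooms" is what triggers the split.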

Improving Protein Optimization with Smoothed Fitness Landscapes

1 code implementation · 2 Jul 2023 · Andrew Kirjner, Jason Yim, Raman Samusevich, Shahar Bracha, Tommi Jaakkola, Regina Barzilay, Ila Fiete

The ability to engineer novel proteins with higher fitness for a desired property would be revolutionary for biotechnology and medicine.

Efficient Exploration

Beyond Geometry: Comparing the Temporal Structure of Computation in Neural Circuits with Dynamical Similarity Analysis

1 code implementation · NeurIPS 2023 · Mitchell Ostrow, Adam Eisen, Leo Kozachkov, Ila Fiete

To bridge this gap, we introduce a novel similarity metric that compares two systems at the level of their dynamics, called Dynamical Similarity Analysis (DSA).
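A stripped-down illustration of the idea, not the paper's full method: fit a linear dynamics matrix to each system's trajectories and compare basis-invariant features of the fitted dynamics. DSA itself uses delay embeddings and a Procrustes alignment over vector fields; comparing sorted eigenvalue spectra, as below, is a cruder stand-in that is nonetheless invariant to an orthogonal change of coordinates.

```python
import numpy as np

def fit_linear_dynamics(X):
    """Least-squares fit of x_{t+1} = A x_t from trajectory rows (crude DMD stand-in)."""
    X0, X1 = X[:-1].T, X[1:].T
    return X1 @ np.linalg.pinv(X0)

def dynamics_distance(X, Y):
    """Compare two systems via the eigenvalue spectra of their fitted dynamics.

    Spectra are invariant under a change of basis, so two systems with the
    same dynamics in rotated coordinates score as identical.
    """
    eA = np.sort_complex(np.linalg.eigvals(fit_linear_dynamics(X)))
    eB = np.sort_complex(np.linalg.eigvals(fit_linear_dynamics(Y)))
    return np.linalg.norm(eA - eB)

rng = np.random.default_rng(0)
A = np.array([[0.9, -0.2], [0.2, 0.9]])          # a stable spiral
x = np.zeros((100, 2)); x[0] = rng.standard_normal(2)
for t in range(99):
    x[t + 1] = A @ x[t]
Q, _ = np.linalg.qr(rng.standard_normal((2, 2)))  # random orthogonal basis change
y = x @ Q.T                                       # same dynamics, rotated coordinates
print(dynamics_distance(x, y))                    # close to zero: dynamics match
```

Geometry-based comparisons (e.g. shape metrics on state-space point clouds) would already call these two systems similar; the point of a dynamics-level metric is that it also distinguishes systems whose geometry matches but whose temporal evolution differs.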

Model-agnostic Measure of Generalization Difficulty

1 code implementation · 1 May 2023 · Akhilan Boopathy, Kevin Liu, Jaedong Hwang, Shu Ge, Asaad Mohammedsaleh, Ila Fiete

The measure of a machine learning algorithm is the difficulty of the tasks it can perform, and sufficiently difficult tasks are critical drivers of strong machine learning models.

Inductive Bias · Meta-Learning

See and Copy: Generation of complex compositional movements from modular and geometric RNN representations

no code implementations · 5 Oct 2022 · Sunny Duan, Mikail Khona, Adrian Bertagnoli, Sarthak Chandra, Ila Fiete

A hallmark of biological intelligence and control is combinatorial generalization: animals are able to learn various things, then piece them together in new combinations to produce appropriate outputs for new tasks.

Winning the lottery with neural connectivity constraints: faster learning across cognitive tasks with spatially constrained sparse RNNs

no code implementations · 7 Jul 2022 · Mikail Khona, Sarthak Chandra, Joy J. Ma, Ila Fiete

We study LM-RNNs in a multitask learning setting relevant to cognitive systems neuroscience with a commonly used set of tasks, 20-Cog-tasks [Yang et al., 2019].

How to Train Your Wide Neural Network Without Backprop: An Input-Weight Alignment Perspective

1 code implementation · 15 Jun 2021 · Akhilan Boopathy, Ila Fiete

Recent works have examined theoretical and empirical properties of wide neural networks trained in the Neural Tangent Kernel (NTK) regime.
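For context, the empirical NTK of a finite-width network is the Gram matrix of its parameter gradients; in the wide (NTK) regime it concentrates around a deterministic infinite-width limit. A minimal sketch for a two-layer ReLU network with standard NTK scaling (the network and scaling here are the textbook setup, not code from the paper):

```python
import numpy as np

def empirical_ntk(x1, x2, W, v):
    """Empirical NTK k(x1, x2) = <grad_theta f(x1), grad_theta f(x2)>
    for the width-m network f(x) = v @ relu(W @ x) / sqrt(m)."""
    m = len(v)
    a1, a2 = W @ x1, W @ x2
    g_v = np.maximum(a1, 0) @ np.maximum(a2, 0)             # readout-weight gradients
    g_W = (v**2 * (a1 > 0) * (a2 > 0)).sum() * (x1 @ x2)    # hidden-weight gradients
    return (g_v + g_W) / m

rng = np.random.default_rng(0)
d = 5
x = rng.standard_normal(d)
for m in (10, 100_000):
    W = rng.standard_normal((m, d))
    v = rng.standard_normal(m)
    print(m, empirical_ntk(x, x, W, v))
# the wide network's kernel value fluctuates far less across random draws,
# concentrating near its infinite-width limit
```

In this regime the network's function change during training is well described by kernel regression with this fixed kernel, which is what makes backprop-free analyses of wide networks tractable.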

Bipartite expander Hopfield networks as self-decoding high-capacity error correcting codes

no code implementations · NeurIPS 2019 · Rishidev Chaudhuri, Ila Fiete

Neural network models of memory and error correction famously include the Hopfield network, which can directly store, and error-correct through its dynamics, arbitrary N-bit patterns, but only for ~N such patterns.
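The baseline this paper improves on is the classic Hopfield construction: Hebbian outer-product weights store roughly 0.14N random N-bit patterns as attractors, and the dynamics clean up corrupted inputs. A standard textbook sketch (not the bipartite expander construction of the paper):

```python
import numpy as np

def train_hopfield(patterns):
    """Hebbian outer-product rule over ±1 patterns; capacity ~0.14 N."""
    N = patterns.shape[1]
    W = (patterns.T @ patterns).astype(float) / N
    np.fill_diagonal(W, 0.0)     # no self-connections
    return W

def recall(W, x, steps=20):
    """Iterate x <- sign(W x) until a fixed point (synchronous updates)."""
    for _ in range(steps):
        x_new = np.where(W @ x >= 0, 1, -1)
        if np.array_equal(x_new, x):
            break
        x = x_new
    return x

rng = np.random.default_rng(1)
N = 100
patterns = rng.choice([-1, 1], size=(5, N))    # 5 patterns: well below ~0.14 N
W = train_hopfield(patterns)

corrupted = patterns[0].copy()
corrupted[:10] *= -1                            # flip 10 of the 100 bits
restored = recall(W, corrupted)
print(np.array_equal(restored, patterns[0]))   # True: dynamics error-correct the pattern
```

The ~N capacity ceiling of this construction is exactly the limitation the paper's expander-based networks are built to break, trading dense all-to-all patterns for sparse codes with exponentially many robust stable states.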

Kernel RNN Learning (KeRNL)

no code implementations · ICLR 2019 · Christopher Roth, Ingmar Kanitscheider, Ila Fiete

We describe Kernel RNN Learning (KeRNL), a reduced-rank, temporal eligibility trace-based approximation to backpropagation through time (BPTT) for training recurrent neural networks (RNNs) that gives competitive performance to BPTT on long time-dependence tasks.
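The flavor of the approximation can be shown on a scalar RNN: instead of unrolling BPTT, keep a running eligibility trace that decays under a fixed temporal kernel and accumulate error-times-trace online. This toy uses a hand-fixed decay `lam`; KeRNL's contribution is learning the temporal kernels (and handling the vector case with reduced-rank sensitivities), so treat this only as an illustration of the trace idea.

```python
import numpy as np

def trace_gradient(xs, ys, w=0.5, u=1.0, lam=0.8):
    """Online dL/dw for h_t = tanh(w*h_{t-1} + u*x_t), L = 0.5*sum (h_t - y_t)^2,
    using a decaying eligibility trace in place of BPTT's full unroll."""
    h, trace, grad = 0.0, 0.0, 0.0
    for x, y in zip(xs, ys):
        h_prev = h
        h = np.tanh(w * h_prev + u * x)
        # trace: running sensitivity of h to w, past influence decayed by lam
        trace = lam * trace + (1.0 - h**2) * h_prev
        grad += (h - y) * trace       # accumulate the gradient estimate online
    return grad

xs = [0.5, -1.0, 0.8, 0.3]
ys = [0.0, 0.0, 0.0, 0.0]
g = trace_gradient(xs, ys)
print(np.isfinite(g))  # True
```

With `lam = 0` the trace reduces to the one-step (truncated) gradient; larger `lam` credits parameter changes for errors further in the future, which is what lets the method compete with BPTT on long time-dependence tasks without storing the full history.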

Associative content-addressable networks with exponentially many robust stable states

no code implementations · 6 Apr 2017 · Rishidev Chaudhuri, Ila Fiete

The brain must robustly store a large number of memories, corresponding to the many events encountered over a lifetime.

Training recurrent networks to generate hypotheses about how the brain solves hard navigation problems

1 code implementation · NeurIPS 2017 · Ingmar Kanitscheider, Ila Fiete

Self-localization during navigation with noisy sensors in an ambiguous world is computationally challenging, yet animals and humans excel at it.

Neurons and Cognition
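The computational problem (not the paper's trained-RNN solution) can be made concrete with a minimal 1-D particle filter: noisy odometry plus range readings to known landmarks, with resampling concentrating belief on the true position despite the ambiguity of any single reading. All numbers below are illustrative.

```python
import numpy as np

def particle_filter(controls, observations, landmarks, n=500,
                    sigma_move=0.2, sigma_obs=0.5, seed=0):
    """Minimal 1-D particle filter: noisy motion + range-to-nearest-landmark."""
    rng = np.random.default_rng(seed)
    particles = rng.uniform(0.0, 5.0, n)                          # prior over the track
    for u, z in zip(controls, observations):
        particles = particles + u + rng.normal(0, sigma_move, n)  # noisy motion update
        d = np.abs(particles[:, None] - landmarks).min(axis=1)    # predicted range
        w = np.exp(-0.5 * ((z - d) / sigma_obs) ** 2)             # observation likelihood
        w /= w.sum()
        particles = rng.choice(particles, n, p=w)                 # resample by weight
    return particles.mean()

landmarks = np.array([2.0, 7.0])
truth = 1.0 + 0.5 * np.arange(1, 7)            # true positions after each step
controls = [0.5] * 6                           # commanded motion per step
observations = np.abs(truth[:, None] - landmarks).min(axis=1)  # range readings
estimate = particle_filter(controls, observations, landmarks)
print(abs(estimate - truth[-1]) < 1.0)         # True: estimate near true position 4.0
```

Each range reading alone is consistent with several positions; it is the sequential fusion of motion and observations that disambiguates, which is the computation the paper's trained RNNs are probed against.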
