Search Results for author: T. Anderson Keller

Found 11 papers, 8 papers with code

Deep Generative Models of Music Expectation

no code implementations • 5 Oct 2023 • Ninon Lizé Masclef, T. Anderson Keller

In prior work, this idea has been operationalized in the form of probabilistic models of music which allow for precise computation of song (or note-by-note) probabilities, conditioned on a 'training set' of prior musical or cultural experiences.
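
Such note-by-note probabilities follow from the autoregressive chain rule, log p(song) = Σ_t log p(note_t | notes_<t). A minimal sketch, assuming a generic conditional model rather than this paper's specific one:

    # Minimal sketch: scoring a note sequence with an autoregressive model.
    # `cond_prob` is a hypothetical stand-in for any learned p(note_t | notes_<t);
    # the uniform toy below is illustrative, not the paper's deep generative model.
    import math

    def sequence_log_prob(notes, cond_prob):
        """log p(song) = sum_t log p(note_t | notes_<t)."""
        total = 0.0
        for t in range(len(notes)):
            total += math.log(cond_prob(notes[:t], notes[t]))
        return total

    # Toy conditional: a uniform distribution over a 12-tone alphabet.
    uniform = lambda context, note: 1.0 / 12.0
    print(sequence_log_prob([0, 4, 7], uniform))  # -3 * log(12)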

Flow Factorized Representation Learning

1 code implementation • NeurIPS 2023 • Yue Song, T. Anderson Keller, Nicu Sebe, Max Welling

A prominent goal of representation learning research is to achieve representations which are factorized in a useful manner with respect to the ground truth factors of variation.

Disentanglement

Traveling Waves Encode the Recent Past and Enhance Sequence Learning

1 code implementation • 3 Sep 2023 • T. Anderson Keller, Lyle Muller, Terrence Sejnowski, Max Welling

Traveling waves of neural activity have been observed throughout the brain at a diversity of regions and scales; however, their precise computational role is still debated.

Sequential Image Classification
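
As a toy illustration of how a traveling wave can encode the recent past (an assumption-level sketch, not the paper's actual recurrent model), a hidden state on a 1-D lattice whose activity shifts one site per timestep holds the input history laid out in space:

    # Toy sketch (an assumption, not the paper's exact model): a hidden state
    # arranged on a 1-D lattice where activity shifts one position per step,
    # so the lattice holds a spatial trace of the recent input history.
    import numpy as np

    def wave_step(h, x_t):
        h = np.roll(h, 1)   # activity travels one lattice site per timestep
        h[0] = x_t          # new input enters at the wave's source
        return h

    h = np.zeros(8)
    for x_t in [1.0, 2.0, 3.0]:
        h = wave_step(h, x_t)
    print(h)  # [3., 2., 1., 0., ...] -- the recent past laid out in space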

DUET: 2D Structured and Approximately Equivariant Representations

1 code implementation • 28 Jun 2023 • Xavier Suau, Federico Danieli, T. Anderson Keller, Arno Blaas, Chen Huang, Jason Ramapuram, Dan Busbridge, Luca Zappella

We propose 2D strUctured and EquivarianT representations (coined DUET), which are 2D representations organized in a matrix structure and equivariant with respect to transformations acting on the input data.

Self-Supervised Learning • Transfer Learning
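
A minimal sketch of the matrix-structured equivariance idea: a transformation of the input should act on the 2D representation by a known operator along one axis. The encoder, transformation, and cyclic-shift action below are illustrative assumptions, not the DUET training objective:

    # Hedged sketch: a transformation g of the input should act on the 2-D
    # representation Z (shape d1 x d2) by a known operator, e.g. a cyclic
    # shift along one axis. `encode` and the shift action are hypothetical.
    import numpy as np

    def equivariance_gap(encode, transform, x, shift=1):
        Z = encode(x)                      # (d1, d2) matrix representation
        Z_g = encode(transform(x))         # representation of transformed input
        return np.linalg.norm(Z_g - np.roll(Z, shift, axis=1))

    # Example with an encoder that is exactly equivariant by construction:
    encode = lambda x: x.reshape(4, 4)
    transform = lambda x: np.roll(x.reshape(4, 4), 1, axis=1).ravel()
    print(equivariance_gap(encode, transform, np.arange(16.0)))  # 0.0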

Latent Traversals in Generative Models as Potential Flows

1 code implementation • 25 Apr 2023 • Yue Song, T. Anderson Keller, Nicu Sebe, Max Welling

In this work, we instead propose to model latent structures with a learned dynamic potential landscape, thereby performing latent traversals as the flow of samples down the landscape's gradient.

Disentanglement • Inductive Bias
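
The traversal mechanism can be sketched directly: repeatedly step a latent code down the gradient of a potential ψ(z). The quadratic toy potential and fixed step size here are illustrative assumptions, not the paper's learned landscape:

    # Minimal sketch of a latent traversal as gradient flow down a potential
    # psi(z): z <- z - eta * grad psi(z). The quadratic potential and step
    # size are toy assumptions, not the paper's learned model.
    import torch

    def traverse(z, potential, steps=10, eta=0.1):
        for _ in range(steps):
            z = z.detach().requires_grad_(True)
            psi = potential(z).sum()
            (grad,) = torch.autograd.grad(psi, z)
            z = z - eta * grad            # follow the landscape's gradient
        return z.detach()

    potential = lambda z: 0.5 * (z ** 2).sum(dim=-1)   # toy potential
    z0 = torch.randn(4, 2)
    print(traverse(z0, potential).norm(dim=-1))        # samples flow toward 0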

Homomorphic Self-Supervised Learning

no code implementations • 15 Nov 2022 • T. Anderson Keller, Xavier Suau, Luca Zappella

In this work, we observe that many existing self-supervised learning algorithms can be both unified and generalized when seen through the lens of equivariant representations.

Self-Supervised Learning
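
The unifying condition can be stated compactly: an encoder f is equivariant when f(g·x) = ρ(g) f(x) for a representation ρ of the transformation group, and invariance-based SSL is recovered as the special case ρ(g) = I. A toy check, with an assumed group action and feature map:

    # Hedged sketch of the equivariant-representation lens. The encoder and
    # group action below are toy assumptions for illustration.
    import numpy as np

    g_x = lambda x: -x                     # group action on inputs (sign flip)
    rho_g = np.diag([1.0, -1.0])           # its action on the feature space

    def f(x):
        # Toy equivariant features: an invariant part and a sign-covariant part.
        return np.array([abs(x), x])

    x = 3.0
    print(np.allclose(f(g_x(x)), rho_g @ f(x)))  # True: f(g.x) == rho(g) f(x)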

Modeling Category-Selective Cortical Regions with Topographic Variational Autoencoders

1 code implementation • NeurIPS Workshop SVRHM 2021 • T. Anderson Keller, Qinghe Gao, Max Welling

Category-selectivity in the brain describes the observation that certain spatially localized areas of the cerebral cortex tend to respond robustly and selectively to stimuli from specific limited categories.

Topographic VAEs learn Equivariant Capsules

1 code implementation • NeurIPS 2021 • T. Anderson Keller, Max Welling

Finally, we demonstrate approximate equivariance to complex transformations, expanding upon the capabilities of existing group equivariant neural networks.

As easy as APC: overcoming missing data and class imbalance in time series with self-supervised learning

1 code implementation • 29 Jun 2021 • Fiorella Wever, T. Anderson Keller, Laura Symul, Victor Garcia

High levels of missing data and strong class imbalance are ubiquitous challenges that are often presented simultaneously in real-world time series data.

Self-Supervised Learning • Time Series • +2

Self Normalizing Flows

1 code implementation • 14 Nov 2020 • T. Anderson Keller, Jorn W. T. Peters, Priyank Jaini, Emiel Hoogeboom, Patrick Forré, Max Welling

Efficient gradient computation of the Jacobian determinant term is a core problem in many machine learning settings, and especially so in the normalizing flow framework.
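
The term in question comes from the change of variables, log p_x(x) = log p_z(f(x)) + log|det ∂f/∂x|. For a dense linear flow the exact log-determinant costs O(D³), which is the expense such methods aim to sidestep; a sketch with a toy flow (an illustration, not the paper's construction):

    # Hedged sketch of the Jacobian-determinant term in a normalizing flow's
    # change of variables. The dense linear flow below is a toy example.
    import torch

    D = 16
    W = torch.randn(D, D) + 3.0 * torch.eye(D)   # a well-conditioned linear flow
    x = torch.randn(D)

    z = W @ x
    log_p_z = -0.5 * (z @ z) - 0.5 * D * torch.log(torch.tensor(2 * torch.pi))
    _, logabsdet = torch.slogdet(W)              # O(D^3) exact log|det J|
    log_p_x = log_p_z + logabsdet
    print(log_p_x)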

Fast Weight Long Short-Term Memory

no code implementations • 18 Apr 2018 • T. Anderson Keller, Sharath Nittur Sridhar, Xin Wang

Associative memory using fast weights is a short-term memory mechanism that substantially improves the memory capacity and time scale of recurrent neural networks (RNNs).

Retrieval
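
The underlying fast-weights rule (from Ba et al., 2016, which this line of work builds on; the paper's LSTM integration itself is not shown here) maintains A_t = λA_{t−1} + η h_t h_tᵀ and retrieves by matrix-vector product:

    # Sketch of the fast-weights associative memory rule from Ba et al. (2016):
    # A_t = lam * A_{t-1} + eta * h_t h_t^T, queried by matrix-vector product.
    import numpy as np

    lam, eta = 0.95, 0.5
    A = np.zeros((4, 4))                   # fast weight matrix (short-term memory)
    for _ in range(3):
        h = np.random.randn(4)
        A = lam * A + eta * np.outer(h, h) # Hebbian-style fast update
    query = A @ h                          # retrieval: recent h's dominate
    print(query)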
