Search Results for author: Christopher Rozell

Found 13 papers, 5 papers with code

LINOCS: Lookahead Inference of Networked Operators for Continuous Stability

no code implementations28 Apr 2024 Noga Mudrik, Eva Yezerets, Yenho Chen, Christopher Rozell, Adam Charles

Such systems, often modeled as dynamical systems, typically exhibit noisy, high-dimensional, and non-stationary temporal behavior that renders their identification challenging.

Time Series

Distance preservation in state-space methods for detecting causal interactions in dynamical systems

no code implementations13 Aug 2023 Matthew O'Shaughnessy, Mark Davenport, Christopher Rozell

We analyze the popular "state-space" class of algorithms for detecting causal interactions in coupled dynamical systems.

Learning Internal Representations of 3D Transformations from 2D Projected Inputs

no code implementations31 Mar 2023 Marissa Connor, Bruno Olshausen, Christopher Rozell

When interacting in a three-dimensional world, humans must estimate 3D structure from visual inputs projected down to two-dimensional retinal images.

A Low-complexity Brain-computer Interface for High-complexity Robot Swarm Control

no code implementations27 May 2022 Gregory Canal, Yancy Diaz-Mercado, Magnus Egerstedt, Christopher Rozell

We construct a scalable dictionary of robotic behaviors that can be searched simply and efficiently by a BCI user. We demonstrate this through a large-scale user study testing the feasibility of our interaction algorithm, a user test of the full BCI system on (virtual and real) robot swarms, and simulations that verify our results against theoretical models.

Brain Computer Interface

Learning Identity-Preserving Transformations on Data Manifolds

1 code implementation22 Jun 2021 Marissa Connor, Kion Fallah, Christopher Rozell

However, these approaches are limited because they require transformation labels when training their models and they lack a method for determining which regions of the manifold are appropriate for applying each specific operator.

Feedback Coding for Active Learning

1 code implementation28 Feb 2021 Gregory Canal, Matthieu Bloch, Christopher Rozell

The iterative selection of examples for labeling in active machine learning is conceptually similar to feedback channel coding in information theory: in both tasks, the objective is to seek a minimal sequence of actions to encode information in the presence of noise.

Active Learning
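The abstract above frames active learning as iteratively selecting examples to label under noise. The paper's contribution is a feedback-coding formulation; as a point of contrast, the sketch below shows the simplest conventional selection rule, uncertainty sampling, in plain numpy. Everything here (the toy data, the logistic model, the loop structure) is my own illustration, not code from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy pool: two well-separated 2-D Gaussian clusters, labels 0/1.
X = np.vstack([rng.normal(-2, 1, (50, 2)), rng.normal(2, 1, (50, 2))])
y = np.repeat([0, 1], 50)

def sigmoid(z):
    # Clip to avoid overflow warnings for confidently classified points.
    return 1 / (1 + np.exp(-np.clip(z, -30, 30)))

def fit_logistic(Xl, yl, iters=200, lr=0.5):
    """Plain gradient-descent logistic regression (weights + bias)."""
    A = np.hstack([Xl, np.ones((len(Xl), 1))])
    w = np.zeros(A.shape[1])
    for _ in range(iters):
        w -= lr * A.T @ (sigmoid(A @ w) - yl) / len(yl)
    return w

A_pool = np.hstack([X, np.ones((len(X), 1))])
labeled = [0, 99]                       # seed with one example per class
for _ in range(10):                     # ten rounds of uncertainty sampling
    w = fit_logistic(X[labeled], y[labeled])
    p = sigmoid(A_pool @ w)
    unlabeled = [i for i in range(len(X)) if i not in labeled]
    # Query the point whose predicted probability is closest to 0.5.
    pick = max(unlabeled, key=lambda i: -abs(p[i] - 0.5))
    labeled.append(pick)

w = fit_logistic(X[labeled], y[labeled])
acc = np.mean((sigmoid(A_pool @ w) > 0.5) == y)
print(f"{len(labeled)} labels, pool accuracy {acc:.2f}")
```

In the coding analogy, each query plays the role of a channel use; uncertainty sampling is a greedy heuristic, whereas the paper derives the selection rule from feedback channel coding.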

Learning sparse codes from compressed representations with biologically plausible local wiring constraints

1 code implementation NeurIPS 2020 Kion Fallah, Adam Willats, Ninghao Liu, Christopher Rozell

Unfortunately, current proposals for sparse coding in the compressed space require a centralized compression process (i.e., a dense random matrix) that is biologically unrealistic due to local wiring constraints observed in neural circuits.

Dimensionality Reduction
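For readers unfamiliar with the baseline this abstract builds on: sparse coding seeks a representation x with few nonzeros such that a dictionary D reconstructs the signal, min 0.5||y - Dx||^2 + lam*||x||_1. The paper's point concerns the compression matrix; the sketch below only illustrates the standard sparse inference step via ISTA with a dense random dictionary, which is exactly the centralized ingredient the paper argues is biologically unrealistic. The dimensions and values are my own toy choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Random dictionary with unit-norm columns and a 3-sparse ground truth.
D = rng.normal(size=(20, 50))
D /= np.linalg.norm(D, axis=0)
x_true = np.zeros(50)
x_true[[5, 17, 33]] = [1.5, -2.0, 1.0]
yv = D @ x_true

def ista(D, y, lam=0.05, iters=500):
    """Iterative shrinkage-thresholding for min 0.5||y - Dx||^2 + lam*||x||_1."""
    step = 1.0 / np.linalg.norm(D, 2) ** 2          # 1/L, L = largest singular value^2
    x = np.zeros(D.shape[1])
    for _ in range(iters):
        x = x - step * (D.T @ (D @ x - y))          # gradient step on the quadratic
        x = np.sign(x) * np.maximum(np.abs(x) - step * lam, 0)  # soft threshold
    return x

x_hat = ista(D, yv)
print("residual:", np.linalg.norm(D @ x_hat - yv))
```

Note the gradient step `D.T @ (D @ x - y)` requires every unit to see the full dense D, which is the global-wiring assumption the paper replaces with local constraints.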

Generative causal explanations of black-box classifiers

2 code implementations NeurIPS 2020 Matthew O'Shaughnessy, Gregory Canal, Marissa Connor, Mark Davenport, Christopher Rozell

Our objective function encourages both the generative model to faithfully represent the data distribution and the latent factors to have a large causal influence on the classifier output.

Representing Closed Transformation Paths in Encoded Network Latent Space

no code implementations5 Dec 2019 Marissa Connor, Christopher Rozell

Deep generative networks have been widely used for learning mappings from a low-dimensional latent space to a high-dimensional data space.

Active Ordinal Querying for Tuplewise Similarity Learning

1 code implementation9 Oct 2019 Gregory Canal, Stefano Fenu, Christopher Rozell

Many machine learning tasks such as clustering, classification, and dataset search benefit from embedding data points in a space where distances reflect notions of relative similarity as perceived by humans.

Clustering
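The abstract above describes the goal of embedding items so that distances match perceived dissimilarity; the paper's contribution is actively choosing tuplewise ordinal queries to learn such an embedding. As a minimal illustration of the embedding goal itself (not the paper's query-selection method), the sketch below uses classical multidimensional scaling, which recovers an embedding exactly when full pairwise dissimilarities are available. All names and dimensions here are my own.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy items with known pairwise dissimilarities (generated from hidden 2-D points).
P = rng.normal(size=(8, 2))
D2 = ((P[:, None, :] - P[None, :, :]) ** 2).sum(-1)   # squared distances

# Classical MDS: double-center the squared-distance matrix, then eigendecompose.
n = D2.shape[0]
J = np.eye(n) - np.ones((n, n)) / n
B = -0.5 * J @ D2 @ J                                 # Gram matrix of centered points
w, V = np.linalg.eigh(B)                              # eigenvalues in ascending order
X = V[:, -2:] * np.sqrt(np.maximum(w[-2:], 0))        # top-2 embedding

# Distances in the embedding match the originals (up to rotation/reflection).
D2_hat = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
print(np.allclose(D2, D2_hat, atol=1e-6))
```

In the paper's setting, exact dissimilarities are unavailable; only noisy ordinal comparisons from humans are, which is what motivates active tuplewise querying rather than a closed-form solve like this one.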

Transfer Learning on Manifolds via Learned Transport Operators

no code implementations ICLR 2018 Marissa Connor, Christopher Rozell

The main contribution of this paper is to define two transfer learning methods that use this generative manifold representation to learn natural transformations and incorporate them into new data.

Data Augmentation, Few-Shot Image Classification +3

Distributed Sequence Memory of Multidimensional Inputs in Recurrent Networks

no code implementations26 May 2016 Adam Charles, Dong Yin, Christopher Rozell

Most existing analyses of short-term memory (STM) capacity conclude that echo state network (ESN) size must scale linearly with the input size for unstructured inputs.

Dynamic Filtering of Time-Varying Sparse Signals via ℓ1 Minimization

no code implementations22 Jul 2015 Adam Charles, Aurele Balavoine, Christopher Rozell

Taken together, the algorithms presented in this paper represent the first strong performance analysis of dynamic filtering algorithms for time-varying sparse signals as well as state-of-the-art performance in this emerging application.
