no code implementations • ECCV 2020 • Daniel R. Kepple, Daewon Lee, Colin Prepsius, Volkan Isler, Il Memming Park, Daniel D. Lee
In the task of recovering pan-tilt ego velocities from events, we show that each individual confident local prediction of our network can be expected to be as accurate as state-of-the-art optimization approaches that utilize the full image.
no code implementations • 20 Feb 2025 • Chanwoo Chun, Abdulkadir Canatar, SueYeon Chung, Daniel D. Lee
In both artificial and biological systems, the centered kernel alignment (CKA) has become a widely used tool for quantifying neural representation similarity.
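For reference, linear CKA between two response matrices can be computed in a few lines. The NumPy sketch below uses the standard Hilbert-Schmidt formulation with column centering; it is a generic baseline, not code from the paper, and all names are illustrative.

```python
import numpy as np

def linear_cka(X, Y):
    """Linear CKA between representations X (n, d1) and Y (n, d2)."""
    X = X - X.mean(axis=0)                      # center each feature
    Y = Y - Y.mean(axis=0)
    hsic = np.linalg.norm(X.T @ Y, "fro") ** 2  # unnormalized HSIC
    norm_x = np.linalg.norm(X.T @ X, "fro")
    norm_y = np.linalg.norm(Y.T @ Y, "fro")
    return hsic / (norm_x * norm_y)
```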
no code implementations • 23 Oct 2024 • Chanwoo Chun, SueYeon Chung, Daniel D. Lee
Analyzing the structure of sampled features from an input data distribution is challenging when measurements are limited in both the number of inputs and the number of features.
no code implementations • 22 Jan 2024 • Patrick Cook, Danny Jammooa, Morten Hjorth-Jensen, Daniel D. Lee, Dean Lee
We present a general class of machine learning algorithms called parametric matrix models.
Ranked #8 on Image Classification on EMNIST-Balanced
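As a minimal sketch of the general idea (not the paper's exact formulation), one can predict a target quantity as the smallest eigenvalue of a learned affine matrix function and fit the matrix entries by gradient descent. The PyTorch sketch below uses made-up sizes and names.

```python
import torch

class ParametricMatrixModel(torch.nn.Module):
    """Predicts y(x) as the lowest eigenvalue of M(x) = A0 + x * A1 (symmetric)."""
    def __init__(self, dim=4):
        super().__init__()
        self.A0 = torch.nn.Parameter(torch.randn(dim, dim))
        self.A1 = torch.nn.Parameter(torch.randn(dim, dim))

    def forward(self, x):                           # x: (batch,)
        M = self.A0 + x[:, None, None] * self.A1    # batch of matrices
        M = 0.5 * (M + M.transpose(-1, -2))         # symmetrize
        return torch.linalg.eigvalsh(M)[:, 0]       # smallest eigenvalue
```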
1 code implementation • 24 Oct 2023 • Travers Rhodes, Daniel D. Lee
Human demonstrations of trajectories are an important source of training data for many machine learning problems.
no code implementations • 17 May 2023 • Chanwoo Chun, Daniel D. Lee
We investigate how sparse neural activity affects the generalization performance of a deep Bayesian neural network at the large width limit.
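At the large-width limit, the network's prior over functions is characterized by a kernel. The sketch below computes the standard dense-ReLU (arc-cosine) kernel as a point of reference; the paper's sparse-activity analysis would modify this kernel, and the exact form here is not taken from the paper.

```python
import numpy as np

def relu_nngp(x1, x2):
    """NNGP kernel of an infinitely wide one-hidden-layer ReLU network."""
    n1, n2 = np.linalg.norm(x1), np.linalg.norm(x2)
    cos_t = np.clip(x1 @ x2 / (n1 * n2), -1.0, 1.0)
    theta = np.arccos(cos_t)
    return (n1 * n2) / (2 * np.pi) * (np.sin(theta) + (np.pi - theta) * np.cos(theta))
```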
no code implementations • 14 Apr 2023 • ZiYun Wang, Fernando Cladera Ojeda, Anthony Bisulco, Daewon Lee, Camillo J. Taylor, Kostas Daniilidis, M. Ani Hsieh, Daniel D. Lee, Volkan Isler
Event-based sensors have recently drawn increasing interest in robotic perception due to their lower latency, higher dynamic range, and lower bandwidth requirements compared to standard CMOS-based imagers.
no code implementations • 27 Jan 2023 • Niko A. Grupen, Michael Hanlon, Alexis Hao, Daniel D. Lee, Bart Selman
Large-scale AI systems that combine search and learning have reached super-human levels of performance in game-playing, but have also been shown to fail in surprising ways.
no code implementations • 21 Jun 2021 • Niko A. Grupen, Daniel D. Lee, Bart Selman
We show that pursuers trained with our strategy exchange more than twice as much information (in bits) as baseline methods, indicating that our method has learned, and relies heavily on, the exchange of implicit signals.
no code implementations • 10 Jun 2021 • Niko A. Grupen, Bart Selman, Daniel D. Lee
We study fairness through the lens of cooperative multi-agent learning.
1 code implementation • NeurIPS 2021 • Travers Rhodes, Daniel D. Lee
There have been many recent advances in representation learning; however, unsupervised representation learning can still struggle with model identification issues related to rotations of the latent space.
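The rotation issue can be seen directly from the prior: an isotropic Gaussian latent is statistically indistinguishable from any rotated copy of itself, so a factorized-prior model cannot pin down the latent axes. A quick numerical illustration (NumPy, illustrative only):

```python
import numpy as np

rng = np.random.default_rng(0)
z = rng.standard_normal((100_000, 3))               # samples from an isotropic prior
R = np.linalg.qr(rng.standard_normal((3, 3)))[0]    # a random rotation/reflection
z_rot = z @ R.T

# Both sample sets have (near-)identity covariance: the rotation is invisible.
print(np.round(np.cov(z.T), 2))
print(np.round(np.cov(z_rot.T), 2))
```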
no code implementations • 20 Mar 2021 • Jinwook Huh, Daniel D. Lee, Volkan Isler
In this work, we show that uniform sampling fails for non-holonomic systems.
no code implementations • 10 Dec 2020 • Jinwook Huh, Volkan Isler, Daniel D. Lee
The c2g-HOF architecture consists of a cost-to-go function over the configuration space, represented as a neural network (c2g-network), together with a Higher Order Function (HOF) network that outputs the weights of the c2g-network for a given input workspace.
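The HOF idea can be sketched as a hypernetwork: one network maps a workspace encoding to a flat weight vector, which is reshaped into the parameters of a small cost-to-go MLP evaluated at query configurations. The PyTorch sketch below is a minimal illustration with made-up sizes, not the paper's architecture.

```python
import torch

WS_DIM, Q_DIM, HID = 64, 2, 32          # illustrative sizes
N_W = Q_DIM * HID + HID + HID * 1 + 1   # weights of a one-hidden-layer c2g-network

hof = torch.nn.Sequential(              # HOF: workspace code -> c2g-network weights
    torch.nn.Linear(WS_DIM, 256), torch.nn.ReLU(), torch.nn.Linear(256, N_W))

def cost_to_go(ws_code, q):
    """Evaluate the generated c2g-network at configurations q (batch, Q_DIM)."""
    w = hof(ws_code)
    i = 0
    W1 = w[i:i + Q_DIM * HID].view(HID, Q_DIM); i += Q_DIM * HID
    b1 = w[i:i + HID]; i += HID
    W2 = w[i:i + HID].view(1, HID); i += HID
    b2 = w[i:]
    h = torch.relu(q @ W1.T + b1)
    return h @ W2.T + b2                # predicted cost-to-go values
```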
no code implementations • 30 Nov 2020 • Niko A. Grupen, Daniel D. Lee, Bart Selman
In this work, we study emergent communication through the lens of cooperative multi-agent behavior in nature.
no code implementations • 18 Nov 2020 • Anthony Bisulco, Fernando Cladera Ojeda, Volkan Isler, Daniel D. Lee
This paper presents a Dynamic Vision Sensor (DVS) based system for reasoning about high-speed motion.
1 code implementation • 17 Jun 2020 • Heejin Jeong, Hamed Hassani, Manfred Morari, Daniel D. Lee, George J. Pappas
In particular, we introduce Active Tracking Target Network (ATTN), a unified RL policy that is capable of solving major sub-tasks of active target tracking -- in-sight tracking, navigation, and exploration.
no code implementations • 14 Jun 2020 • Ziyun Wang, Eric A. Mitchell, Volkan Isler, Daniel D. Lee
To address this issue, we propose learning an image-conditioned mapping function from a canonical sampling domain to a high dimensional space where the Euclidean distance is equal to the geodesic distance on the object.
no code implementations • 3 Apr 2020 • Anthony Bisulco, Fernando Cladera Ojeda, Volkan Isler, Daniel D. Lee
This paper presents a novel end-to-end system for pedestrian detection using Dynamic Vision Sensors (DVSs).
no code implementations • 18 Dec 2019 • Ziyun Wang, Volkan Isler, Daniel D. Lee
Our approach is to learn a Higher Order Function (HOF) which takes an image of an object as input and generates a mapping function.
2 code implementations • 23 Oct 2019 • Heejin Jeong, Brent Schlotfeldt, Hamed Hassani, Manfred Morari, Daniel D. Lee, George J. Pappas
In this paper, we propose a novel Reinforcement Learning approach for solving the Active Information Acquisition problem, which requires an agent to choose a sequence of actions in order to acquire information about a process of interest using on-board sensors.
no code implementations • 4 Oct 2019 • Selim Engin, Eric Mitchell, Daewon Lee, Volkan Isler, Daniel D. Lee
In contrast to offline methods which require a 3D model of the object as input or online methods which rely on only local measurements, our method uses a neural network which encodes shape information for a large number of objects.
no code implementations • ICLR 2020 • Eric Mitchell, Selim Engin, Volkan Isler, Daniel D. Lee
We present a new approach to 3D object representation where a neural network encodes the geometry of an object directly into the weights and biases of a second 'mapping' network.
no code implementations • 1 Jan 2019 • Jinwook Huh, Omur Arslan, Daniel D. Lee
In this paper, we introduce a new probabilistically safe local steering primitive for sampling-based motion planning in complex high-dimensional configuration spaces.
no code implementations • 18 Sep 2018 • Ty Nguyen, Tolga Ozaslan, Ian D. Miller, James Keller, Giuseppe Loianno, Camillo J. Taylor, Daniel D. Lee, Vijay Kumar, Joseph H. Harwood, Jennifer Wozencraft
Periodic inspection and maintenance of critical infrastructure such as dams, penstocks, and locks are of significant importance to prevent catastrophic failures.
no code implementations • 21 Jul 2018 • Mark Eisen, Clark Zhang, Luiz. F. O. Chamon, Daniel D. Lee, Alejandro Ribeiro
This paper considers the design of optimal resource allocation policies in wireless communication systems, a task generically modeled as a functional optimization problem with stochastic constraints.
no code implementations • 22 May 2018 • Arbaaz Khan, Clark Zhang, Daniel D. Lee, Vijay Kumar, Alejandro Ribeiro
When the number of agents increases, the dimensionality of the input and control spaces increases as well, and these methods do not scale well.
1 code implementation • 22 May 2018 • J. Jon Ryu, Shouvik Ganguly, Young-Han Kim, Yung-Kyun Noh, Daniel D. Lee
A new approach to $L_2$-consistent estimation of a general density functional using $k$-nearest neighbor distances is proposed, where the functional under consideration is in the form of the expectation of some function $f$ of the densities at each point.
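Differential entropy is the classic special case of such functionals. A minimal sketch of the Kozachenko-Leonenko k-NN estimator, which this family of estimators generalizes (this is the textbook version, not the paper's exact estimator), is:

```python
import numpy as np
from scipy.special import digamma, gammaln
from scipy.spatial import cKDTree

def kl_entropy(X, k=3):
    """Kozachenko-Leonenko k-NN estimate of differential entropy (nats)."""
    n, d = X.shape
    tree = cKDTree(X)
    r = tree.query(X, k=k + 1)[0][:, -1]                  # k-th neighbor, self excluded
    log_vd = (d / 2) * np.log(np.pi) - gammaln(d / 2 + 1) # log volume of unit d-ball
    return digamma(n) - digamma(k) + log_vd + d * np.mean(np.log(r))
```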
1 code implementation • 9 Dec 2017 • Heejin Jeong, Clark Zhang, George J. Pappas, Daniel D. Lee
We formulate an efficient closed-form solution for the value update by approximately estimating analytic parameters of the posterior of the Q-beliefs.
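A drastically simplified sketch of the flavor of such an update: treat the TD target as a noisy Gaussian observation of Q(s, a) and combine it with the prior belief by precision weighting. The paper's posterior approximation is more involved (it also accounts for uncertainty in the target, which this toy version ignores), and all names below are illustrative.

```python
import numpy as np

def gaussian_q_update(mu, var, s, a, r, s_next, gamma=0.99, obs_var=1.0):
    """Conjugate Gaussian (Kalman-style) update of a tabular Q-belief."""
    target = r + gamma * np.max(mu[s_next])      # TD target from current means
    k = var[s, a] / (var[s, a] + obs_var)        # precision-weighted gain
    mu[s, a] += k * (target - mu[s, a])          # posterior mean
    var[s, a] *= (1 - k)                         # posterior variance shrinks
```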
no code implementations • NeurIPS 2017 • Yung-Kyun Noh, Masashi Sugiyama, Kee-Eung Kim, Frank Park, Daniel D. Lee
This paper shows how metric learning can be used with Nadaraya-Watson (NW) kernel regression.
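Nadaraya-Watson regression under a learned linear metric L (so distances are ||L(x - x')||) looks like the minimal NumPy sketch below; how L is chosen is exactly the metric-learning question the paper addresses, and L here is just a placeholder.

```python
import numpy as np

def nw_predict(Xtr, ytr, Xte, L, bandwidth=1.0):
    """Nadaraya-Watson prediction with metric d(x, x') = ||L(x - x')||."""
    diffs = Xte[:, None, :] - Xtr[None, :, :]    # (m, n, d) pairwise differences
    d2 = np.sum((diffs @ L.T) ** 2, axis=-1)     # squared metric distances
    w = np.exp(-d2 / (2 * bandwidth ** 2))       # Gaussian kernel weights
    return (w @ ytr) / w.sum(axis=1)             # locally weighted average
```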
no code implementations • 17 Oct 2017 • SueYeon Chung, Daniel D. Lee, Haim Sompolinsky
The effects of label sparsity on the classification capacity of manifolds are elucidated, revealing a scaling relation between label sparsity and manifold radius.
no code implementations • ICLR 2018 • Arbaaz Khan, Clark Zhang, Nikolay Atanasov, Konstantinos Karydis, Vijay Kumar, Daniel D. Lee
The third part uses a network controller that learns to store those specific instances of past information that are necessary for planning.
no code implementations • 28 May 2017 • SueYeon Chung, Uri Cohen, Haim Sompolinsky, Daniel D. Lee
We consider the problem of classifying data manifolds where each manifold represents invariances that are parameterized by continuous degrees of freedom.
no code implementations • 23 May 2017 • Steven W. Chen, Nikolay Atanasov, Arbaaz Khan, Konstantinos Karydis, Daniel D. Lee, Vijay Kumar
This work is a first thorough study of memory structures for deep-neural-network-based robot navigation, and offers novel tools to train such networks from supervision and quantify their ability to generalize to unseen scenarios.
no code implementations • NeurIPS 2016 • Zhuo Wang, Xue-Xin Wei, Alan A. Stocker, Daniel D. Lee
The advantage can be as large as one-fold, substantially larger than previous estimates.
no code implementations • 6 Dec 2015 • SueYeon Chung, Daniel D. Lee, Haim Sompolinsky
Objects are represented in sensory systems by continuous manifolds due to the sensitivity of neuronal responses to changes in physical features such as location, orientation, and intensity.
no code implementations • 26 May 2015 • Pedro A. Ortega, Koby Crammer, Daniel D. Lee
This paper introduces a new probabilistic model for online learning which dynamically incorporates information from stochastic gradients of an arbitrary loss function.
no code implementations • 22 Apr 2014 • Pedro A. Ortega, Daniel D. Lee
Here, we show that a single-agent free energy optimization is equivalent to a game between the agent and an imaginary adversary.
no code implementations • NeurIPS 2013 • Zhuo Wang, Alan A. Stocker, Daniel D. Lee
We consider solutions for a minimal case where the number of neurons in the population is equal to the number of stimulus dimensions (diffeomorphic).
no code implementations • NeurIPS 2012 • Zhuo Wang, Alan A. Stocker, Daniel D. Lee
In this manner, we show how the optimal tuning curve depends upon the loss function, and the equivalence of maximizing mutual information with minimizing $L_p$ loss in the limit as $p$ goes to zero.
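The p -> 0 limit can be made concrete with a standard Fisher-information allocation argument (a sketch under the usual Cramer-Rao/efficient-coding assumptions, not the paper's exact derivation):

```latex
% Distribute Fisher information J(s) over a stimulus prior \pi(s) to
% minimize the Cramer-Rao-bounded L_p error under a fixed coding budget.
\min_{J(\cdot)} \int \pi(s)\, J(s)^{-p/2}\, ds
\quad \text{s.t.} \quad \int \sqrt{J(s)}\, ds = C
\;\;\Longrightarrow\;\;
\sqrt{J(s)} \,\propto\, \pi(s)^{\frac{1}{p+1}}
\xrightarrow{\;p \to 0\;} \pi(s)
% As p -> 0 the optimal allocation matches the prior, which is the
% well-known infomax solution \sqrt{J(s)} \propto \pi(s).
```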
no code implementations • NeurIPS 2012 • Yung-Kyun Noh, Frank Park, Daniel D. Lee
This paper sheds light on some fundamental connections of the diffusion decision making model of neuroscience and cognitive psychology with k-nearest neighbor classification.
no code implementations • NeurIPS 2010 • Yung-Kyun Noh, Byoung-Tak Zhang, Daniel D. Lee
We consider the problem of learning a local metric to enhance the performance of nearest neighbor classification.
no code implementations • NeurIPS 2010 • Koby Crammer, Daniel D. Lee
We introduce a new family of online learning algorithms based upon constraining the velocity flow over a distribution of weight vectors.
no code implementations • NeurIPS 2008 • Jihun Hamm, Daniel D. Lee
Subspace-based learning problems involve data whose elements are linear subspaces of a vector space.
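One natural way to make such data amenable to kernel methods is a positive-definite kernel on the Grassmann manifold; the projection kernel is a standard example studied in this line of work. A minimal sketch, assuming orthonormal basis matrices:

```python
import numpy as np

def projection_kernel(Y1, Y2):
    """Projection kernel between subspaces spanned by orthonormal bases (d, k)."""
    return np.linalg.norm(Y1.T @ Y2, "fro") ** 2
```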
no code implementations • NeurIPS 2000 • Daniel D. Lee, H. Sebastian Seung
Non-negative matrix factorization (NMF) has previously been shown to be a useful decomposition for multivariate data.
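The multiplicative update rules for the squared-error objective associated with this line of work fit in a few lines; a minimal NumPy sketch (illustrative initialization and iteration count):

```python
import numpy as np

def nmf(X, r, n_iter=200, eps=1e-9):
    """Factor nonnegative X (n, m) as W @ H with W (n, r), H (r, m) >= 0."""
    n, m = X.shape
    rng = np.random.default_rng(0)
    W, H = rng.random((n, r)), rng.random((r, m))
    for _ in range(n_iter):
        H *= (W.T @ X) / (W.T @ W @ H + eps)   # multiplicative update for H
        W *= (X @ H.T) / (W @ H @ H.T + eps)   # multiplicative update for W
    return W, H
```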
no code implementations • NeurIPS 1997 • Daniel D. Lee, H. S. Seung
We have constructed an inexpensive video-based motorized tracking system that learns to track a head.