Search Results for author: Daniel D. Lee

Found 38 papers, 5 papers with code

Jointly learning visual motion and confidence from local patches in event cameras

no code implementations ECCV 2020 Daniel R. Kepple, Daewon Lee, Colin Prepsius, Volkan Isler, Il Memming Park, Daniel D. Lee

In the task of recovering pan-tilt ego velocities from events, we show that each individual confident local prediction of our network can be expected to be as accurate as state of the art optimization approaches which utilize the full image.

Motion Segmentation

Multi-Agent Curricula and Emergent Implicit Signaling

no code implementations 21 Jun 2021 Niko A. Grupen, Daniel D. Lee, Bart Selman

We show that pursuers trained with our strategy exchange more than twice as much information (in bits) than baseline methods, indicating that our method has learned, and relies heavily on, the exchange of implicit signals.

Cooperative Multi-Agent Fairness and Equivariant Policies

no code implementations 10 Jun 2021 Niko A. Grupen, Bart Selman, Daniel D. Lee

We study fairness through the lens of cooperative multi-agent learning.

Fairness

Local Disentanglement in Variational Auto-Encoders Using Jacobian $L_1$ Regularization

1 code implementation NeurIPS 2021 Travers Rhodes, Daniel D. Lee

There have been many recent advances in representation learning; however, unsupervised representation learning can still struggle with model identification issues related to rotations of the latent space.

Disentanglement
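The Jacobian $L_1$ idea can be illustrated with a finite-difference sketch: penalize the $L_1$ norm of the decoder Jacobian $\partial x / \partial z$ so that each latent axis moves only a few output dimensions. This is a minimal hypothetical illustration, not the paper's implementation; the `decoder` callable and step size are assumptions.

```python
import numpy as np

def jacobian_l1_penalty(decoder, z, eps=1e-4):
    """Finite-difference estimate of the L1 norm of the decoder
    Jacobian dx/dz at latent code z. Added to a VAE objective, this
    encourages each latent dimension to drive only a few output
    dimensions, promoting locally disentangled axes."""
    z = np.asarray(z, dtype=float)
    x0 = decoder(z)
    cols = []
    for j in range(z.size):
        dz = np.zeros_like(z)
        dz[j] = eps
        cols.append((decoder(z + dz) - x0) / eps)
    J = np.stack(cols, axis=-1)  # shape: (x_dim, z_dim)
    return float(np.abs(J).sum())

# Toy usage: for a linear decoder x = Wz, the penalty equals sum(|W|).
W = np.array([[1.0, 0.0], [0.0, 2.0]])
penalty = jacobian_l1_penalty(lambda z: W @ z, np.array([0.1, 0.2]))
```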

Learning Continuous Cost-to-Go Functions for Non-holonomic Systems

no code implementations 20 Mar 2021 Jinwook Huh, Daniel D. Lee, Volkan Isler

In this work, we show that uniform sampling fails for non-holonomic systems.

Cost-to-Go Function Generating Networks for High Dimensional Motion Planning

no code implementations 10 Dec 2020 Jinwook Huh, Volkan Isler, Daniel D. Lee

The c2g-HOF architecture consists of a cost-to-go function over the configuration space represented as a neural network (c2g-network) as well as a Higher Order Function (HOF) network which outputs the weights of the c2g-network for a given input workspace.

Motion Planning
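The two-network structure described above is a hypernetwork pattern: one network emits the weights of another. The sketch below is a deliberately tiny illustration under assumed sizes (a single linear HOF layer, a one-hidden-layer c2g MLP); all names and dimensions are hypothetical, not the paper's architecture.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def hof_weights(workspace_code, A, b):
    """Hypothetical HOF network: one linear layer mapping a workspace
    encoding to the flattened weights of the c2g-network."""
    return A @ workspace_code + b

def c2g_network(q, theta, hidden=16, qdim=2):
    """Tiny MLP whose parameters `theta` were produced by the HOF.
    Maps a configuration q to a scalar cost-to-go value."""
    i = 0
    W1 = theta[i:i + hidden * qdim].reshape(hidden, qdim); i += hidden * qdim
    b1 = theta[i:i + hidden]; i += hidden
    W2 = theta[i:i + hidden]; i += hidden
    b2 = theta[i]
    return float(W2 @ relu(W1 @ q + b1) + b2)

# Usage: a random (untrained) HOF producing weights for qdim=2, hidden=16,
# which needs 16*2 + 16 + 16 + 1 = 65 parameters.
rng = np.random.default_rng(0)
code = rng.random(8)                       # assumed workspace encoding
A = rng.random((65, 8)) * 0.1
b = rng.random(65) * 0.1
theta = hof_weights(code, A, b)
cost = c2g_network(np.array([0.3, -0.5]), theta)
```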

Learning to Track Dynamic Targets in Partially Known Environments

1 code implementation 17 Jun 2020 Heejin Jeong, Hamed Hassani, Manfred Morari, Daniel D. Lee, George J. Pappas

In particular, we introduce Active Tracking Target Network (ATTN), a unified RL policy that is capable of solving major sub-tasks of active target tracking -- in-sight tracking, navigation, and exploration.

Geodesic-HOF: 3D Reconstruction Without Cutting Corners

no code implementations 14 Jun 2020 Ziyun Wang, Eric A. Mitchell, Volkan Isler, Daniel D. Lee

To address this issue, we propose learning an image-conditioned mapping function from a canonical sampling domain to a high dimensional space where the Euclidean distance is equal to the geodesic distance on the object.

3D Object Reconstruction 3D Reconstruction +1

Surface HOF: Surface Reconstruction from a Single Image Using Higher Order Function Networks

no code implementations 18 Dec 2019 Ziyun Wang, Volkan Isler, Daniel D. Lee

Our approach is to learn a Higher Order Function (HOF) which takes an image of an object as input and generates a mapping function.

3D Reconstruction Image Reconstruction +1

Learning Q-network for Active Information Acquisition

2 code implementations 23 Oct 2019 Heejin Jeong, Brent Schlotfeldt, Hamed Hassani, Manfred Morari, Daniel D. Lee, George J. Pappas

In this paper, we propose a novel Reinforcement Learning approach for solving the Active Information Acquisition problem, which requires an agent to choose a sequence of actions in order to acquire information about a process of interest using on-board sensors.

reinforcement-learning

Higher Order Function Networks for View Planning and Multi-View Reconstruction

no code implementations 4 Oct 2019 Selim Engin, Eric Mitchell, Daewon Lee, Volkan Isler, Daniel D. Lee

In contrast to offline methods which require a 3D model of the object as input or online methods which rely on only local measurements, our method uses a neural network which encodes shape information for a large number of objects.

3D Reconstruction

Higher-Order Function Networks for Learning Composable 3D Object Representations

no code implementations ICLR 2020 Eric Mitchell, Selim Engin, Volkan Isler, Daniel D. Lee

We present a new approach to 3D object representation where a neural network encodes the geometry of an object directly into the weights and biases of a second 'mapping' network.

Motion Planning

Probabilistically Safe Corridors to Guide Sampling-Based Motion Planning

no code implementations 1 Jan 2019 Jinwook Huh, Omur Arslan, Daniel D. Lee

In this paper, we introduce a new probabilistically safe local steering primitive for sampling-based motion planning in complex high-dimensional configuration spaces.

Robotics

U-Net for MAV-based Penstock Inspection: an Investigation of Focal Loss in Multi-class Segmentation for Corrosion Identification

no code implementations 18 Sep 2018 Ty Nguyen, Tolga Ozaslan, Ian D. Miller, James Keller, Giuseppe Loianno, Camillo J. Taylor, Daniel D. Lee, Vijay Kumar, Joseph H. Harwood, Jennifer Wozencraft

Periodical inspection and maintenance of critical infrastructure such as dams, penstocks, and locks are of significant importance to prevent catastrophic failures.

Learning Optimal Resource Allocations in Wireless Systems

no code implementations 21 Jul 2018 Mark Eisen, Clark Zhang, Luiz F. O. Chamon, Daniel D. Lee, Alejandro Ribeiro

This paper considers the design of optimal resource allocation policies in wireless communication systems which are generically modeled as a functional optimization problem with stochastic constraints.

Nearest neighbor density functional estimation from inverse Laplace transform

1 code implementation 22 May 2018 J. Jon Ryu, Shouvik Ganguly, Young-Han Kim, Yung-Kyun Noh, Daniel D. Lee

A new approach to $L_2$-consistent estimation of a general density functional using $k$-nearest neighbor distances is proposed, where the functional under consideration is in the form of the expectation of some function $f$ of the densities at each point.
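A naive plug-in version of such an estimator is easy to sketch: estimate the density at each sample from its $k$-th nearest-neighbor distance, then average $f$ over those estimates. The paper's contribution is a bias correction derived via the inverse Laplace transform, which this sketch omits; the function below is the standard uncorrected construction, not the paper's estimator.

```python
import numpy as np
from math import gamma, pi

def knn_density(X, k=3):
    """Classical k-NN density estimate at each sample point:
    p_hat(x_i) = k / ((n - 1) * V_d * r_k(x_i)^d), where r_k is the
    distance to the k-th nearest neighbour and V_d the d-dimensional
    unit-ball volume."""
    n, d = X.shape
    Vd = pi ** (d / 2) / gamma(d / 2 + 1)
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    r_k = np.sort(D, axis=1)[:, k]  # index 0 is the zero self-distance
    return k / ((n - 1) * Vd * r_k ** d)

def plugin_functional(X, f, k=3):
    """Naive plug-in estimate of E[f(p(X))] from samples X."""
    return float(np.mean(f(knn_density(X, k=k))))

# Usage: for X uniform on [0, 1], the true density is 1, so the
# estimate of E[p(X)] with f = identity should be near 1 (the plug-in
# form carries a known bias, which the paper's correction removes).
rng = np.random.default_rng(0)
X = rng.random((500, 1))
est = plugin_functional(X, lambda p: p, k=5)
```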

Scalable Centralized Deep Multi-Agent Reinforcement Learning via Policy Gradients

no code implementations 22 May 2018 Arbaaz Khan, Clark Zhang, Daniel D. Lee, Vijay Kumar, Alejandro Ribeiro

When the number of agents increases, the dimensionality of the input and control spaces increase as well, and these methods do not scale well.

Distributed Optimization Multi-agent Reinforcement Learning +1

Assumed Density Filtering Q-learning

1 code implementation 9 Dec 2017 Heejin Jeong, Clark Zhang, George J. Pappas, Daniel D. Lee

We formulate an efficient closed-form solution for the value update by approximately estimating analytic parameters of the posterior of the Q-beliefs.

Atari Games Bayesian Inference +1

Classification and Geometry of General Perceptual Manifolds

no code implementations 17 Oct 2017 SueYeon Chung, Daniel D. Lee, Haim Sompolinsky

The effects of label sparsity on the classification capacity of manifolds are elucidated, revealing a scaling relation between label sparsity and manifold radius.

Classification General Classification +1

Memory Augmented Control Networks

no code implementations ICLR 2018 Arbaaz Khan, Clark Zhang, Nikolay Atanasov, Konstantinos Karydis, Vijay Kumar, Daniel D. Lee

The third part uses a network controller that learns to store those specific instances of past information that are necessary for planning.

Learning Data Manifolds with a Cutting Plane Method

no code implementations 28 May 2017 SueYeon Chung, Uri Cohen, Haim Sompolinsky, Daniel D. Lee

We consider the problem of classifying data manifolds where each manifold represents invariances that are parameterized by continuous degrees of freedom.

Data Augmentation

Neural Network Memory Architectures for Autonomous Robot Navigation

no code implementations 23 May 2017 Steven W. Chen, Nikolay Atanasov, Arbaaz Khan, Konstantinos Karydis, Daniel D. Lee, Vijay Kumar

This work is a first thorough study of memory structures for deep-neural-network-based robot navigation, and offers novel tools to train such networks from supervision and quantify their ability to generalize to unseen scenarios.

Robot Navigation

Efficient Neural Codes under Metabolic Constraints

no code implementations NeurIPS 2016 Zhuo Wang, Xue-Xin Wei, Alan A. Stocker, Daniel D. Lee

The advantage could be as large as one-fold, substantially larger than the previous estimation.

Linear Readout of Object Manifolds

no code implementations 6 Dec 2015 SueYeon Chung, Daniel D. Lee, Haim Sompolinsky

Objects are represented in sensory systems by continuous manifolds due to sensitivity of neuronal responses to changes in physical features such as location, orientation, and intensity.

Belief Flows of Robust Online Learning

no code implementations 26 May 2015 Pedro A. Ortega, Koby Crammer, Daniel D. Lee

This paper introduces a new probabilistic model for online learning which dynamically incorporates information from stochastic gradients of an arbitrary loss function.

General Classification online learning

An Adversarial Interpretation of Information-Theoretic Bounded Rationality

no code implementations 22 Apr 2014 Pedro A. Ortega, Daniel D. Lee

Here, we show that a single-agent free energy optimization is equivalent to a game between the agent and an imaginary adversary.

Decision Making

Optimal Neural Population Codes for High-dimensional Stimulus Variables

no code implementations NeurIPS 2013 Zhuo Wang, Alan A. Stocker, Daniel D. Lee

We consider solutions for a minimal case where the number of neurons in the population is equal to the number of stimulus dimensions (diffeomorphic).

Optimal Neural Tuning Curves for Arbitrary Stimulus Distributions: Discrimax, Infomax and Minimum L_p Loss

no code implementations NeurIPS 2012 Zhuo Wang, Alan A. Stocker, Daniel D. Lee

In this manner, we show how the optimal tuning curve depends upon the loss function, and the equivalence of maximizing mutual information with minimizing $L_p$ loss in the limit as $p$ goes to zero.

Diffusion Decision Making for Adaptive k-Nearest Neighbor Classification

no code implementations NeurIPS 2012 Yung-Kyun Noh, Frank Park, Daniel D. Lee

This paper sheds light on some fundamental connections of the diffusion decision making model of neuroscience and cognitive psychology with k-nearest neighbor classification.

Classification Decision Making +1

Learning via Gaussian Herding

no code implementations NeurIPS 2010 Koby Crammer, Daniel D. Lee

We introduce a new family of online learning algorithms based upon constraining the velocity flow over a distribution of weight vectors.

online learning

Extended Grassmann Kernels for Subspace-Based Learning

no code implementations NeurIPS 2008 Jihun Hamm, Daniel D. Lee

Subspace-based learning problems involve data whose elements are linear subspaces of a vector space.

General Classification

Algorithms for Non-negative Matrix Factorization

no code implementations NeurIPS 2000 Daniel D. Lee, H. Sebastian Seung

Non-negative matrix factorization (NMF) has previously been shown to be a useful decomposition for multivariate data.
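This paper introduced the multiplicative update rules that alternately scale W and H while keeping both non-negative. A minimal sketch of the Euclidean-objective variant (initialization scheme, iteration count, and the small epsilon guard are arbitrary choices here, not prescribed by the paper):

```python
import numpy as np

def nmf_multiplicative(V, rank, n_iter=200, eps=1e-9, seed=0):
    """Factor V ~ W @ H with non-negative W, H using the Lee-Seung
    multiplicative updates, which monotonically decrease the Euclidean
    error ||V - WH||^2. Non-negativity is preserved automatically
    because each update multiplies by a ratio of non-negative terms."""
    rng = np.random.default_rng(seed)
    n, m = V.shape
    W = rng.random((n, rank)) + eps
    H = rng.random((rank, m)) + eps
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H

# Usage: recover a factorization of an exactly rank-2 non-negative matrix.
rng = np.random.default_rng(1)
V = rng.random((6, 2)) @ rng.random((2, 5))
W, H = nmf_multiplicative(V, 2, n_iter=1000)
```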

A Neural Network Based Head Tracking System

no code implementations NeurIPS 1997 Daniel D. Lee, H. S. Seung

We have constructed an inexpensive video based motorized tracking system that learns to track a head.
