Search Results for author: Animashree Anandkumar

Found 59 papers, 18 papers with code

FocalFormer3D: Focusing on Hard Instance for 3D Object Detection

1 code implementation • 8 Aug 2023 • Yilun Chen, Zhiding Yu, Yukang Chen, Shiyi Lan, Animashree Anandkumar, Jiaya Jia, Jose Alvarez

For 3D object detection, we instantiate this method as FocalFormer3D, a simple yet effective detector that excels at excavating difficult objects and improving prediction recall.

3D Object Detection • Autonomous Driving +2

Differentially Private Video Activity Recognition

no code implementations • 27 Jun 2023 • Zelun Luo, Yuliang Zou, Yijin Yang, Zane Durante, De-An Huang, Zhiding Yu, Chaowei Xiao, Li Fei-Fei, Animashree Anandkumar

In recent years, differential privacy has seen significant advancements in image classification; however, its application to video activity recognition remains under-explored.

Activity Recognition • Classification +2

PeRFception: Perception using Radiance Fields

1 code implementation • 24 Aug 2022 • Yoonwoo Jeong, Seungjoo Shin, Junha Lee, Christopher Choy, Animashree Anandkumar, Minsu Cho, Jaesik Park

The recent progress in implicit 3D representation, i.e., Neural Radiance Fields (NeRFs), has made accurate and photorealistic 3D reconstruction possible in a differentiable manner.

3D Reconstruction • Segmentation

Neural Scene Representation for Locomotion on Structured Terrain

no code implementations • 16 Jun 2022 • David Hoeller, Nikita Rudin, Christopher Choy, Animashree Anandkumar, Marco Hutter

We propose a learning-based method to reconstruct the local terrain for locomotion with a mobile robot traversing urban environments.

3D Reconstruction

Quantification of Robotic Surgeries with Vision-Based Deep Learning

no code implementations • 6 May 2022 • Dani Kiyasseh, Runzhuo Ma, Taseen F. Haque, Jessica Nguyen, Christian Wagner, Animashree Anandkumar, Andrew J. Hung

We believe this is a prerequisite for the provision of surgical feedback and modulation of surgeon performance in pursuit of improved patient outcomes.

Navigate • Skills Assessment +1

Reinforcement Learning in Factored Action Spaces using Tensor Decompositions

no code implementations • 27 Oct 2021 • Anuj Mahajan, Mikayel Samvelyan, Lei Mao, Viktor Makoviychuk, Animesh Garg, Jean Kossaifi, Shimon Whiteson, Yuke Zhu, Animashree Anandkumar

We present an extended abstract for the previously published work TESSERACT [Mahajan et al., 2021], which proposes a novel solution for Reinforcement Learning (RL) in large, factored action spaces using tensor decompositions.

Multi-agent Reinforcement Learning • reinforcement-learning +1

Self-Calibrating Neural Radiance Fields

1 code implementation • ICCV 2021 • Yoonwoo Jeong, Seokjun Ahn, Christopher Choy, Animashree Anandkumar, Minsu Cho, Jaesik Park

We also propose a new geometric loss function, viz., projected ray distance loss, to incorporate geometric consistency for complex non-linear camera models.

Tesseract: Tensorised Actors for Multi-Agent Reinforcement Learning

no code implementations • 31 May 2021 • Anuj Mahajan, Mikayel Samvelyan, Lei Mao, Viktor Makoviychuk, Animesh Garg, Jean Kossaifi, Shimon Whiteson, Yuke Zhu, Animashree Anandkumar

Algorithms derived from Tesseract decompose the Q-tensor across agents and utilise low-rank tensor approximations to model agent interactions relevant to the task.

Learning Theory • Multi-agent Reinforcement Learning +3
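
To make the low-rank idea in the Tesseract entry above concrete, here is a minimal sketch of representing a joint action-value with per-agent CP factors for a fixed state. This is an illustration only, not the paper's implementation; all names, shapes, and the rank are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_agents, n_actions, rank = 3, 5, 4

# One factor matrix per agent: rows index that agent's actions, columns index rank components.
factors = [rng.normal(size=(n_actions, rank)) for _ in range(n_agents)]

def q_joint(joint_action):
    """Low-rank (CP) estimate of Q(s, a_1, ..., a_n) for a fixed state:
    sum over rank components of the product of per-agent factor entries."""
    comps = np.ones(rank)
    for agent, a in enumerate(joint_action):
        comps = comps * factors[agent][a]
    return comps.sum()

# The full joint Q-tensor would need n_actions ** n_agents entries; the CP form
# stores only n_agents * n_actions * rank numbers.
print(q_joint((0, 2, 4)))
```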

Coach-Player Multi-Agent Reinforcement Learning for Dynamic Team Composition

1 code implementation • 18 May 2021 • Bo Liu, Qiang Liu, Peter Stone, Animesh Garg, Yuke Zhu, Animashree Anandkumar

Specifically, we 1) adopt the attention mechanism for both the coach and the players; 2) propose a variational objective to regularize learning; and 3) design an adaptive communication method to let the coach decide when to communicate with the players.

Multi-agent Reinforcement Learning • reinforcement-learning +3
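
As a rough illustration of point 1) in the coach-player entry above, the snippet below shows a generic scaled dot-product attention step in which a coach turns per-player state summaries into one strategy vector per player. This is a hypothetical sketch, not the paper's architecture; all names and shapes are ours.

```python
import numpy as np

def scaled_dot_attention(q, k, v):
    """Standard scaled dot-product attention: softmax(q k^T / sqrt(d)) v."""
    scores = q @ k.T / np.sqrt(k.shape[-1])
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))
    w /= w.sum(axis=-1, keepdims=True)
    return w @ v

rng = np.random.default_rng(0)
n_players, d = 4, 8
player_emb = rng.normal(size=(n_players, d))   # per-player state summaries seen by the coach
coach_query = rng.normal(size=(n_players, d))  # one query per player slot
strategies = scaled_dot_attention(coach_query, player_emb, player_emb)
print(strategies.shape)  # (4, 8): one strategy vector the coach could communicate to each player
```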

Emergent Hand Morphology and Control from Optimizing Robust Grasps of Diverse Objects

no code implementations • 22 Dec 2020 • Xinlei Pan, Animesh Garg, Animashree Anandkumar, Yuke Zhu

Through experimentation and comparative study, we demonstrate the effectiveness of our approach in discovering robust and cost-efficient hand morphologies for grasping novel objects.

Bayesian Optimization • MORPH

Fast Uncertainty Quantification for Deep Object Pose Estimation

no code implementations • 16 Nov 2020 • Guanya Shi, Yifeng Zhu, Jonathan Tremblay, Stan Birchfield, Fabio Ramos, Animashree Anandkumar, Yuke Zhu

Deep learning-based object pose estimators are often unreliable and overconfident especially when the input image is outside the training domain, for instance, with sim2real transfer.

Object Pose Estimation +1

Multi-task learning for electronic structure to predict and explore molecular potential energy surfaces

no code implementations • 5 Nov 2020 • Zhuoran Qiao, Feizhi Ding, Matthew Welborn, Peter J. Bygrave, Daniel G. A. Smith, Animashree Anandkumar, Frederick R. Manby, Thomas F. Miller III

We refine the OrbNet model to accurately predict energy, forces, and other response properties for molecules using a graph neural-network architecture based on features from low-cost approximated quantum operators in the symmetry-adapted atomic orbital basis.

Multi-Task Learning

Learning a Contact-Adaptive Controller for Robust, Efficient Legged Locomotion

no code implementations • 21 Sep 2020 • Xingye Da, Zhaoming Xie, David Hoeller, Byron Boots, Animashree Anandkumar, Yuke Zhu, Buck Babich, Animesh Garg

We present a hierarchical framework that combines model-based control and reinforcement learning (RL) to synthesize robust controllers for a quadruped (the Unitree Laikago).

reinforcement-learning • Reinforcement Learning (RL)

Deep learning-based computer vision to recognize and classify suturing gestures in robot-assisted surgery

no code implementations • 26 Aug 2020 • Francisco Luongo, Ryan Hakim, Jessica H. Nguyen, Animashree Anandkumar, Andrew J Hung

For both gesture identification and classification datasets, we observed no effect of recurrent classification model choice (LSTM vs. convLSTM) on performance.

Classification • General Classification +1

Active Learning under Label Shift

no code implementations • 16 Jul 2020 • Eric Zhao, Anqi Liu, Animashree Anandkumar, Yisong Yue

We address the problem of active learning under label shift: when the class proportions of source and target domains differ.

Active Learning

OrbNet: Deep Learning for Quantum Chemistry Using Symmetry-Adapted Atomic-Orbital Features

no code implementations • 15 Jul 2020 • Zhuoran Qiao, Matthew Welborn, Animashree Anandkumar, Frederick R. Manby, Thomas F. Miller III

We introduce a machine learning method in which energy solutions from the Schrödinger equation are predicted using symmetry-adapted atomic-orbital features and a graph neural-network architecture.

BIG-bench Machine Learning

Causal Discovery in Physical Systems from Videos

1 code implementation • NeurIPS 2020 • Yunzhu Li, Antonio Torralba, Animashree Anandkumar, Dieter Fox, Animesh Garg

We assume access to different configurations and environmental conditions, i.e., data from unknown interventions on the underlying system; thus, we can hope to discover the correct underlying causal graph without explicit interventions.

Causal Discovery • counterfactual

Convolutional Tensor-Train LSTM for Spatio-temporal Learning

2 code implementations • NeurIPS 2020 • Jiahao Su, Wonmin Byeon, Jean Kossaifi, Furong Huang, Jan Kautz, Animashree Anandkumar

Learning from spatio-temporal data has numerous applications such as human-behavior analysis, object tracking, video compression, and physics simulation. However, existing methods still perform poorly on challenging video tasks such as long-term forecasting.

Ranked #1 on Video Prediction on KTH (Cond metric)

Activity Recognition • Video Compression +1

Compositional Generalization with Tree Stack Memory Units

3 code implementations • 5 Nov 2019 • Forough Arabshahi, Zhichu Lu, Pranay Mundra, Sameer Singh, Animashree Anandkumar

We study compositional generalization, viz., the problem of zero-shot generalization to novel compositions of concepts in a domain.

Mathematical Reasoning • Zero-shot Generalization

Convolutional Tensor-Train LSTM for Long-Term Video Prediction

no code implementations • 25 Sep 2019 • Jiahao Su, Wonmin Byeon, Furong Huang, Jan Kautz, Animashree Anandkumar

Long-term video prediction is highly challenging since it entails simultaneously capturing spatial and temporal information across a long range of image frames. Standard recurrent models are ineffective since they are prone to error propagation and cannot effectively capture higher-order correlations.

Video Prediction

Regularized Learning for Domain Adaptation under Label Shifts

2 code implementations • ICLR 2019 • Kamyar Azizzadenesheli, Anqi Liu, Fanny Yang, Animashree Anandkumar

We derive a generalization bound for the classifier on the target domain which is independent of the (ambient) data dimensions, and instead only depends on the complexity of the function class.

Domain Adaptation
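
For background on the entry above, the basic black-box label-shift estimator that this line of work builds on solves $C w \approx \mu_t$ for the importance weights, where $C$ is a fixed classifier's confusion matrix on the source domain and $\mu_t$ is its prediction distribution on the target domain. The sketch below shows that plain estimator without the paper's regularization; the function and variable names are ours.

```python
import numpy as np

def estimate_label_shift_weights(y_true_src, y_pred_src, y_pred_tgt, n_classes):
    """Estimate w[y] = p_target(y) / p_source(y) from a black-box classifier.
    C[i, j] = P_source(prediction = i, true label = j); mu_t[i] = P_target(prediction = i).
    Under pure label shift, C @ w = mu_t."""
    C = np.zeros((n_classes, n_classes))
    for t, p in zip(y_true_src, y_pred_src):
        C[p, t] += 1.0
    C /= len(y_true_src)
    mu_t = np.bincount(y_pred_tgt, minlength=n_classes) / len(y_pred_tgt)
    w, *_ = np.linalg.lstsq(C, mu_t, rcond=None)
    return np.clip(w, 0.0, None)  # importance weights must be non-negative

# Toy check: class frequencies are swapped between source and target.
rng = np.random.default_rng(0)
y_src = rng.choice(2, size=5000, p=[0.8, 0.2])
y_tgt = rng.choice(2, size=5000, p=[0.2, 0.8])
flip = lambda y: np.where(rng.random(len(y)) < 0.9, y, 1 - y)  # a 90%-accurate mock classifier
w = estimate_label_shift_weights(y_src, flip(y_src), flip(y_tgt), n_classes=2)
print(w)  # roughly [0.25, 4.0]
```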

Neural Lander: Stable Drone Landing Control using Learned Dynamics

2 code implementations • 19 Nov 2018 • Guanya Shi, Xichen Shi, Michael O'Connell, Rose Yu, Kamyar Azizzadenesheli, Animashree Anandkumar, Yisong Yue, Soon-Jo Chung

To the best of our knowledge, this is the first DNN-based nonlinear feedback controller with stability guarantees that can utilize arbitrarily large neural nets.
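
As we understand the entry above, the stability guarantee relies on bounding the Lipschitz constant of the learned dynamics network, for which spectral normalization of each weight matrix is the standard tool. Below is a generic power-iteration sketch of spectral normalization, our illustration rather than the authors' code.

```python
import numpy as np

def spectral_normalize(W, n_iters=50, target=1.0):
    """Scale W so its largest singular value is at most `target`,
    estimating that singular value with power iteration."""
    v = np.random.default_rng(0).normal(size=W.shape[1])
    v /= np.linalg.norm(v)
    for _ in range(n_iters):
        u = W @ v
        u /= np.linalg.norm(u)
        v = W.T @ u
        v /= np.linalg.norm(v)
    sigma = u @ W @ v  # estimate of the top singular value
    return W * min(1.0, target / sigma)

W = np.random.default_rng(1).normal(size=(64, 32))
W_sn = spectral_normalize(W)
# Top singular value before and after normalization:
print(np.linalg.svd(W, compute_uv=False)[0], np.linalg.svd(W_sn, compute_uv=False)[0])
```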

Policy Gradient in Partially Observable Environments: Approximation and Convergence

no code implementations • 18 Oct 2018 • Kamyar Azizzadenesheli, Yisong Yue, Animashree Anandkumar

Deploying these tools, we generalize a variety of existing theoretical guarantees, such as policy gradient and convergence theorems, to partially observable domains; these results could also be carried over to further settings of interest.

Decision Making • Policy Gradient Methods

Surprising Negative Results for Generative Adversarial Tree Search

3 code implementations • ICLR 2019 • Kamyar Azizzadenesheli, Brandon Yang, Weitang Liu, Zachary C. Lipton, Animashree Anandkumar

We deploy this model and propose generative adversarial tree search (GATS), a deep RL algorithm that learns the environment model and implements Monte Carlo tree search (MCTS) on the learned model for planning.

Atari Games • Reinforcement Learning (RL)

Question Type Guided Attention in Visual Question Answering

no code implementations • ECCV 2018 • Yang Shi, Tommaso Furlanello, Sheng Zha, Animashree Anandkumar

Visual Question Answering (VQA) requires integrating feature maps with drastically different structures and focusing on the correct regions.

Activity Recognition • Question Answering +2

Efficient Exploration through Bayesian Deep Q-Networks

1 code implementation • ICLR 2018 • Kamyar Azizzadenesheli, Animashree Anandkumar

This allows us to directly incorporate the uncertainty over the Q-function and deploy Thompson sampling on the learned posterior distribution resulting in efficient exploration/exploitation trade-off.

Atari Games • Efficient Exploration +3
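
A minimal sketch of the Thompson-sampling idea from the entry above, applied to a Bayesian linear "last layer" over Q-values: keep a Gaussian posterior over per-action weights, sample weights each step, and act greedily under the sample. This is our simplification for illustration, not the paper's BDQN; the class name, priors, and update rule are assumptions.

```python
import numpy as np

class BayesianLinearQ:
    """Gaussian posterior over last-layer weights of Q(s, a) = w_a . phi(s)."""
    def __init__(self, n_actions, dim, noise_var=1.0, prior_var=1.0):
        self.noise_var = noise_var
        self.precisions = [np.eye(dim) / prior_var for _ in range(n_actions)]
        self.xty = [np.zeros(dim) for _ in range(n_actions)]

    def update(self, phi, action, target):
        # Standard Bayesian linear regression update for the chosen action's weights.
        self.precisions[action] += np.outer(phi, phi) / self.noise_var
        self.xty[action] += phi * target / self.noise_var

    def sample_action(self, phi, rng):
        q_samples = []
        for P, b in zip(self.precisions, self.xty):
            cov = np.linalg.inv(P)
            w = rng.multivariate_normal(cov @ b, cov)  # Thompson sample of the weights
            q_samples.append(w @ phi)
        return int(np.argmax(q_samples))  # act greedily under the sampled Q-values

rng = np.random.default_rng(0)
agent = BayesianLinearQ(n_actions=3, dim=4)
phi = rng.normal(size=4)            # stand-in for the deep network's state features
a = agent.sample_action(phi, rng)
agent.update(phi, a, target=1.0)    # target would be r + gamma * max_a' Q(s', a') in practice
```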

Combining Symbolic Expressions and Black-box Function Evaluations in Neural Programs

1 code implementation • ICLR 2018 • Forough Arabshahi, Sameer Singh, Animashree Anandkumar

This is because they mostly rely either on black-box function evaluations that do not capture the structure of the program, or on detailed execution traces that are expensive to obtain, and hence the training data has poor coverage of the domain under consideration.

valid

Deep Active Learning for Named Entity Recognition

2 code implementations • WS 2017 • Yanyao Shen, Hyokun Yun, Zachary C. Lipton, Yakov Kronrod, Animashree Anandkumar

In this work, we demonstrate that the amount of labeled training data can be drastically reduced when deep learning is combined with active learning.

Active Learning • named-entity-recognition +3

Experimental results: Reinforcement Learning of POMDPs using Spectral Methods

no code implementations • 7 May 2017 • Kamyar Azizzadenesheli, Alessandro Lazaric, Animashree Anandkumar

We propose a new reinforcement learning algorithm for partially observable Markov decision processes (POMDP) based on spectral decomposition methods.

reinforcement-learning • Reinforcement Learning (RL)

Unsupervised learning of transcriptional regulatory networks via latent tree graphical models

no code implementations • 20 Sep 2016 • Anthony Gitter, Furong Huang, Ragupathyraj Valluvan, Ernest Fraenkel, Animashree Anandkumar

We use a latent tree graphical model to analyze gene expression without relying on transcription factor expression as a proxy for regulator activity.

Open Problem: Approximate Planning of POMDPs in the class of Memoryless Policies

no code implementations • 17 Aug 2016 • Kamyar Azizzadenesheli, Alessandro Lazaric, Animashree Anandkumar

Generally in RL, one can assume a generative model, e.g., graphical models, for the environment, and then the task for the RL agent is to learn the model parameters and find the optimal strategy based on these learnt parameters.

Decision Making • Reinforcement Learning (RL)

Online and Differentially-Private Tensor Decomposition

no code implementations • NeurIPS 2016 • Yining Wang, Animashree Anandkumar

In this paper, we resolve many of the key algorithmic questions regarding robustness, memory efficiency, and differential privacy of tensor decomposition.

Tensor Decomposition

Unsupervised Learning of Word-Sequence Representations from Scratch via Convolutional Tensor Decomposition

no code implementations • 10 Jun 2016 • Furong Huang, Animashree Anandkumar

More importantly, it is challenging for pre-trained models to obtain word-sequence embeddings that are universally good for all downstream tasks or for any new datasets.

Dictionary Learning • Sentence +1

Spectral Methods for Correlated Topic Models

no code implementations • 30 May 2016 • Forough Arabshahi, Animashree Anandkumar

Normalized Infinitely Divisible (NID) distributions are generated by normalizing a family of independent Infinitely Divisible (ID) random variables.

Topic Models
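
For intuition about the NID construction above, one classical member of this family is the Dirichlet distribution, obtained by normalizing independent Gamma variables (which are infinitely divisible). The snippet below is our illustration, not code from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
alpha = np.array([0.5, 1.0, 2.0])

# Independent Gamma(alpha_k, 1) variables are infinitely divisible;
# normalizing them to sum to one yields Dirichlet(alpha) topic proportions.
g = rng.gamma(shape=alpha, scale=1.0, size=(10000, 3))
theta = g / g.sum(axis=1, keepdims=True)
print(theta.mean(axis=0))  # close to alpha / alpha.sum() = [1/7, 2/7, 4/7]
```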

Reinforcement Learning of POMDPs using Spectral Methods

no code implementations • 25 Feb 2016 • Kamyar Azizzadenesheli, Alessandro Lazaric, Animashree Anandkumar

We propose a new reinforcement learning algorithm for partially observable Markov decision processes (POMDP) based on spectral decomposition methods.

reinforcement-learning • Reinforcement Learning (RL)

Discovering Neuronal Cell Types and Their Gene Expression Profiles Using a Spatial Point Process Mixture Model

no code implementations • 4 Feb 2016 • Furong Huang, Animashree Anandkumar, Christian Borgs, Jennifer Chayes, Ernest Fraenkel, Michael Hawrylycz, Ed Lein, Alessandro Ingrosso, Srinivas Turaga

Single-cell RNA sequencing can now be used to measure the gene expression profiles of individual neurons and to categorize neurons based on their gene expression profiles.

Fast and Guaranteed Tensor Decomposition via Sketching

no code implementations • NeurIPS 2015 • Yining Wang, Hsiao-Yu Tung, Alexander Smola, Animashree Anandkumar

Such tensor contractions are encountered in decomposition methods such as tensor power iterations and alternating least squares.

Tensor Decomposition
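
For context on the tensor power iterations mentioned in the entry above, here is a plain, dense (unsketched) version of the update on a symmetric third-order tensor. The paper's contribution is performing such contractions with sketches; this toy sketch does not implement that, and all names are illustrative.

```python
import numpy as np

def tensor_power_iteration(T, n_iters=100, seed=0):
    """One component of a symmetric CP decomposition of a 3rd-order tensor T:
    repeatedly contract T with the current vector along two modes and renormalize."""
    rng = np.random.default_rng(seed)
    v = rng.normal(size=T.shape[0])
    v /= np.linalg.norm(v)
    for _ in range(n_iters):
        u = np.einsum('ijk,j,k->i', T, v, v)  # the contraction T(I, v, v)
        v = u / np.linalg.norm(u)
    lam = np.einsum('ijk,i,j,k->', T, v, v, v)  # eigenvalue T(v, v, v)
    return lam, v

# Toy rank-1 symmetric tensor: T = 2 * a (x) a (x) a
a = np.array([3.0, 4.0]) / 5.0
T = 2.0 * np.einsum('i,j,k->ijk', a, a, a)
lam, v = tensor_power_iteration(T)
print(lam, v)  # approximately 2.0 and +/- a
```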

Convolutional Dictionary Learning through Tensor Factorization

no code implementations • 10 Jun 2015 • Furong Huang, Animashree Anandkumar

Tensor methods have emerged as a powerful paradigm for consistent learning of many latent variable models such as topic models, independent component analysis and dictionary learning.

Dictionary Learning • Tensor Decomposition +1

Non-convex Robust PCA

no code implementations • NeurIPS 2014 • Praneeth Netrapalli, U. N. Niranjan, Sujay Sanghavi, Animashree Anandkumar, Prateek Jain

In contrast, existing methods for robust PCA, which are based on convex optimization, have $O(m^2n)$ complexity per iteration, and take $O(1/\epsilon)$ iterations, i.e., exponentially more iterations for the same accuracy.
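
A heavily simplified sketch of the non-convex alternative: alternate a truncated-SVD projection for the low-rank part with hard thresholding for the sparse part. This uses a fixed rank and threshold, unlike the staged schedule of the actual algorithm, and the names and toy problem are our assumptions.

```python
import numpy as np

def robust_pca_altproj(M, rank, thresh, n_iters=30):
    """Split M into L + S with L low-rank and S sparse by alternating projections."""
    S = np.zeros_like(M)
    for _ in range(n_iters):
        # Project M - S onto rank-`rank` matrices via truncated SVD.
        U, sig, Vt = np.linalg.svd(M - S, full_matrices=False)
        L = (U[:, :rank] * sig[:rank]) @ Vt[:rank]
        # Project M - L onto sparse matrices via hard thresholding.
        R = M - L
        S = np.where(np.abs(R) > thresh, R, 0.0)
    return L, S

rng = np.random.default_rng(0)
L_true = rng.normal(size=(50, 3)) @ rng.normal(size=(3, 40))
S_true = np.where(rng.random((50, 40)) < 0.05, 10.0, 0.0)  # a few large corruptions
L, S = robust_pca_altproj(L_true + S_true, rank=3, thresh=5.0)
# Relative error of the recovered low-rank part; should be small on this easy toy problem.
print(np.linalg.norm(L - L_true) / np.linalg.norm(L_true))
```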

Sample Complexity Analysis for Learning Overcomplete Latent Variable Models through Tensor Methods

no code implementations • 3 Aug 2014 • Animashree Anandkumar, Rong Ge, Majid Janzamin

In the unsupervised setting, we use a simple initialization algorithm based on SVD of the tensor slices, and provide guarantees under the stricter condition that $k \le \beta d$ (where the constant $\beta$ can be larger than $1$), under which the tensor method recovers the components in polynomial running time (though exponential in $\beta$).

Tensor Decomposition
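
The "SVD of the tensor slices" initialization mentioned above can be pictured roughly as follows: contract the third mode with random vectors and take the top singular vector of each resulting matrix as a candidate starting point. This is a toy, hypothetical rendering, not the paper's procedure or guarantees.

```python
import numpy as np

def svd_slice_init(T, n_trials=10, seed=0):
    """Candidate initializations for tensor power iteration: contract the third
    mode with random Gaussian vectors and take the top left singular vector
    of each resulting weighted slice."""
    rng = np.random.default_rng(seed)
    inits = []
    for _ in range(n_trials):
        theta = rng.normal(size=T.shape[2])
        M = np.einsum('ijk,k->ij', T, theta)   # random weighted slice T(I, I, theta)
        U, _, _ = np.linalg.svd(M)
        inits.append(U[:, 0])
    return inits

# Toy tensor with two orthogonal components.
a = np.array([1.0, 0.0, 0.0])
b = np.array([0.0, 1.0, 0.0])
T = np.einsum('i,j,k->ijk', a, a, a) + np.einsum('i,j,k->ijk', b, b, b)
for v in svd_slice_init(T, n_trials=3):
    print(np.round(v, 2))  # each candidate is close to +/- a or +/- b
```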

Guaranteed Non-Orthogonal Tensor Decomposition via Alternating Rank-$1$ Updates

no code implementations • 21 Feb 2014 • Animashree Anandkumar, Rong Ge, Majid Janzamin

In this paper, we provide local and global convergence guarantees for recovering CP (Candecomp/Parafac) tensor decomposition.

Tensor Decomposition

Nonparametric Estimation of Multi-View Latent Variable Models

no code implementations • 13 Nov 2013 • Le Song, Animashree Anandkumar, Bo Dai, Bo Xie

We establish that the sample complexity for the proposed method is quadratic in the number of latent components and is a low order polynomial in the other relevant parameters.

Learning Sparsely Used Overcomplete Dictionaries via Alternating Minimization

no code implementations • 30 Oct 2013 • Alekh Agarwal, Animashree Anandkumar, Prateek Jain, Praneeth Netrapalli

Alternating minimization is a popular heuristic for sparse coding, where the dictionary and the coefficients are estimated in alternate steps, keeping the other fixed.
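
A minimal sketch of the alternation described above, with a crude hard-thresholded correlation step standing in for the sparse-coding solver. This is illustrative only: the function, the coding step, and the toy data are our assumptions, not the paper's algorithm or its guarantees.

```python
import numpy as np

def alt_min_dictionary_learning(Y, n_atoms, sparsity, n_iters=20, seed=0):
    """Alternate between sparse coding (dictionary fixed) and a least-squares
    dictionary update (codes fixed), renormalizing dictionary columns each round."""
    rng = np.random.default_rng(seed)
    d, n = Y.shape
    A = rng.normal(size=(d, n_atoms))
    A /= np.linalg.norm(A, axis=0)
    for _ in range(n_iters):
        # Sparse coding step: keep the `sparsity` largest correlations per sample.
        C = A.T @ Y
        keep = np.argsort(-np.abs(C), axis=0)[:sparsity]
        X = np.zeros_like(C)
        np.put_along_axis(X, keep, np.take_along_axis(C, keep, axis=0), axis=0)
        # Dictionary update step: least-squares fit, then renormalize columns.
        A = Y @ np.linalg.pinv(X)
        A /= np.linalg.norm(A, axis=0) + 1e-12
    return A, X

Y = np.random.default_rng(1).normal(size=(16, 200))
A, X = alt_min_dictionary_learning(Y, n_atoms=32, sparsity=3)
print(np.linalg.norm(Y - A @ X) / np.linalg.norm(Y))  # relative reconstruction error
```

In practice the coding step would be a proper lasso or OMP solve and the analysis requires a careful initialization; the point here is only the alternate-and-renormalize structure.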

A Clustering Approach to Learn Sparsely-Used Overcomplete Dictionaries

no code implementations • 8 Sep 2013 • Alekh Agarwal, Animashree Anandkumar, Praneeth Netrapalli

We consider the problem of learning overcomplete dictionaries in the context of sparse coding, where each sample selects a sparse subset of dictionary elements.

Clustering • regression

Online Tensor Methods for Learning Latent Variable Models

1 code implementation • 3 Sep 2013 • Furong Huang, U. N. Niranjan, Mohammad Umar Hakeem, Animashree Anandkumar

We introduce an online tensor-decomposition-based approach for two latent variable modeling problems, namely (1) community detection, in which we learn the latent communities that the social actors in social networks belong to, and (2) topic modeling, in which we infer hidden topics of text articles.

Community Detection • Computational Efficiency +1

When are Overcomplete Topic Models Identifiable? Uniqueness of Tensor Tucker Decompositions with Structured Sparsity

no code implementations • NeurIPS 2013 • Animashree Anandkumar, Daniel Hsu, Majid Janzamin, Sham Kakade

This set of higher-order expansion conditions allows for overcomplete models and requires the existence of a perfect matching from latent topics to higher-order observed words.

Topic Models

High-Dimensional Covariance Decomposition into Sparse Markov and Independence Models

no code implementations • 5 Nov 2012 • Majid Janzamin, Animashree Anandkumar

Fitting high-dimensional data involves a delicate tradeoff between faithful representation and the use of sparse models.

Vocal Bursts Intensity Prediction

Learning Topic Models and Latent Bayesian Networks Under Expansion Constraints

no code implementations • 24 Sep 2012 • Animashree Anandkumar, Daniel Hsu, Adel Javanmard, Sham M. Kakade

The sufficient conditions for identifiability of these models are primarily based on weak expansion constraints on the topic-word matrix, for topic models, and on the directed acyclic graph, for Bayesian networks.

Topic Models

Learning loopy graphical models with latent variables: Efficient methods and guarantees

no code implementations • 17 Mar 2012 • Animashree Anandkumar, Ragupathyraj Valluvan

The problem of structure estimation in graphical models with latent variables is considered.

A Method of Moments for Mixture Models and Hidden Markov Models

1 code implementation • 3 Mar 2012 • Animashree Anandkumar, Daniel Hsu, Sham M. Kakade

Mixture models are a fundamental tool in applied statistics and machine learning for treating data taken from multiple subpopulations.

Spectral Methods for Learning Multivariate Latent Tree Structure

no code implementations • NeurIPS 2011 • Animashree Anandkumar, Kamalika Chaudhuri, Daniel J. Hsu, Sham M. Kakade, Le Song, Tong Zhang

The setting is one where we only have samples from certain observed variables in the tree, and our goal is to estimate the tree structure (i.e., the graph of how the underlying hidden variables are connected to each other and to the observed variables).
