1 code implementation • 8 Aug 2023 • Yilun Chen, Zhiding Yu, Yukang Chen, Shiyi Lan, Animashree Anandkumar, Jiaya Jia, Jose Alvarez
For 3D object detection, we instantiate this method as FocalFormer3D, a simple yet effective detector that excels at excavating difficult objects and improving prediction recall.
Ranked #8 on 3D Object Detection on nuScenes
no code implementations • 27 Jun 2023 • Zelun Luo, Yuliang Zou, Yijin Yang, Zane Durante, De-An Huang, Zhiding Yu, Chaowei Xiao, Li Fei-Fei, Animashree Anandkumar
In recent years, differential privacy has seen significant advancements in image classification; however, its application to video activity recognition remains under-explored.
1 code implementation • 24 Aug 2022 • Yoonwoo Jeong, Seungjoo Shin, Junha Lee, Christopher Choy, Animashree Anandkumar, Minsu Cho, Jaesik Park
The recent progress in implicit 3D representation, i.e., Neural Radiance Fields (NeRFs), has made accurate and photorealistic 3D reconstruction possible in a differentiable manner.
no code implementations • 8 Aug 2022 • Thorsten Kurth, Shashank Subramanian, Peter Harrington, Jaideep Pathak, Morteza Mardani, David Hall, Andrea Miele, Karthik Kashinath, Animashree Anandkumar
Extreme weather amplified by climate change is causing increasingly devastating impacts across the globe.
no code implementations • 16 Jun 2022 • David Hoeller, Nikita Rudin, Christopher Choy, Animashree Anandkumar, Marco Hutter
We propose a learning-based method to reconstruct the local terrain for locomotion with a mobile robot traversing urban environments.
no code implementations • 6 May 2022 • Dani Kiyasseh, Runzhuo Ma, Taseen F. Haque, Jessica Nguyen, Christian Wagner, Animashree Anandkumar, Andrew J. Hung
We believe this is a prerequisite for the provision of surgical feedback and modulation of surgeon performance in pursuit of improved patient outcomes.
6 code implementations • 22 Feb 2022 • Jaideep Pathak, Shashank Subramanian, Peter Harrington, Sanjeev Raja, Ashesh Chattopadhyay, Morteza Mardani, Thorsten Kurth, David Hall, Zongyi Li, Kamyar Azizzadenesheli, Pedram Hassanzadeh, Karthik Kashinath, Animashree Anandkumar
FourCastNet accurately forecasts high-resolution, fast-timescale variables such as the surface wind speed, precipitation, and atmospheric water vapor.
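FourCastNet builds on the Fourier neural operator family, whose core layer mixes a truncated set of Fourier modes with learned weights. A minimal 1-D sketch of such a spectral layer (NumPy; the shapes, mode count, and single-layer setting are illustrative assumptions, not the model's actual AFNO block):

```python
import numpy as np

def spectral_conv_1d(x, weights, modes):
    """One Fourier-layer pass: FFT, mix a truncated set of low
    frequencies with learned complex weights, inverse FFT.

    x       : (channels, n) real signal
    weights : (modes, channels, channels) complex mixing weights
    modes   : number of low-frequency modes kept
    """
    x_hat = np.fft.rfft(x, axis=-1)          # (channels, n//2 + 1)
    out_hat = np.zeros_like(x_hat)
    for k in range(modes):                   # mix channels per kept mode
        out_hat[:, k] = weights[k] @ x_hat[:, k]
    return np.fft.irfft(out_hat, n=x.shape[-1], axis=-1)

# toy usage
rng = np.random.default_rng(0)
x = rng.standard_normal((4, 64))
w = rng.standard_normal((8, 4, 4)) + 1j * rng.standard_normal((8, 4, 4))
print(spectral_conv_1d(x, w, modes=8).shape)   # (4, 64)
```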
no code implementations • 27 Oct 2021 • Anuj Mahajan, Mikayel Samvelyan, Lei Mao, Viktor Makoviychuk, Animesh Garg, Jean Kossaifi, Shimon Whiteson, Yuke Zhu, Animashree Anandkumar
We present an extended abstract for the previously published work TESSERACT [Mahajan et al., 2021], which proposes a novel solution for Reinforcement Learning (RL) in large, factored action spaces using tensor decompositions.
Multi-agent Reinforcement Learning • Reinforcement Learning • +2
1 code implementation • ICCV 2021 • Yoonwoo Jeong, Seokjun Ahn, Christopher Choy, Animashree Anandkumar, Minsu Cho, Jaesik Park
We also propose a new geometric loss function, viz., projected ray distance loss, to incorporate geometric consistency for complex non-linear camera models.
no code implementations • 31 May 2021 • Anuj Mahajan, Mikayel Samvelyan, Lei Mao, Viktor Makoviychuk, Animesh Garg, Jean Kossaifi, Shimon Whiteson, Yuke Zhu, Animashree Anandkumar
Algorithms derived from Tesseract decompose the Q-tensor across agents and utilise low-rank tensor approximations to model agent interactions relevant to the task.
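Concretely, a rank-R CP factorization lets each agent hold a small factor matrix while the joint Q-value is recovered as a sum of rank-one products, avoiding the exponentially large joint tensor. A minimal sketch of that structure (NumPy; the sizes and the verification step are illustrative, not the paper's learning algorithm):

```python
import numpy as np

rng = np.random.default_rng(0)
n_agents, n_actions, rank = 3, 5, 4

# One factor matrix per agent: factors[i][a, r] scores agent i's action a
# under latent component r; this replaces a dense 5**3 joint Q-tensor.
factors = [rng.standard_normal((n_actions, rank)) for _ in range(n_agents)]

def q_joint(actions):
    """Q(a_1,...,a_N) = sum_r prod_i factors[i][a_i, r]  (CP form)."""
    prod = np.ones(rank)
    for i, a in enumerate(actions):
        prod *= factors[i][a]
    return prod.sum()

# Reconstruct the full tensor on this toy size to verify the CP structure.
full = np.einsum('ar,br,cr->abc', *factors)
assert np.isclose(full[1, 2, 3], q_joint((1, 2, 3)))
```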
1 code implementation • 18 May 2021 • Bo Liu, Qiang Liu, Peter Stone, Animesh Garg, Yuke Zhu, Animashree Anandkumar
Specifically, we 1) adopt the attention mechanism for both the coach and the players; 2) propose a variational objective to regularize learning; and 3) design an adaptive communication method to let the coach decide when to communicate with the players.
Multi-agent Reinforcement Learning • Reinforcement Learning • +4
no code implementations • 22 Dec 2020 • Xinlei Pan, Animesh Garg, Animashree Anandkumar, Yuke Zhu
Through experimentation and comparative study, we demonstrate the effectiveness of our approach in discovering robust and cost-efficient hand morphologies for grasping novel objects.
no code implementations • 16 Nov 2020 • Guanya Shi, Yifeng Zhu, Jonathan Tremblay, Stan Birchfield, Fabio Ramos, Animashree Anandkumar, Yuke Zhu
Deep learning-based object pose estimators are often unreliable and overconfident, especially when the input image is outside the training domain, for instance with sim2real transfer.
no code implementations • 5 Nov 2020 • Zhuoran Qiao, Feizhi Ding, Matthew Welborn, Peter J. Bygrave, Daniel G. A. Smith, Animashree Anandkumar, Frederick R. Manby, Thomas F. Miller III
We refine the OrbNet model to accurately predict energy, forces, and other response properties for molecules using a graph neural-network architecture based on features from low-cost approximated quantum operators in the symmetry-adapted atomic orbital basis.
1 code implementation • NeurIPS 2020 • Weili Nie, Zhiding Yu, Lei Mao, Ankit B. Patel, Yuke Zhu, Animashree Anandkumar
Inspired by the original one hundred BPs, we propose a new benchmark, Bongard-LOGO, for human-level concept learning and reasoning.
no code implementations • 21 Sep 2020 • Xingye Da, Zhaoming Xie, David Hoeller, Byron Boots, Animashree Anandkumar, Yuke Zhu, Buck Babich, Animesh Garg
We present a hierarchical framework that combines model-based control and reinforcement learning (RL) to synthesize robust controllers for a quadruped (the Unitree Laikago).
no code implementations • 26 Aug 2020 • Francisco Luongo, Ryan Hakim, Jessica H. Nguyen, Animashree Anandkumar, Andrew J Hung
For both gesture identification and classification datasets, we observed no effect of recurrent classification model choice (LSTM vs. convLSTM) on performance.
no code implementations • 16 Jul 2020 • Eric Zhao, Anqi Liu, Animashree Anandkumar, Yisong Yue
We address the problem of active learning under label shift: when the class proportions of source and target domains differ.
no code implementations • 15 Jul 2020 • Zhuoran Qiao, Matthew Welborn, Animashree Anandkumar, Frederick R. Manby, Thomas F. Miller III
We introduce a machine learning method in which energy solutions from the Schrödinger equation are predicted using symmetry-adapted atomic orbital features and a graph neural network architecture.
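A generic message-passing step over a molecular graph conveys the flavor of such an architecture; the operator-derived orbital features are the paper's contribution and are replaced here by random stand-ins (a loose sketch, not OrbNet itself):

```python
import numpy as np

def message_passing_step(h, adj, w_self, w_neigh):
    """h' = relu(h @ W_self + mean_of_neighbors(h) @ W_neigh).

    h   : (n_atoms, d) node features (stand-ins for operator-derived ones)
    adj : (n_atoms, n_atoms) 0/1 adjacency of the molecular graph
    """
    deg = adj.sum(axis=1, keepdims=True).clip(min=1)
    neigh = (adj @ h) / deg                  # mean over bonded neighbors
    return np.maximum(h @ w_self + neigh @ w_neigh, 0.0)

rng = np.random.default_rng(0)
n, d = 6, 8
h = rng.standard_normal((n, d))
adj = np.zeros((n, n)); adj[0, 1] = adj[1, 0] = adj[1, 2] = adj[2, 1] = 1.0
w1, w2 = rng.standard_normal((d, d)), rng.standard_normal((d, d))
energy = message_passing_step(h, adj, w1, w2).sum()  # sum-pool readout stand-in
print(energy)
```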
1 code implementation • NeurIPS 2020 • Yunzhu Li, Antonio Torralba, Animashree Anandkumar, Dieter Fox, Animesh Garg
We assume access to different configurations and environmental conditions, i.e., data from unknown interventions on the underlying system; thus, we can hope to discover the correct underlying causal graph without explicit interventions.
2 code implementations • NeurIPS 2020 • Jiahao Su, Wonmin Byeon, Jean Kossaifi, Furong Huang, Jan Kautz, Animashree Anandkumar
Learning from spatio-temporal data has numerous applications such as human-behavior analysis, object tracking, video compression, and physics simulation. However, existing methods still perform poorly on challenging video tasks such as long-term forecasting.
Ranked #1 on Video Prediction on KTH (Cond metric)
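The low-rank structure behind this line of work can be illustrated with a tensor-train factorization, which replaces one large tensor with a chain of small cores (a minimal reconstruction sketch in NumPy; the core shapes are arbitrary assumptions):

```python
import numpy as np

def tt_reconstruct(cores):
    """Contract tensor-train cores G_k of shape (r_{k-1}, n_k, r_k)
    (with r_0 = r_K = 1) into the full (n_1, ..., n_K) tensor."""
    out = cores[0]                               # (1, n_1, r_1)
    for core in cores[1:]:
        out = np.tensordot(out, core, axes=([-1], [0]))
    return out.squeeze(axis=(0, -1))

rng = np.random.default_rng(0)
ranks, dims = [1, 3, 3, 1], [4, 5, 6]
cores = [rng.standard_normal((ranks[k], dims[k], ranks[k + 1]))
         for k in range(3)]
full = tt_reconstruct(cores)
print(full.shape)   # (4, 5, 6): 75 core parameters instead of 120 entries
```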
1 code implementation • 10 Dec 2019 • Francesca Baldini, Animashree Anandkumar, Richard M. Murray
In this work, we propose a new learning approach for autonomous navigation and landing of an unmanned aerial vehicle (UAV).
3 code implementations • 5 Nov 2019 • Forough Arabshahi, Zhichu Lu, Pranay Mundra, Sameer Singh, Animashree Anandkumar
We study compositional generalization, viz., the problem of zero-shot generalization to novel compositions of concepts in a domain.
no code implementations • 25 Sep 2019 • Jiahao Su, Wonmin Byeon, Furong Huang, Jan Kautz, Animashree Anandkumar
Long-term video prediction is highly challenging since it entails simultaneously capturing spatial and temporal information across a long range of image frames. Standard recurrent models are ineffective since they are prone to error propagation and cannot effectively capture higher-order correlations.
2 code implementations • ICLR 2019 • Kamyar Azizzadenesheli, Anqi Liu, Fanny Yang, Animashree Anandkumar
We derive a generalization bound for the classifier on the target domain which is independent of the (ambient) data dimensions, and instead only depends on the complexity of the function class.
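The importance weights driving this kind of label-shift correction can be estimated from a classifier's source-domain confusion matrix and its prediction histogram on the target set. A minimal unregularized sketch (NumPy; the paper's added regularization is omitted):

```python
import numpy as np

def label_shift_weights(conf_source, target_pred_hist):
    """Solve C w = mu_t for class importance weights w = q(y)/p(y).

    conf_source      : (k, k) joint confusion matrix,
                       C[i, j] = P_source(predict=i, true=j)
    target_pred_hist : (k,) distribution of predictions on target data
    """
    w = np.linalg.solve(conf_source, target_pred_hist)
    return np.clip(w, 0.0, None)       # weights must be nonnegative

# toy example: class 1 is more frequent in the target domain
C = np.array([[0.45, 0.05],
              [0.05, 0.45]])           # a fairly accurate source classifier
mu_t = np.array([0.40, 0.60])
print(label_shift_weights(C, mu_t))    # [0.75, 1.25]
```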
no code implementations • 31 Jan 2019 • Yang Shi, Animashree Anandkumar
Count sketch is a simple, popular sketching method that uses a randomized hash function to achieve compression.
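For reference, the count-sketch primitive itself (a minimal sketch; the width and seed are arbitrary):

```python
import numpy as np

class CountSketch:
    """Compress a d-dim vector into w buckets using a random bucket
    hash h and a random sign hash s; estimate x[i] as s[i]*sketch[h[i]]."""

    def __init__(self, dim, width, seed=0):
        rng = np.random.default_rng(seed)
        self.h = rng.integers(0, width, size=dim)     # bucket per index
        self.s = rng.choice([-1.0, 1.0], size=dim)    # sign per index
        self.sketch = np.zeros(width)

    def update(self, i, value):
        self.sketch[self.h[i]] += self.s[i] * value

    def query(self, i):
        return self.s[i] * self.sketch[self.h[i]]

cs = CountSketch(dim=10_000, width=512)
cs.update(42, 3.0); cs.update(42, 2.0); cs.update(7, 1.0)
print(cs.query(42))   # ~5.0, up to hash-collision noise
```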
2 code implementations • 19 Nov 2018 • Guanya Shi, Xichen Shi, Michael O'Connell, Rose Yu, Kamyar Azizzadenesheli, Animashree Anandkumar, Yisong Yue, Soon-Jo Chung
To the best of our knowledge, this is the first DNN-based nonlinear feedback controller with stability guarantees that can utilize arbitrarily large neural nets.
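The stability argument rests on bounding each layer's Lipschitz constant, enforced via spectral normalization of the weight matrices. A minimal power-iteration sketch of that constraint for a single matrix (NumPy; the iteration count is an assumption):

```python
import numpy as np

def spectrally_normalize(w, target_norm=1.0, n_iters=20):
    """Rescale w so its largest singular value is at most target_norm,
    estimating that value by power iteration, as in spectral
    normalization of each layer of a Lipschitz-bounded network."""
    rng = np.random.default_rng(0)
    u = rng.standard_normal(w.shape[0])
    for _ in range(n_iters):
        v = w.T @ u; v /= np.linalg.norm(v)
        u = w @ v;   u /= np.linalg.norm(u)
    sigma = u @ w @ v                  # top singular value estimate
    return w * min(1.0, target_norm / sigma)

w = np.random.default_rng(1).standard_normal((64, 32))
w_sn = spectrally_normalize(w)
print(np.linalg.svd(w_sn, compute_uv=False)[0])   # ~<= 1.0
```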
no code implementations • 18 Oct 2018 • Kamyar Azizzadenesheli, Yisong Yue, Animashree Anandkumar
Deploying these tools, we generalize a variety of existing theoretical guarantees, such as policy gradient and convergence theorems, to partially observable domains; these tools could also be carried over to further settings of interest.
3 code implementations • ICLR 2019 • Kamyar Azizzadenesheli, Brandon Yang, Weitang Liu, Zachary C. Lipton, Animashree Anandkumar
We deploy this model and propose generative adversarial tree search (GATS), a deep RL algorithm that learns the environment model and implements Monte Carlo tree search (MCTS) on the learned model for planning.
no code implementations • ECCV 2018 • Yang Shi, Tommaso Furlanello, Sheng Zha, Animashree Anandkumar
Visual Question Answering (VQA) requires integrating feature maps with drastically different structures and focusing on the correct regions.
1 code implementation • ICLR 2018 • Kamyar Azizzadenesheli, Animashree Anandkumar
This allows us to directly incorporate the uncertainty over the Q-function and deploy Thompson sampling on the learned posterior distribution resulting in efficient exploration/exploitation trade-off.
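The mechanism can be sketched as Bayesian linear regression over per-action feature weights, with one posterior draw per action selection (NumPy; the single-step bandit simplification and prior scales are assumptions, not the paper's full deep-RL setup):

```python
import numpy as np

rng = np.random.default_rng(0)
n_actions, d, sigma2 = 3, 8, 1.0

# Gaussian posterior over last-layer weights per action, updated online:
# precision A_a and vector b_a give posterior mean A_a^{-1} b_a.
A = [np.eye(d) for _ in range(n_actions)]     # prior precision = I
b = [np.zeros(d) for _ in range(n_actions)]

def select_action(phi):
    """Thompson sampling: draw w_a from each posterior, act greedily."""
    samples = []
    for a in range(n_actions):
        mean = np.linalg.solve(A[a], b[a])
        w = rng.multivariate_normal(mean, np.linalg.inv(A[a]))
        samples.append(w @ phi)
    return int(np.argmax(samples))

def update(a, phi, target):
    A[a] += np.outer(phi, phi) / sigma2
    b[a] += phi * target / sigma2

phi = rng.standard_normal(d)
a = select_action(phi)
update(a, phi, target=1.0)
```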
1 code implementation • ICLR 2018 • Forough Arabshahi, Sameer Singh, Animashree Anandkumar
This is because they mostly rely either on black-box function evaluations that do not capture the structure of the program, or on detailed execution traces that are expensive to obtain, and hence the training data has poor coverage of the domain under consideration.
2 code implementations • WS 2017 • Yanyao Shen, Hyokun Yun, Zachary C. Lipton, Yakov Kronrod, Animashree Anandkumar
In this work, we demonstrate that the amount of labeled training data can be drastically reduced when deep learning is combined with active learning.
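A generic uncertainty-sampling round illustrates the active-learning side: score the unlabeled pool and query the items the model is least confident about (a sketch; the paper's NER model and acquisition details are not reproduced):

```python
import numpy as np

def least_confidence_query(probs, k):
    """Pick the k pool items whose top predicted probability is lowest.

    probs : (n_pool, n_classes) model probabilities on unlabeled data
    """
    confidence = probs.max(axis=1)
    return np.argsort(confidence)[:k]

# toy round: 5 unlabeled items, 3 classes, query the 2 least confident
probs = np.array([[0.90, 0.05, 0.05],
                  [0.40, 0.35, 0.25],
                  [0.60, 0.30, 0.10],
                  [0.34, 0.33, 0.33],
                  [0.80, 0.10, 0.10]])
print(least_confidence_query(probs, k=2))   # -> items 3 and 1
```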
no code implementations • 7 May 2017 • Kamyar Azizzadenesheli, Alessandro Lazaric, Animashree Anandkumar
We propose a new reinforcement learning algorithm for partially observable Markov decision processes (POMDP) based on spectral decomposition methods.
no code implementations • 11 Nov 2016 • Kamyar Azizzadenesheli, Alessandro Lazaric, Animashree Anandkumar
We derive finite-time regret bounds for our algorithm with a weak dependence on the dimensionality of the observed space.
no code implementations • 20 Sep 2016 • Anthony Gitter, Furong Huang, Ragupathyraj Valluvan, Ernest Fraenkel, Animashree Anandkumar
We use a latent tree graphical model to analyze gene expression without relying on transcription factor expression as a proxy for regulator activity.
no code implementations • 17 Aug 2016 • Kamyar Azizzadenesheli, Alessandro Lazaric, Animashree Anandkumar
Generally in RL, one can assume a generative model, e.g., a graphical model, for the environment; the task for the RL agent is then to learn the model parameters and find the optimal strategy based on these learned parameters.
no code implementations • NeurIPS 2016 • Yining Wang, Animashree Anandkumar
In this paper, we resolve many of the key algorithmic questions regarding robustness, memory efficiency, and differential privacy of tensor decomposition.
no code implementations • 10 Jun 2016 • Furong Huang, Animashree Anandkumar
More importantly, it is challenging for pre-trained models to obtain word-sequence embeddings that are universally good for all downstream tasks or for any new datasets.
no code implementations • 30 May 2016 • Forough Arabshahi, Animashree Anandkumar
NID distributions are generated through the process of normalizing a family of independent Infinitely Divisible (ID) random variables.
no code implementations • 25 Feb 2016 • Kamyar Azizzadenesheli, Alessandro Lazaric, Animashree Anandkumar
We propose a new reinforcement learning algorithm for partially observable Markov decision processes (POMDP) based on spectral decomposition methods.
no code implementations • 4 Feb 2016 • Furong Huang, Animashree Anandkumar, Christian Borgs, Jennifer Chayes, Ernest Fraenkel, Michael Hawrylycz, Ed Lein, Alessandro Ingrosso, Srinivas Turaga
Single-cell RNA sequencing can now be used to measure the gene expression profiles of individual neurons and to categorize neurons based on their gene expression profiles.
no code implementations • 15 Oct 2015 • Animashree Anandkumar, Prateek Jain, Yang Shi, U. N. Niranjan
Robust tensor CP decomposition involves decomposing a tensor into low rank and sparse components.
no code implementations • NeurIPS 2015 • Yining Wang, Hsiao-Yu Tung, Alexander Smola, Animashree Anandkumar
Such tensor contractions are encountered in decomposition methods such as tensor power iterations and alternating least squares.
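The central contraction in tensor power iteration is v ← T(I, v, v), normalized each step. A minimal sketch for a symmetric 3-way tensor (NumPy):

```python
import numpy as np

def tensor_power_iteration(T, n_iters=100, seed=0):
    """Find one robust eigenvector of a symmetric 3-way tensor via
    the contraction v <- T(I, v, v), i.e. sum_jk T[:, j, k] v_j v_k."""
    rng = np.random.default_rng(seed)
    v = rng.standard_normal(T.shape[0])
    v /= np.linalg.norm(v)
    for _ in range(n_iters):
        v = np.einsum('ijk,j,k->i', T, v, v)
        v /= np.linalg.norm(v)
    lam = np.einsum('ijk,i,j,k->', T, v, v, v)   # eigenvalue estimate
    return lam, v

# toy rank-1 symmetric tensor: T = 2 * u (x) u (x) u
u = np.array([0.6, 0.8, 0.0])
T = 2.0 * np.einsum('i,j,k->ijk', u, u, u)
lam, v = tensor_power_iteration(T)
print(round(lam, 3), np.round(v, 3))             # ~2.0, ~u
```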
no code implementations • 10 Jun 2015 • Furong Huang, Animashree Anandkumar
Tensor methods have emerged as a powerful paradigm for consistent learning of many latent variable models such as topic models, independent component analysis and dictionary learning.
no code implementations • NeurIPS 2014 • Praneeth Netrapalli, U. N. Niranjan, Sujay Sanghavi, Animashree Anandkumar, Prateek Jain
In contrast, existing methods for robust PCA, which are based on convex optimization, have $O(m^2n)$ complexity per iteration, and take $O(1/\epsilon)$ iterations, i.e., exponentially more iterations for the same accuracy.
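The non-convex alternative alternates a rank-r SVD projection for the low-rank part with entrywise hard thresholding for the sparse part. A minimal fixed-threshold sketch (NumPy; the paper's stagewise rank and threshold schedule is replaced by fixed values):

```python
import numpy as np

def robust_pca_altproj(M, rank, thresh, n_iters=30):
    """Split M ~= L + S by alternating a rank-r SVD projection for L
    with entrywise hard thresholding of the residual for S."""
    S = np.zeros_like(M)
    for _ in range(n_iters):
        U, d, Vt = np.linalg.svd(M - S, full_matrices=False)
        L = (U[:, :rank] * d[:rank]) @ Vt[:rank]
        R = M - L
        S = np.where(np.abs(R) > thresh, R, 0.0)   # keep large residuals
    return L, S

rng = np.random.default_rng(0)
L_true = np.outer(rng.standard_normal(50), rng.standard_normal(40))
S_true = np.zeros((50, 40)); S_true[rng.random((50, 40)) < 0.05] = 5.0
L, S = robust_pca_altproj(L_true + S_true, rank=1, thresh=2.0)
print(np.linalg.norm(L - L_true) / np.linalg.norm(L_true))  # recovery error
```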
no code implementations • 3 Aug 2014 • Animashree Anandkumar, Rong Ge, Majid Janzamin
In the unsupervised setting, we use a simple initialization algorithm based on SVD of the tensor slices, and provide guarantees under the stricter condition that $k\le \beta d$ (where the constant $\beta$ can be larger than $1$); under this condition, the tensor method recovers the components in polynomial running time (exponential in $\beta$).
no code implementations • 21 Feb 2014 • Animashree Anandkumar, Rong Ge, Majid Janzamin
In this paper, we provide local and global convergence guarantees for recovering CP (Candecomp/Parafac) tensor decomposition.
no code implementations • 13 Nov 2013 • Le Song, Animashree Anandkumar, Bo Dai, Bo Xie
We establish that the sample complexity for the proposed method is quadratic in the number of latent components and is a low order polynomial in the other relevant parameters.
no code implementations • 30 Oct 2013 • Alekh Agarwal, Animashree Anandkumar, Prateek Jain, Praneeth Netrapalli
Alternating minimization is a popular heuristic for sparse coding, where the dictionary and the coefficients are estimated in alternate steps, keeping the other fixed.
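One round of the heuristic can be sketched as a hard-thresholded least-squares coefficient step followed by a least-squares dictionary step with column renormalization (a simplified sketch; the paper analyzes particular variants of each step):

```python
import numpy as np

def alt_min_sparse_coding(Y, n_atoms, s, n_iters=20, seed=0):
    """Alternate coefficient and dictionary updates for Y ~= D X.

    Y : (d, n) data; s : sparsity per sample (keep s largest entries).
    """
    rng = np.random.default_rng(seed)
    D = rng.standard_normal((Y.shape[0], n_atoms))
    D /= np.linalg.norm(D, axis=0)
    for _ in range(n_iters):
        # coefficient step: least squares, then keep s largest per column
        X = np.linalg.lstsq(D, Y, rcond=None)[0]
        cutoff = np.sort(np.abs(X), axis=0)[-s]
        X[np.abs(X) < cutoff] = 0.0
        # dictionary step: least squares for D, renormalize columns
        D = np.linalg.lstsq(X.T, Y.T, rcond=None)[0].T
        D /= np.linalg.norm(D, axis=0).clip(min=1e-12)
    return D, X

Y = np.random.default_rng(1).standard_normal((16, 200))
D, X = alt_min_sparse_coding(Y, n_atoms=32, s=4)
print(np.linalg.norm(Y - D @ X) / np.linalg.norm(Y))   # residual ratio
```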
no code implementations • 8 Sep 2013 • Alekh Agarwal, Animashree Anandkumar, Praneeth Netrapalli
We consider the problem of learning overcomplete dictionaries in the context of sparse coding, where each sample selects a sparse subset of dictionary elements.
1 code implementation • 3 Sep 2013 • Furong Huang, U. N. Niranjan, Mohammad Umar Hakeem, Animashree Anandkumar
We introduce an online tensor decomposition based approach for two latent variable modeling problems, namely (1) community detection, in which we learn the latent communities that the social actors in social networks belong to, and (2) topic modeling, in which we infer hidden topics of text articles.
no code implementations • NeurIPS 2013 • Animashree Anandkumar, Daniel Hsu, Majid Janzamin, Sham Kakade
This set of higher-order expansion conditions allows for overcomplete models and requires the existence of a perfect matching from latent topics to higher-order observed words.
no code implementations • 5 Nov 2012 • Majid Janzamin, Animashree Anandkumar
Fitting high-dimensional data involves a delicate tradeoff between faithful representation and the use of sparse models.
no code implementations • 24 Sep 2012 • Animashree Anandkumar, Daniel Hsu, Adel Javanmard, Sham M. Kakade
The sufficient conditions for identifiability of these models are primarily based on weak expansion constraints on the topic-word matrix, for topic models, and on the directed acyclic graph, for Bayesian networks.
no code implementations • 17 Mar 2012 • Animashree Anandkumar, Ragupathyraj Valluvan
The problem of structure estimation in graphical models with latent variables is considered.
1 code implementation • 3 Mar 2012 • Animashree Anandkumar, Daniel Hsu, Sham M. Kakade
Mixture models are a fundamental tool in applied statistics and machine learning for treating data taken from multiple subpopulations.
no code implementations • NeurIPS 2011 • Animashree Anandkumar, Vincent Tan, Alan S. Willsky
We consider the problem of Ising and Gaussian graphical model selection given n i.i.d. samples.
no code implementations • NeurIPS 2011 • Animashree Anandkumar, Kamalika Chaudhuri, Daniel J. Hsu, Sham M. Kakade, Le Song, Tong Zhang
The setting is one where we only have samples from certain observed variables in the tree, and our goal is to estimate the tree structure (i.e., the graph of how the underlying hidden variables are connected to each other and to the observed variables).