Search Results for author: Samarth Sinha

Found 19 papers, 10 papers with code

Characterizing Generalization under Out-Of-Distribution Shifts in Deep Metric Learning

2 code implementations NeurIPS 2021 Timo Milbich, Karsten Roth, Samarth Sinha, Ludwig Schmidt, Marzyeh Ghassemi, Björn Ommer

Finally, we propose few-shot DML as an efficient way to consistently improve generalization in response to unknown test shifts presented in ooDML.

Metric Learning

S4RL: Surprisingly Simple Self-Supervision for Offline Reinforcement Learning

no code implementations 10 Mar 2021 Samarth Sinha, Ajay Mandlekar, Animesh Garg

Offline reinforcement learning proposes to learn policies from large collected datasets without interacting with the physical environment.

Autonomous Driving · Data Augmentation +4

Learning by Watching: Physical Imitation of Manipulation Skills from Human Videos

no code implementations 18 Jan 2021 Haoyu Xiong, Quanzhou Li, Yun-Chun Chen, Homanga Bharadhwaj, Samarth Sinha, Animesh Garg

Learning from visual data opens the potential to accrue a large range of manipulation behaviors by leveraging human demonstrations without specifying each of them mathematically, but rather through natural task specification.

Keypoint Detection · Translation

Offline Policy Optimization with Variance Regularization

no code implementations 1 Jan 2021 Riashat Islam, Samarth Sinha, Homanga Bharadhwaj, Samin Yeasar Arnob, Zhuoran Yang, Zhaoran Wang, Animesh Garg, Lihong Li, Doina Precup

Learning policies from fixed offline datasets is a key challenge to scale up reinforcement learning (RL) algorithms towards practical applications.

Continuous Control · Offline RL +1

StackMix: A complementary Mix algorithm

no code implementations 25 Nov 2020 John Chen, Samarth Sinha, Anastasios Kyrillidis

On its own, improvements with StackMix hold across different numbers of labeled samples on CIFAR-100, maintaining an approximately 2% gap in test accuracy down to using only 5% of the whole dataset, and StackMix is effective in the semi-supervised setting, yielding a 2% improvement with the standard benchmark Π-model.

Contrastive Learning · Data Augmentation +1

D2RL: Deep Dense Architectures in Reinforcement Learning

4 code implementations 19 Oct 2020 Samarth Sinha, Homanga Bharadhwaj, Aravind Srinivas, Animesh Garg

While improvements in deep learning architectures have played a crucial role in improving the state of supervised and unsupervised learning in computer vision and natural language processing, neural network architecture choices for reinforcement learning remain relatively under-explored.

reinforcement-learning
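The title's "deep dense architectures" refers to adding dense (skip) connections to the standard RL MLP; as commonly described, the raw input is concatenated to every hidden layer's activations. A minimal NumPy forward-pass sketch under that assumption (layer sizes and function names are illustrative, not from the paper's code):

```python
import numpy as np

def d2rl_mlp_forward(x, weights, biases):
    """Forward pass of a dense-connection MLP: the raw input x is
    concatenated to every hidden activation, so each layer sees both
    learned features and the original input."""
    h = x
    for i, (W, b) in enumerate(zip(weights, biases)):
        z = h @ W + b
        if i < len(weights) - 1:              # hidden layer: ReLU + dense concat
            h = np.concatenate([np.maximum(0.0, z), x], axis=-1)
        else:                                 # output layer: linear
            h = z
    return h

# Illustrative sizes: input dim 4, hidden width 8, two hidden layers
rng = np.random.default_rng(0)
state_dim, hidden = 4, 8
W1, b1 = rng.normal(size=(state_dim, hidden)), np.zeros(hidden)
W2, b2 = rng.normal(size=(hidden + state_dim, hidden)), np.zeros(hidden)
W3, b3 = rng.normal(size=(hidden + state_dim, 1)), np.zeros(1)
out = d2rl_mlp_forward(rng.normal(size=(state_dim,)), [W1, W2, W3], [b1, b2, b3])
```

Note that every hidden weight matrix after the first takes `hidden + state_dim` inputs, since the dense connection widens each layer's input by the original input dimension.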

Uniform Priors for Data-Efficient Transfer

no code implementations 30 Jun 2020 Samarth Sinha, Karsten Roth, Anirudh Goyal, Marzyeh Ghassemi, Hugo Larochelle, Animesh Garg

Deep Neural Networks have shown great promise on a variety of downstream applications; but their ability to adapt and generalize to new data and tasks remains a challenge.

Domain Adaptation · Meta-Learning +1

Experience Replay with Likelihood-free Importance Weights

1 code implementation 23 Jun 2020 Samarth Sinha, Jiaming Song, Animesh Garg, Stefano Ermon

The use of past experiences to accelerate temporal difference (TD) learning of value functions, or experience replay, is a key component in deep reinforcement learning.

OpenAI Gym · reinforcement-learning
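A minimal sketch of importance-weighted sampling from a replay buffer, with the per-transition weights taken as given; estimating those weights likelihood-free is the paper's contribution and is not implemented here (all names and the correction form are illustrative):

```python
import numpy as np

def weighted_replay_sample(buffer, weights, batch_size, rng):
    """Sample transitions with probability proportional to an importance
    weight, returning the inverse-propensity correction that keeps the
    TD update unbiased relative to uniform replay."""
    p = np.array(weights, dtype=float)
    p /= p.sum()
    idx = rng.choice(len(buffer), size=batch_size, p=p)
    correction = (1.0 / len(buffer)) / p[idx]     # uniform prob / sampling prob
    return [buffer[i] for i in idx], correction

rng = np.random.default_rng(0)
buffer = [("s%d" % i, "a", 0.0, "s%d'" % i) for i in range(10)]  # fake transitions
weights = np.arange(1.0, 11.0)                    # illustrative importance weights
batch, corr = weighted_replay_sample(buffer, weights, 4, rng)
```

Transitions with larger weights are replayed more often, and the returned correction factor rescales each sampled transition's TD loss.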

Diversity inducing Information Bottleneck in Model Ensembles

1 code implementation 10 Mar 2020 Samarth Sinha, Homanga Bharadhwaj, Anirudh Goyal, Hugo Larochelle, Animesh Garg, Florian Shkurti

Although deep learning models have achieved state-of-the-art performance on a number of vision tasks, generalization over high dimensional multi-modal data, and reliable predictive uncertainty estimation are still active areas of research.

Out-of-Distribution Detection

Curriculum By Smoothing

1 code implementation NeurIPS 2020 Samarth Sinha, Animesh Garg, Hugo Larochelle

We propose to augment the training of CNNs by controlling the amount of high-frequency information propagated within the network as training progresses, by convolving each layer's feature-map output with a Gaussian kernel.

Image Classification · Transfer Learning
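A minimal sketch of the described smoothing, assuming a simple linear decay of the Gaussian width over training (the paper's exact annealing schedule may differ; names are illustrative):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def smoothed_feature_map(fmap, step, total_steps, sigma_start=1.0):
    """Blur a (C, H, W) feature map with a Gaussian kernel whose width
    decays over training, gradually admitting high-frequency information.
    Linear decay here is illustrative, not necessarily the paper's schedule."""
    sigma = sigma_start * max(0.0, 1.0 - step / total_steps)
    if sigma == 0.0:
        return fmap                            # curriculum finished: no smoothing
    return gaussian_filter(fmap, sigma=(0, sigma, sigma))  # spatial dims only

fmap = np.random.default_rng(0).normal(size=(8, 16, 16))       # (C, H, W)
early = smoothed_feature_map(fmap, step=0, total_steps=100)    # heavy blur
late = smoothed_feature_map(fmap, step=100, total_steps=100)   # identity
```

Early in training the feature maps are strongly low-pass filtered; by the end of the schedule the smoothing vanishes and the network sees unfiltered features.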

Revisiting Training Strategies and Generalization Performance in Deep Metric Learning

8 code implementations ICML 2020 Karsten Roth, Timo Milbich, Samarth Sinha, Prateek Gupta, Björn Ommer, Joseph Paul Cohen

Deep Metric Learning (DML) is arguably one of the most influential lines of research for learning visual similarities with many proposed approaches every year.

Metric Learning

Top-k Training of GANs: Improving GAN Performance by Throwing Away Bad Samples

2 code implementations NeurIPS 2020 Samarth Sinha, Zhengli Zhao, Anirudh Goyal, Colin Raffel, Augustus Odena

We introduce a simple (one line of code) modification to the Generative Adversarial Network (GAN) training algorithm that materially improves results with no increase in computational cost: when updating the generator parameters, we simply zero out the gradient contributions from the elements of the batch that the critic scores as 'least realistic'.
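The described one-line modification amounts to a mask over the batch: keep the generator-loss contributions of the k samples the critic scores highest and zero out the rest. A NumPy sketch with illustrative names and loss (not the authors' code):

```python
import numpy as np

def topk_generator_mask(critic_scores, k):
    """Keep only the k most 'realistic' samples (highest critic scores);
    generator-gradient contributions from the rest are zeroed."""
    idx = np.argsort(critic_scores)[-k:]      # indices of the top-k scores
    mask = np.zeros_like(critic_scores)
    mask[idx] = 1.0
    return mask

# Example: batch of 6 critic scores, keep the top 4
scores = np.array([0.9, -1.2, 0.3, 2.1, -0.5, 1.4])
mask = topk_generator_mask(scores, k=4)
per_sample_loss = -scores                     # illustrative generator loss
masked_loss = (mask * per_sample_loss).sum() / mask.sum()
```

Multiplying per-sample losses by the mask before reduction is what zeroes the gradient contributions of the lowest-scored samples.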

Small-GAN: Speeding Up GAN Training Using Core-sets

no code implementations ICML 2020 Samarth Sinha, Han Zhang, Anirudh Goyal, Yoshua Bengio, Hugo Larochelle, Augustus Odena

Recent work by Brock et al. (2018) suggests that Generative Adversarial Networks (GANs) benefit disproportionately from large mini-batch sizes.

Active Learning · Anomaly Detection +1
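Core-set selection is one way to make a small batch mimic a large one; a greedy k-center construction is a standard choice, sketched below (illustrative, not necessarily the paper's exact procedure):

```python
import numpy as np

def greedy_k_center(points, k, rng):
    """Greedy k-center core-set: seed with a random point, then repeatedly
    add the point farthest from the current selection, so a small subset
    covers the large batch."""
    selected = [int(rng.integers(len(points)))]
    d = np.linalg.norm(points - points[selected[0]], axis=1)
    for _ in range(k - 1):
        nxt = int(np.argmax(d))                  # farthest remaining point
        selected.append(nxt)
        d = np.minimum(d, np.linalg.norm(points - points[nxt], axis=1))
    return np.array(selected)

rng = np.random.default_rng(0)
big_batch = rng.normal(size=(256, 16))           # "large" sampled batch
core = greedy_k_center(big_batch, k=32, rng=rng)
```

Maintaining the running minimum distance `d` keeps each iteration O(n), so selecting k points from n costs O(nk) overall.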

Variational Adversarial Active Learning

6 code implementations ICCV 2019 Samarth Sinha, Sayna Ebrahimi, Trevor Darrell

Unlike conventional active learning algorithms, our approach is task-agnostic, i.e., it does not depend on the performance of the task for which we are trying to acquire labeled data.

Active Learning · Image Classification +1

Cross-Linked Variational Autoencoders for Generalized Zero-Shot Learning

no code implementations ICLR Workshop LLD 2019 Edgar Schönfeld, Sayna Ebrahimi, Samarth Sinha, Trevor Darrell, Zeynep Akata

While following the same direction, we also take artificial feature generation one step further and propose a model where a shared latent space of image features and class embeddings is learned by aligned variational autoencoders, for the purpose of generating latent features to train a softmax classifier.

Few-Shot Learning · Generalized Zero-Shot Learning

Generalized Zero- and Few-Shot Learning via Aligned Variational Autoencoders

2 code implementations 5 Dec 2018 Edgar Schönfeld, Sayna Ebrahimi, Samarth Sinha, Trevor Darrell, Zeynep Akata

Many approaches in generalized zero-shot learning rely on cross-modal mapping between the image feature space and the class embedding space.

Few-Shot Learning · Generalized Zero-Shot Learning
