Search Results for author: P Srivatsa

Found 2 papers, 1 paper with code

Pretrained Encoders are All You Need

1 code implementation • ICML Workshop URL 2021 • Mina Khan, P Srivatsa, Advait Rane, Shriram Chenniappa, Rishabh Anand, Sherjil Ozair, Pattie Maes

Data-efficiency and generalization are key challenges in deep learning and deep reinforcement learning as many models are trained on large-scale, domain-specific, and expensive-to-label datasets.

Contrastive Learning • Reinforcement Learning +2
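
The entry above pairs a pretrained encoder with reinforcement learning. As a rough illustration of that general idea (not the paper's actual method), the sketch below feeds observations through a frozen, pretrained image encoder and trains only a small policy head on the resulting features; the encoder choice, feature size, and action space are assumptions.

```python
# Illustrative sketch: a frozen pretrained encoder as a state featurizer for RL.
# The ResNet-18 backbone, head sizes, and action count are assumptions, not the paper's setup.
import torch
import torch.nn as nn
import torchvision

# Load a pretrained image encoder and freeze it.
encoder = torchvision.models.resnet18(weights=torchvision.models.ResNet18_Weights.DEFAULT)
encoder.fc = nn.Identity()           # expose the 512-d penultimate features
encoder.eval()
for p in encoder.parameters():
    p.requires_grad = False

# Small trainable policy head on top of the frozen features.
num_actions = 4                      # hypothetical action space
policy_head = nn.Sequential(
    nn.Linear(512, 256),
    nn.ReLU(),
    nn.Linear(256, num_actions),
)

def act(frames: torch.Tensor) -> torch.Tensor:
    """frames: (B, 3, 224, 224) observation batch -> action logits."""
    with torch.no_grad():
        features = encoder(frames)   # encoder stays frozen
    return policy_head(features)     # only the head receives gradients

logits = act(torch.randn(2, 3, 224, 224))
print(logits.shape)                  # torch.Size([2, 4])
```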

Personalizing Pre-trained Models

no code implementations • 2 Jun 2021 • Mina Khan, P Srivatsa, Advait Rane, Shriram Chenniappa, Asadali Hazariwala, Pattie Maes

Self-supervised or weakly supervised models trained on large-scale datasets have shown sample-efficient transfer to diverse datasets in few-shot settings.

Continual Learning • Few-Shot Learning +2
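
The entry above concerns few-shot transfer from a pretrained model. As a minimal sketch of that general setup (not the paper's protocol), the code below personalizes a frozen pretrained backbone by training only a lightweight linear head on a handful of labeled examples; the backbone, class count, support set, and optimizer settings are all illustrative assumptions.

```python
# Illustrative sketch: few-shot personalization via a trainable linear head
# on top of a frozen pretrained backbone. All specifics are assumptions.
import torch
import torch.nn as nn
import torchvision

backbone = torchvision.models.resnet18(weights=torchvision.models.ResNet18_Weights.DEFAULT)
backbone.fc = nn.Identity()          # use the 512-d features
backbone.eval()
for p in backbone.parameters():
    p.requires_grad = False

num_personal_classes = 5             # hypothetical user-specific classes
head = nn.Linear(512, num_personal_classes)
optimizer = torch.optim.Adam(head.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# A tiny "few-shot" support set: 5 classes x 3 examples (random stand-ins here).
support_x = torch.randn(15, 3, 224, 224)
support_y = torch.arange(5).repeat_interleave(3)

for step in range(20):               # a few steps usually suffice for a linear head
    with torch.no_grad():
        feats = backbone(support_x)  # frozen features
    loss = loss_fn(head(feats), support_y)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```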
