Search Results for author: Advait Rane

Found 3 papers, 1 paper with code

Pretrained Encoders are All You Need

1 code implementation, ICML Workshop 2021: Mina Khan, P Srivatsa, Advait Rane, Shriram Chenniappa, Rishabh Anand, Sherjil Ozair, Pattie Maes

Data-efficiency and generalization are key challenges in deep learning and deep reinforcement learning as many models are trained on large-scale, domain-specific, and expensive-to-label datasets.

Contrastive Learning reinforcement-learning +1

Personalizing Pre-trained Models

no code implementations, 2 Jun 2021: Mina Khan, P Srivatsa, Advait Rane, Shriram Chenniappa, Asadali Hazariwala, Pattie Maes

Self-supervised or weakly supervised models trained on large-scale datasets have shown sample-efficient transfer to diverse datasets in few-shot settings.

Continual Learning Few-Shot Learning +2

Quantifying Synchronization in a Biologically Inspired Neural Network

no code implementations, 11 Dec 2020: Pranav Mahajan, Advait Rane, Swapna Sasi, Basabdatta Sen Bhattacharya

Our motivation for SyncBox is to understand the underlying dynamics in an existing population neural network, commonly referred to as a neural mass model, that mimics Local Field Potentials of the visual thalamic tissue.

EEG Time Series
