Search Results for author: Ragav Venkatesan

Found 9 papers, 5 papers with code

Evaluating the Effectiveness of Efficient Neural Architecture Search for Sentence-Pair Tasks

no code implementations EMNLP (insights) 2020 Ansel MacLaughlin, Jwala Dhamala, Anoop Kumar, Sriram Venkatapathy, Ragav Venkatesan, Rahul Gupta

Neural Architecture Search (NAS) methods, which automatically learn entire neural models or individual neural cell architectures, have recently achieved competitive or state-of-the-art (SOTA) performance on a variety of natural language processing and computer vision tasks, including language modeling, natural language inference, and image classification.

Image Classification Language Modelling +7

Out-of-the-box channel pruned networks

no code implementations30 Apr 2020 Ragav Venkatesan, Gurumurthy Swaminathan, Xiong Zhou, Anna Luo

We then demonstrate that if we find the profiles using a mid-sized dataset such as CIFAR-10/100, we are able to transfer them even to a large dataset such as ImageNet.

Reinforcement Learning (RL)
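As a rough illustration of the profile-transfer idea in the abstract above, the sketch below instantiates a CNN whose per-layer channel widths follow a pruning profile (the fraction of channels kept in each layer) and reuses the same profile for a larger dataset. The base widths, keep-ratios, and the `build_pruned_cnn` helper are hypothetical placeholders, not the paper's code.

```python
# Minimal sketch (not the paper's implementation): build a CNN whose per-layer
# channel widths are scaled by a pruning "profile" of keep-ratios, and reuse
# that profile out of the box for a larger dataset. All numbers are placeholders.
import torch
import torch.nn as nn

BASE_WIDTHS = [64, 128, 256, 512]   # unpruned channel counts (assumed)
PROFILE = [1.0, 0.75, 0.5, 0.5]     # keep-ratios, e.g. found on CIFAR-10/100

def build_pruned_cnn(num_classes: int, widths=BASE_WIDTHS, profile=PROFILE) -> nn.Module:
    """Plain conv stack with each layer's width scaled by its keep-ratio."""
    layers, in_ch = [], 3
    for base, keep in zip(widths, profile):
        out_ch = max(1, int(round(base * keep)))
        layers += [nn.Conv2d(in_ch, out_ch, 3, padding=1),
                   nn.BatchNorm2d(out_ch),
                   nn.ReLU(inplace=True),
                   nn.MaxPool2d(2)]
        in_ch = out_ch
    layers += [nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(in_ch, num_classes)]
    return nn.Sequential(*layers)

# The same profile is reused for a larger target dataset:
cifar_model = build_pruned_cnn(num_classes=100)      # e.g. CIFAR-100
imagenet_model = build_pruned_cnn(num_classes=1000)  # e.g. ImageNet
print(imagenet_model(torch.randn(1, 3, 224, 224)).shape)  # torch.Size([1, 1000])
```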

$d$-SNE: Domain Adaptation using Stochastic Neighborhood Embedding

2 code implementations29 May 2019 Xiang Xu, Xiong Zhou, Ragav Venkatesan, Gurumurthy Swaminathan, Orchid Majumder

Deep neural networks often require copious amounts of labeled data to train their scads of parameters.

Domain Adaptation

A Strategy for an Uncompromising Incremental Learner

1 code implementation2 May 2017 Ragav Venkatesan, Hemanth Venkateswara, Sethuraman Panchanathan, Baoxin Li

Using an implementation based on deep neural networks, we demonstrate that phantom sampling largely avoids catastrophic forgetting.

Class Incremental Learning Incremental Learning +1
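To make the abstract's claim more concrete, here is a heavily hedged sketch of the general rehearsal idea: the frozen previous model supplies soft targets on sampled "phantom" inputs while the new model learns the new classes. How the paper actually draws its phantom samples is not reproduced here; `sample_phantom_inputs` and the loss weighting are stand-in assumptions, not the authors' procedure.

```python
# Hedged sketch of rehearsal-style incremental learning: the frozen old model
# labels "phantom" inputs so the new model is also trained not to forget old
# classes. The sampling procedure below is a placeholder, not the paper's method.
import torch
import torch.nn.functional as F

def sample_phantom_inputs(batch_size: int, shape=(3, 32, 32)) -> torch.Tensor:
    # Placeholder: a real procedure would draw inputs representative of old tasks.
    return torch.rand(batch_size, *shape)

def incremental_step(new_model, old_model, new_x, new_y, optimizer, temperature=2.0):
    """One update mixing a supervised loss on new-class data with a
    distillation loss on phantom samples scored by the frozen old model."""
    old_model.eval()
    phantom_x = sample_phantom_inputs(new_x.size(0))
    with torch.no_grad():
        old_logits = old_model(phantom_x)

    new_logits_on_new = new_model(new_x)
    # Only compare the logits corresponding to the old classes.
    new_logits_on_phantom = new_model(phantom_x)[:, :old_logits.size(1)]

    ce = F.cross_entropy(new_logits_on_new, new_y)
    kd = F.kl_div(F.log_softmax(new_logits_on_phantom / temperature, dim=1),
                  F.softmax(old_logits / temperature, dim=1),
                  reduction="batchmean") * temperature ** 2

    loss = ce + kd
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```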

Neural Dataset Generality

1 code implementation14 May 2016 Ragav Venkatesan, Vijetha Gattupalli, Baoxin Li

It is curious that, while the filters learned by these CNNs are related to the atomic structures of the images from which they are learned, all datasets learn similar-looking low-level filters.

Transfer Learning
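One simple way to probe the claim that different datasets yield similar-looking low-level filters is to compare the first-layer convolution kernels of two trained networks by best-match cosine similarity. The sketch below is only an illustration of such a probe, not the paper's methodology; the two models are placeholders for networks trained on different datasets, and it assumes their first-layer kernels have matching shapes.

```python
# Probe: compare two first-layer convolution kernels by best-match cosine
# similarity. Models are placeholders; substitute networks trained on
# different datasets whose first conv layers have the same kernel shape.
import torch
import torch.nn.functional as F

def first_conv_weight(model: torch.nn.Module) -> torch.Tensor:
    """Return the weight tensor of the first Conv2d module found in the model."""
    for m in model.modules():
        if isinstance(m, torch.nn.Conv2d):
            return m.weight.detach()
    raise ValueError("no Conv2d layer found")

def filter_similarity(w_a: torch.Tensor, w_b: torch.Tensor) -> float:
    """Average over filters in A of the best cosine match among filters in B."""
    a = F.normalize(w_a.flatten(1), dim=1)  # (num_filters_a, in*k*k)
    b = F.normalize(w_b.flatten(1), dim=1)  # (num_filters_b, in*k*k)
    sims = a @ b.t()                        # pairwise cosine similarities
    return sims.max(dim=1).values.mean().item()

# Usage (hypothetical models, e.g. one trained on CIFAR-10 and one on SVHN):
# score = filter_similarity(first_conv_weight(model_a), first_conv_weight(model_b))
# print(f"mean best-match cosine similarity: {score:.3f}")
```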

Diving deeper into mentee networks

1 code implementation27 Apr 2016 Ragav Venkatesan, Baoxin Li

We studied various characteristics of such mentee networks and found several interesting behaviors.

Simpler non-parametric methods provide as good or better results to multiple-instance learning.

1 code implementation IEEE International Conference on Computer Vision 2015 Ragav Venkatesan, Parag Chandakkar, Baoxin Li

Multiple-instance learning (MIL) is a unique learning problem in which training data labels are available only for collections of objects (called bags) instead of individual objects (called instances).

Multiple Instance Learning
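To illustrate the MIL data layout described in the abstract (labels attached to bags, not to individual instances), here is a minimal sketch of one simple non-parametric baseline: pool each bag's instance features into a fixed-length vector and classify bags with k-nearest neighbours. Whether this pooling-plus-kNN baseline matches the paper's exact method is an assumption; the data and feature dimensions are synthetic placeholders.

```python
# Illustration of the MIL data layout and a simple non-parametric bag classifier:
# pool instance features per bag, then classify bags with k-nearest neighbours.
# Synthetic data; the specific baseline is an assumption, not the paper's exact method.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)

# A bag is a variable-length set of instance feature vectors; the label
# belongs to the bag as a whole, not to any single instance.
train_bags = [rng.normal(size=(rng.integers(3, 10), 16)) for _ in range(40)]
train_labels = rng.integers(0, 2, size=40)
test_bags = [rng.normal(size=(rng.integers(3, 10), 16)) for _ in range(10)]

def bag_embedding(bag: np.ndarray) -> np.ndarray:
    # Pool instance features into one fixed-length vector per bag (mean + max).
    return np.concatenate([bag.mean(axis=0), bag.max(axis=0)])

X_train = np.stack([bag_embedding(b) for b in train_bags])
X_test = np.stack([bag_embedding(b) for b in test_bags])

knn = KNeighborsClassifier(n_neighbors=3).fit(X_train, train_labels)
print(knn.predict(X_test))
```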

Simpler Non-Parametric Methods Provide as Good or Better Results to Multiple-Instance Learning

no code implementations ICCV 2015 Ragav Venkatesan, Parag Chandakkar, Baoxin Li

Multiple-instance learning (MIL) is a unique learning problem in which training data labels are available only for collections of objects (called bags) instead of individual objects (called instances).

Multiple Instance Learning
