Search Results for author: Sarath Chandar

Found 47 papers, 26 papers with code

Learning to Navigate in Synthetically Accessible Chemical Space Using Reinforcement Learning

1 code implementation ICML 2020 Sai Krishna Gottipati, Boris Sattarov, Sufeng Niu, Hao-Ran Wei, Yashaswi Pathak, Shengchao Liu, Simon Blackburn, Karam Thomas, Connor Coley, Jian Tang, Sarath Chandar, Yoshua Bengio

In this work, we propose a novel reinforcement learning (RL) setup for drug discovery that addresses this challenge by embedding the concept of synthetic accessibility directly into the de novo compound design system.

Drug Discovery reinforcement-learning

Do Encoder Representations of Generative Dialogue Models have sufficient summary of the Information about the task?

1 code implementation SIGDIAL (ACL) 2021 Prasanna Parthasarathi, Joelle Pineau, Sarath Chandar

In data-driven approaches, predicting the next utterance in a dialogue is contingent on encoding the user's input text to generate an appropriate and relevant response.

Text Generation

Towards Evaluating Adaptivity of Model-Based Reinforcement Learning Methods

1 code implementation 25 Apr 2022 Yi Wan, Ali Rahimi-Kalahroudi, Janarthanan Rajendran, Ida Momennejad, Sarath Chandar, Harm van Seijen

We empirically validate these insights in the case of linear function approximation by demonstrating that a modified version of linear Dyna achieves effective adaptation to local changes.
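
Since the excerpt hinges on linear Dyna, here is a minimal sketch of the vanilla algorithm (a learned linear model in feature space plus planning updates on simulated transitions); the paper's modified variant differs, and all dimensions and step sizes below are illustrative only.

```python
import numpy as np

# Minimal sketch of vanilla linear Dyna: learn a linear model of the
# environment in feature space, then reuse simulated transitions from
# that model for extra value updates ("planning").

d = 8                      # feature dimension (illustrative)
alpha, gamma = 0.1, 0.95   # step size and discount (illustrative)

theta = np.zeros(d)        # value weights: V(s) ~ theta @ phi(s)
F = np.zeros((d, d))       # transition model: phi(s') ~ F @ phi(s)
b = np.zeros(d)            # reward model: r ~ b @ phi(s)

def learn_step(phi, r, phi_next):
    """Process one real transition: update model and value weights."""
    global theta, F, b
    F += alpha * np.outer(phi_next - F @ phi, phi)    # transition model
    b += alpha * (r - b @ phi) * phi                  # reward model
    td = r + gamma * theta @ phi_next - theta @ phi   # TD(0) on real data
    theta += alpha * td * phi

def plan_step(phi):
    """One planning update from the learned model (the Dyna part)."""
    global theta
    r_hat, phi_next_hat = b @ phi, F @ phi            # simulated transition
    td = r_hat + gamma * theta @ phi_next_hat - theta @ phi
    theta += alpha * td * phi
```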

Model-based Reinforcement Learning reinforcement-learning

Improving Sample Efficiency of Value Based Models Using Attention and Vision Transformers

no code implementations 1 Feb 2022 Amir Ardalan Kalantari, Mohammad Amini, Sarath Chandar, Doina Precup

Much of the recent success of deep reinforcement learning is owed to the neural architecture's potential to learn and use effective internal representations of the world.

reinforcement-learning

An Empirical Investigation of the Role of Pre-training in Lifelong Learning

1 code implementation 16 Dec 2021 Sanket Vaibhav Mehta, Darshan Patil, Sarath Chandar, Emma Strubell

We investigate existing methods in the context of large, pre-trained models and evaluate their performance on a variety of text and image classification tasks, including a large-scale study using a novel dataset of 15 diverse NLP tasks.

Continual Learning Image Classification

Scaling Laws for the Few-Shot Adaptation of Pre-trained Image Classifiers

no code implementations 13 Oct 2021 Gabriele Prato, Simon Guiroy, Ethan Caballero, Irina Rish, Sarath Chandar

Empirical science of neural scaling laws is a rapidly growing area of significant importance to the future of machine learning, particularly in the light of recent breakthroughs achieved by large-scale pre-trained models such as GPT-3, CLIP and DALL-E.

Few-Shot Learning Image Classification

Early-Stopping for Meta-Learning: Estimating Generalization from the Activation Dynamics

no code implementations 29 Sep 2021 Simon Guiroy, Christopher Pal, Sarath Chandar

To this end, we empirically show that as meta-training progresses, a model's generalization to a target distribution of novel tasks can be estimated by analysing the dynamics of its neural activations.

Few-Shot Learning Transfer Learning

Post-hoc Interpretability for Neural NLP: A Survey

no code implementations 10 Aug 2021 Andreas Madsen, Siva Reddy, Sarath Chandar

Neural networks for NLP are becoming increasingly complex and widespread, and there is growing concern about whether these models are responsible to use.

Local Structure Matters Most: Perturbation Study in NLU

no code implementations Findings (ACL) 2022 Louis Clouatre, Prasanna Parthasarathi, Amal Zouaq, Sarath Chandar

Recent research analyzing the sensitivity of natural language understanding models to word-order perturbations has shown that neural models are surprisingly insensitive to the order of words.
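
As a rough illustration of this kind of perturbation study (not the paper's exact protocol), the hypothetical helper below scrambles word order only within small local windows, so that local structure is disturbed while the global arrangement of the sentence is preserved:

```python
import random

def perturb_locally(sentence: str, window: int = 3, seed: int = 0) -> str:
    """Shuffle words inside fixed-size windows only.

    Hypothetical helper for illustration; perturbation protocols
    differ across studies of word-order sensitivity.
    """
    rng = random.Random(seed)
    words = sentence.split()
    out = []
    for i in range(0, len(words), window):
        chunk = words[i:i + window]
        rng.shuffle(chunk)   # perturb order within the window
        out.extend(chunk)
    return " ".join(out)

print(perturb_locally("the quick brown fox jumps over the lazy dog"))
```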

Natural Language Understanding

Do Encoder Representations of Generative Dialogue Models Encode Sufficient Information about the Task?

1 code implementation 20 Jun 2021 Prasanna Parthasarathi, Joelle Pineau, Sarath Chandar

In data-driven approaches, predicting the next utterance in a dialogue is contingent on encoding the user's input text to generate an appropriate and relevant response.

Text Generation

A Brief Study on the Effects of Training Generative Dialogue Models with a Semantic loss

1 code implementation SIGDIAL (ACL) 2021 Prasanna Parthasarathi, Mohamed Abdelsalam, Joelle Pineau, Sarath Chandar

Neural models trained for next-utterance generation in a dialogue task learn to mimic the n-gram sequences in the training set under training objectives like negative log-likelihood (NLL) or cross-entropy.
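
For reference, the NLL/cross-entropy objective named in the excerpt is the standard token-level loss sketched below (in PyTorch, with dummy shapes); the paper's proposed semantic loss is a different objective and is not reproduced here.

```python
import torch
import torch.nn.functional as F

# Standard next-utterance NLL / cross-entropy objective: average
# negative log-probability of each gold token under the model.
logits = torch.randn(4, 12, 1000)           # (batch, seq_len, vocab), dummy
targets = torch.randint(0, 1000, (4, 12))   # gold token ids, dummy

nll = F.cross_entropy(logits.reshape(-1, logits.size(-1)),
                      targets.reshape(-1))
```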

Language Modelling Semantic Similarity +2

Memory Augmented Optimizers for Deep Learning

1 code implementation ICLR 2022 Paul-Aymeric McRae, Prasanna Parthasarathi, Mahmoud Assran, Sarath Chandar

Popular approaches for minimizing loss in data-driven learning often involve an abstraction or an explicit retention of the history of gradients for efficient parameter updates.
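
Momentum is the usual "abstraction" of gradient history; the toy sketch below shows the "explicit retention" alternative in its simplest possible form, a uniform average over a bounded buffer. It is an assumption-laden illustration, not the paper's optimizer.

```python
import numpy as np
from collections import deque

class BufferedSGD:
    """Toy optimizer with an explicit memory of recent gradients.

    Illustration only: the paper's memory-augmented optimizers are
    more elaborate than this uniform average over a bounded buffer.
    """
    def __init__(self, lr=0.01, memory=10):
        self.lr = lr
        self.history = deque(maxlen=memory)  # explicit gradient retention

    def step(self, params, grad):
        self.history.append(grad)
        avg_grad = np.mean(self.history, axis=0)  # aggregate the history
        return params - self.lr * avg_grad
```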

TAG: Task-based Accumulated Gradients for Lifelong learning

1 code implementation 11 May 2021 Pranshu Malviya, Sarath Chandar, Balaraman Ravindran

We also show that our method performs better than several state-of-the-art methods in lifelong learning on complex datasets with a large number of tasks.

TAG

A Survey of Data Augmentation Approaches for NLP

1 code implementation Findings (ACL) 2021 Steven Y. Feng, Varun Gangal, Jason Wei, Sarath Chandar, Soroush Vosoughi, Teruko Mitamura, Eduard Hovy

In this paper, we present a comprehensive and unifying survey of data augmentation for NLP by summarizing the literature in a structured manner.

Data Augmentation

Out-of-Distribution Classification and Clustering

no code implementations 1 Jan 2021 Gabriele Prato, Sarath Chandar

This includes left-out classes from the same dataset, as well as entire datasets never trained on.

Classification General Classification +1

Maximum Reward Formulation In Reinforcement Learning

1 code implementation 8 Oct 2020 Sai Krishna Gottipati, Yashaswi Pathak, Rohan Nuttall, Sahir, Raviteja Chunduru, Ahmed Touati, Sriram Ganapathi Subramanian, Matthew E. Taylor, Sarath Chandar

Reinforcement learning (RL) algorithms typically deal with maximizing the expected cumulative return (discounted or undiscounted, finite or infinite horizon).
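
For concreteness, the standard objective described in the excerpt, and the maximum-reward alternative suggested by the title (our hedged reading; the paper gives the precise formulation), can be written as:

```latex
% Standard RL objective (discounted, infinite-horizon case):
J(\pi) = \mathbb{E}_{\pi}\left[ \sum_{t=0}^{\infty} \gamma^{t} r_{t} \right], \qquad 0 \le \gamma < 1
% Maximum-reward alternative suggested by the title:
J_{\max}(\pi) = \mathbb{E}_{\pi}\left[ \max_{t \ge 0} r_{t} \right]
```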

Drug Discovery reinforcement-learning

Slot Contrastive Networks: A Contrastive Approach for Representing Objects

no code implementations 18 Jul 2020 Evan Racah, Sarath Chandar

Unsupervised extraction of objects from low-level visual data is an important goal for further progress in machine learning.

Atari Games

The LoCA Regret: A Consistent Metric to Evaluate Model-Based Behavior in Reinforcement Learning

2 code implementations NeurIPS 2020 Harm van Seijen, Hadi Nekoei, Evan Racah, Sarath Chandar

For example, the common single-task sample-efficiency metric conflates improvements due to model-based learning with various other aspects, such as representation learning, making it difficult to assess true progress on model-based RL.

Model-based Reinforcement Learning reinforcement-learning +1

Chaotic Continual Learning

no code implementations ICML Workshop LifelongML 2020 Touraj Laleh, Mojtaba Faramarzi, Irina Rish, Sarath Chandar

Most approaches proposed for this issue try to compensate for the effects of parameter updates in the batch-incremental setup, in which the training model visits many samples over several epochs.

Continual Learning

PatchUp: A Regularization Technique for Convolutional Neural Networks

1 code implementation 14 Jun 2020 Mojtaba Faramarzi, Mohammad Amini, Akilesh Badrinaaraayanan, Vikas Verma, Sarath Chandar

Large capacity deep learning models are often prone to a high generalization gap when trained with a limited amount of labeled training data.

Towards Lossless Encoding of Sentences

1 code implementation ACL 2019 Gabriele Prato, Mathieu Duchesneau, Sarath Chandar, Alain Tapp

A lot of work has been done in the field of image compression via machine learning, but not much attention has been given to the compression of natural language.

Image Compression Sentence Embeddings +1

Structure Learning for Neural Module Networks

no code implementations WS 2019 Vardaan Pahuja, Jie Fu, Sarath Chandar, Christopher J. Pal

In current formulations of such networks, only the parameters of the neural modules and/or the order of their execution are learned.

Question Answering Visual Question Answering

Environments for Lifelong Reinforcement Learning

2 code implementations 26 Nov 2018 Khimya Khetarpal, Shagun Sodhani, Sarath Chandar, Doina Precup

To achieve general artificial intelligence, reinforcement learning (RL) agents should learn not only to optimize returns for one specific task but also to constantly build more complex skills and scaffold their knowledge about the world, without forgetting what has already been learned.

reinforcement-learning

Towards Training Recurrent Neural Networks for Lifelong Learning

no code implementations 16 Nov 2018 Shagun Sodhani, Sarath Chandar, Yoshua Bengio

Both of these models were proposed in the context of feedforward networks, and we evaluate the feasibility of using them for recurrent networks.

Language Expansion In Text-Based Games

no code implementations 17 May 2018 Ghulam Ahmed Ansari, Sagar J P, Sarath Chandar, Balaraman Ravindran

Text-based games are suitable test-beds for designing agents that can learn by interaction with the environment in the form of natural language text.

reinforcement-learning text-based games

Complex Sequential Question Answering: Towards Learning to Converse Over Linked Question Answer Pairs with a Knowledge Graph

1 code implementation 31 Jan 2018 Amrita Saha, Vardaan Pahuja, Mitesh M. Khapra, Karthik Sankaranarayanan, Sarath Chandar

Further, unlike existing large-scale QA datasets, which contain simple questions that can be answered from a single tuple, the questions in our dialogs require a larger subgraph of the KG.

Knowledge Graphs Question Answering

Memory Augmented Neural Networks with Wormhole Connections

no code implementations 30 Jan 2017 Caglar Gulcehre, Sarath Chandar, Yoshua Bengio

We use discrete addressing for read/write operations, which helps to substantially reduce the vanishing gradient problem with very long sequences.
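
A minimal sketch of discrete (hard) addressing, assuming the simplest argmax-style slot selection; hard addressing can also be trained with sampling or straight-through estimators, and the paper specifies its own scheme.

```python
import numpy as np

def hard_read(memory, query):
    """Read one memory slot via discrete (hard) addressing.

    Sketch only: argmax is the simplest discretization; training
    typically requires sampling-based or straight-through gradients.
    """
    scores = memory @ query          # similarity of query to each slot
    idx = int(np.argmax(scores))     # commit to a single address
    return memory[idx], idx

memory = np.random.randn(128, 32)    # 128 slots of width 32 (illustrative)
content, addr = hard_read(memory, np.random.randn(32))
```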

GuessWhat?! Visual object discovery through multi-modal dialogue

3 code implementations CVPR 2017 Harm de Vries, Florian Strub, Sarath Chandar, Olivier Pietquin, Hugo Larochelle, Aaron Courville

Our key contribution is the collection of a large-scale dataset consisting of 150K human-played games with a total of 800K visual question-answer pairs on 66K images.

Object Discovery

Dynamic Neural Turing Machine with Soft and Hard Addressing Schemes

no code implementations 30 Jun 2016 Caglar Gulcehre, Sarath Chandar, Kyunghyun Cho, Yoshua Bengio

We investigate the mechanisms and effects of learning to read and write into a memory through experiments on the Facebook bAbI tasks, using both a feedforward and a GRU controller.

Natural Language Inference Question Answering

A Correlational Encoder Decoder Architecture for Pivot Based Sequence Generation

no code implementations COLING 2016 Amrita Saha, Mitesh M. Khapra, Sarath Chandar, Janarthanan Rajendran, Kyunghyun Cho

However, no parallel training data is available between X and Y, though training data is available between X and Z and between Z and Y (as is often the case in many real-world applications).

Transliteration

Hierarchical Memory Networks

no code implementations 24 May 2016 Sarath Chandar, Sungjin Ahn, Hugo Larochelle, Pascal Vincent, Gerald Tesauro, Yoshua Bengio

In this paper, we explore a form of hierarchical memory network, which can be considered as a hybrid between hard and soft attention memory networks.

Hard Attention Question Answering

Bridge Correlational Neural Networks for Multilingual Multimodal Representation Learning

1 code implementation NAACL 2016 Janarthanan Rajendran, Mitesh M. Khapra, Sarath Chandar, Balaraman Ravindran

In this work, we address a real-world scenario where no direct parallel data is available between two views of interest (say, $V_1$ and $V_2$) but parallel data is available between each of these views and a pivot view ($V_3$).

Document Classification Representation Learning +1

TSEB: More Efficient Thompson Sampling for Policy Learning

no code implementations 10 Oct 2015 P. Prasanna, Sarath Chandar, Balaraman Ravindran

In this paper, we propose TSEB, a Thompson Sampling based algorithm with adaptive exploration bonus that aims to solve the problem with tighter PAC guarantees, while being cautious on the regret as well.
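
TSEB's adaptive bonus is defined in the paper; as background only, plain Thompson sampling for Bernoulli bandits looks like the sketch below, with a hypothetical `bonus` argument marking where a per-arm exploration bonus could enter.

```python
import numpy as np

def thompson_step(successes, failures, bonus=None, rng=np.random):
    """One round of Bernoulli Thompson sampling.

    `bonus` is a hypothetical per-arm exploration bonus added to the
    sampled values; TSEB's actual adaptive bonus is defined in the paper.
    """
    samples = rng.beta(successes + 1, failures + 1)  # posterior draws
    if bonus is not None:
        samples = samples + bonus
    return int(np.argmax(samples))                   # play the best draw

s = np.zeros(5); f = np.zeros(5)   # 5 arms, uniform Beta(1, 1) priors
arm = thompson_step(s, f)
```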

Reasoning about Linguistic Regularities in Word Embeddings using Matrix Manifolds

no code implementations 28 Jul 2015 Sridhar Mahadevan, Sarath Chandar

In this paper, we introduce a new approach to capture analogies in continuous word representations, based on modeling not just individual word vectors, but rather the subspaces spanned by groups of words.
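
One simple way to realize "the subspace spanned by a group of words" is an orthonormal basis from an SVD of the stacked vectors; this is an illustrative construction, not necessarily the paper's matrix-manifold machinery.

```python
import numpy as np

def group_subspace(word_vectors, rank=2):
    """Orthonormal basis for the subspace spanned by a group of words.

    Illustrative: the paper reasons about analogies on matrix manifolds;
    this only shows how a word group maps to a subspace, not a vector.
    """
    X = np.asarray(word_vectors)                      # (n_words, dim)
    _, _, vt = np.linalg.svd(X, full_matrices=False)
    return vt[:rank]                                  # top right-singular vectors

basis = group_subspace(np.random.randn(4, 50))        # dummy 50-d embeddings
```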

Word Embeddings

Clustering is Efficient for Approximate Maximum Inner Product Search

no code implementations 21 Jul 2015 Alex Auvolat, Sarath Chandar, Pascal Vincent, Hugo Larochelle, Yoshua Bengio

Efficient Maximum Inner Product Search (MIPS) is an important task that has a wide applicability in recommendation systems and classification with a large number of classes.
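
The clustering idea can be sketched as: partition the database offline, then score only items in the clusters whose centroids best match the query. The sketch below (plain k-means probing, with hypothetical parameter choices) shows the mechanism only; the paper's actual variant differs in its details.

```python
import numpy as np
from sklearn.cluster import KMeans

def approx_mips(database, query, n_clusters=16, n_probe=2):
    """Approximate maximum inner product search via cluster probing.

    Sketch only: in practice the clustering is built once, offline;
    the paper's algorithm differs from this plain k-means version.
    """
    km = KMeans(n_clusters=n_clusters, n_init=10).fit(database)
    # Rank clusters by inner product of the query with their centroids.
    order = np.argsort(-(km.cluster_centers_ @ query))[:n_probe]
    candidates = np.flatnonzero(np.isin(km.labels_, order))
    best = candidates[np.argmax(database[candidates] @ query)]
    return best

db = np.random.randn(1000, 32)
print(approx_mips(db, np.random.randn(32)))
```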

Recommendation Systems Word Embeddings

Correlational Neural Networks

2 code implementations 27 Apr 2015 Sarath Chandar, Mitesh M. Khapra, Hugo Larochelle, Balaraman Ravindran

CCA-based approaches learn a joint representation by maximizing the correlation of the views when projected into the common subspace.
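
As a concrete picture of that objective, the hypothetical helper below implements a simplified correlation-maximization loss between two projected views (a single correlation term only; to our reading, the paper's full CorrNet objective also combines this with reconstruction terms):

```python
import torch

def correlation_loss(h1, h2, eps=1e-8):
    """Negative per-dimension correlation between two views' projections.

    Simplified sketch: keeps only the correlation term, omitting the
    reconstruction losses used alongside it in correlational networks.
    """
    h1 = h1 - h1.mean(dim=0)                  # center each view
    h2 = h2 - h2.mean(dim=0)
    num = (h1 * h2).sum(dim=0)
    den = torch.sqrt((h1 ** 2).sum(dim=0) * (h2 ** 2).sum(dim=0) + eps)
    return -(num / den).sum()                 # maximize correlation

loss = correlation_loss(torch.randn(32, 10), torch.randn(32, 10))
```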

Representation Learning Transfer Learning
