Search Results for author: Balaraman Ravindran

Found 90 papers, 29 papers with code

Deception in Reinforced Autonomous Agents: The Unconventional Rabbit Hat Trick in Legislation

no code implementations • 7 May 2024 • Atharvan Dogra, Ameet Deshpande, John Nay, Tanmay Rajpurohit, Ashwin Kalyan, Balaraman Ravindran

Recent developments in large language models (LLMs), while offering a powerful foundation for developing natural language agents, raise safety concerns about them and the autonomous agents built upon them.

Deception Detection Philosophy

InSaAF: Incorporating Safety through Accuracy and Fairness | Are LLMs ready for the Indian Legal Domain?

no code implementations • 16 Feb 2024 • Yogesh Tripathi, Raghav Donakanti, Sahil Girhepuje, Ishan Kavathekar, Bhaskara Hanuma Vedula, Gokul S Krishnan, Shreya Goyal, Anmol Goel, Balaraman Ravindran, Ponnurangam Kumaraguru

Task performance and fairness scores of LLaMA and LLaMA-2 models indicate that the proposed $LSS_{\beta}$ metric can effectively determine the readiness of a model for safe usage in the legal sector.


LineConGraphs: Line Conversation Graphs for Effective Emotion Recognition using Graph Neural Networks

no code implementations • 4 Dec 2023 • Gokul S Krishnan, Sarala Padi, Craig S. Greenberg, Balaraman Ravindran, Dinesh Manocha, Ram D. Sriram

To overcome these limitations, we propose novel line conversation graph convolutional network (LineConGCN) and graph attention (LineConGAT) models for ERC analysis.

Emotion Recognition Graph Attention +1

Multi-Agent Learning of Efficient Fulfilment and Routing Strategies in E-Commerce

no code implementations • 20 Nov 2023 • Omkar Shelke, Pranavi Pathakota, Anandsingh Chauhan, Harshad Khadilkar, Hardik Meisheri, Balaraman Ravindran

This paper presents an integrated algorithmic framework for minimising product delivery costs in e-commerce (known as the cost-to-serve or C2S).

Decision Making

PolicyClusterGCN: Identifying Efficient Clusters for Training Graph Convolutional Networks

no code implementations • 25 Jun 2023 • Saket Gurukar, Shaileshh Bojja Venkatakrishnan, Balaraman Ravindran, Srinivasan Parthasarathy

Specifically, subgraph-based sampling approaches such as ClusterGCN and GraphSAINT have achieved state-of-the-art performance on node classification tasks.

graph partitioning Node Classification +1

GAN-MPC: Training Model Predictive Controllers with Parameterized Cost Functions using Demonstrations from Non-identical Experts

1 code implementation • 30 May 2023 • Returaj Burnwal, Anirban Santara, Nirav P. Bhatt, Balaraman Ravindran, Gaurav Aggarwal

We propose a novel approach that uses a generative adversarial network (GAN) to minimize the Jensen-Shannon divergence between the state-trajectory distributions of the demonstrator and the imitator.

Generative Adversarial Network Imitation Learning +1
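Outside the paper's setting, the Jensen-Shannon divergence itself has a simple closed form for discrete distributions; the minimal NumPy sketch below shows the quantity being minimized (it is not the paper's GAN-based estimator, which operates on state-trajectory distributions):

```python
import numpy as np

def kl(p, q):
    """Kullback-Leibler divergence KL(p || q) for discrete distributions."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    mask = p > 0  # terms with p_i = 0 contribute nothing
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

def js_divergence(p, q):
    """Jensen-Shannon divergence: symmetric and bounded above by log 2."""
    m = 0.5 * (np.asarray(p, float) + np.asarray(q, float))
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

# Identical distributions have zero divergence; disjoint ones reach log 2.
uniform = [0.25, 0.25, 0.25, 0.25]
zero = js_divergence(uniform, uniform)
```

The mixture `m` is strictly positive wherever either input is, which is what makes the JS divergence well defined even for distributions with disjoint support.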

Clustering Indices based Automatic Classification Model Selection

1 code implementation • 23 May 2023 • Sudarsun Santhiappan, Nitin Shravan, Balaraman Ravindran

We also propose an end-to-end Automated ML system for data classification based on our model selection method.

Classification Clustering +2

MABL: Bi-Level Latent-Variable World Model for Sample-Efficient Multi-Agent Reinforcement Learning

no code implementations • 12 Apr 2023 • Aravind Venugopal, Stephanie Milani, Fei Fang, Balaraman Ravindran

Unlike existing models, MABL is capable of encoding essential global information into the latent states during training while guaranteeing the decentralized execution of learned policies.

reinforcement-learning SMAC+

Are Models Trained on Indian Legal Data Fair?

no code implementations • 13 Mar 2023 • Sahil Girhepuje, Anmol Goel, Gokul S Krishnan, Shreya Goyal, Satyendra Pandey, Ponnurangam Kumaraguru, Balaraman Ravindran

We highlight the propagation of learnt algorithmic biases in the bail prediction task for models trained on Hindi legal documents.


Physics-Informed Model-Based Reinforcement Learning

1 code implementation • 5 Dec 2022 • Adithya Ramesh, Balaraman Ravindran

In these environments, the physics-informed version of our algorithm achieves significantly better average-return and sample efficiency.

Model-based Reinforcement Learning reinforcement-learning +1

GrabQC: Graph based Query Contextualization for automated ICD coding

no code implementations • 14 Jul 2022 • Jeshuren Chelladurai, Sudarsun Santhiappan, Balaraman Ravindran

We propose to automate this manual process by automatically constructing a query for the IR system using the entities auto-extracted from the clinical notes.

Information Retrieval Retrieval

Multi-Variate Time Series Forecasting on Variable Subsets

1 code implementation • 25 Jun 2022 • Jatin Chauhan, Aravindan Raghuveer, Rishi Saket, Jay Nandy, Balaraman Ravindran

Through systematic experiments across 4 datasets and 5 forecast models, we show that our technique is able to recover close to 95% of the models' performance even when only 15% of the original variables are present.

Multivariate Time Series Forecasting Time Series

Matching options to tasks using Option-Indexed Hierarchical Reinforcement Learning

no code implementations • 12 Jun 2022 • Kushal Chauhan, Soumya Chatterjee, Akash Reddy, Balaraman Ravindran, Pradeep Shenoy

The options framework in Hierarchical Reinforcement Learning breaks down overall goals into a combination of options or simpler tasks and associated policies, allowing for abstraction in the action space.

Continual Learning Hierarchical Reinforcement Learning +3
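The options framework described above decomposes behaviour into an initiation set, an intra-option policy, and a termination condition. A minimal, hypothetical sketch on a toy chain MDP (generic illustration, not any of the listed papers' implementations):

```python
from dataclasses import dataclass
from typing import Callable, Set

@dataclass
class Option:
    """Minimal options-framework triple: where the option may start,
    what it does while running, and when it stops."""
    initiation: Set[int]                  # states where the option may begin
    policy: Callable[[int], str]          # intra-option action choice
    termination: Callable[[int], float]   # probability of stopping in a state

def run_option(option, state, step, stop_threshold=1.0, max_steps=100):
    """Execute an option until its termination condition fires.
    stop_threshold = 1.0 makes stochastic termination deterministic here."""
    assert state in option.initiation
    for _ in range(max_steps):
        state = step(state, option.policy(state))
        if option.termination(state) >= stop_threshold:
            break
    return state

# Toy chain MDP: 'right' moves +1; the option runs until reaching state 3.
go_right = Option(initiation={0, 1, 2},
                  policy=lambda s: "right",
                  termination=lambda s: 1.0 if s >= 3 else 0.0)
final = run_option(go_right, 0, step=lambda s, a: s + 1)
```

Treating the whole triple as one temporally extended action is what lets hierarchical agents plan over "go to the landmark" rather than individual primitive moves.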

Evolutionary Approach to Security Games with Signaling

no code implementations • 29 Apr 2022 • Adam Żychowski, Jacek Mańdziuk, Elizabeth Bondi, Aravind Venugopal, Milind Tambe, Balaraman Ravindran

Green Security Games have become a popular way to model scenarios involving the protection of natural resources, such as wildlife.

A Survey of Adversarial Defences and Robustness in NLP

no code implementations • 12 Mar 2022 • Shreya Goyal, Sumanth Doddapaneni, Mitesh M. Khapra, Balaraman Ravindran

In the past few years, it has become increasingly evident that deep neural networks are not resilient enough to withstand adversarial perturbations in input data, leaving them vulnerable to attack.

Adversarial Defense named-entity-recognition +5

A Causal Approach for Unfair Edge Prioritization and Discrimination Removal

no code implementations • 29 Nov 2021 • Pavan Ravishankar, Pranshu Malviya, Balaraman Ravindran

We prove this result for the non-trivial non-parametric model setting when the cumulative unfairness cannot be expressed in terms of edge unfairness.

Smooth Imitation Learning via Smooth Costs and Smooth Policies

no code implementations • 3 Nov 2021 • Sapana Chaudhary, Balaraman Ravindran

We call our new smooth IL algorithm Smooth Policy and Cost Imitation Learning (SPaCIL, pronounced 'Special').

Continuous Control Imitation Learning +1

Multi-Tailed, Multi-Headed, Spatial Dynamic Memory refined Text-to-Image Synthesis

no code implementations • 15 Oct 2021 • Amrit Diggavi Seshadri, Balaraman Ravindran

Synthesizing high-quality, realistic images from text-descriptions is a challenging task, and current methods synthesize images from text in a multi-stage manner, typically by first generating a rough initial image and then refining image details at subsequent stages.

Image Generation

Semi-Supervised Deep Learning for Multiplex Networks

1 code implementation • 5 Oct 2021 • Anasua Mitra, Priyesh Vijayan, Ranbir Sanasam, Diganta Goswami, Srinivasan Parthasarathy, Balaraman Ravindran

Multiplex networks are complex graph structures in which a set of entities are connected to each other via multiple types of relations, each relation representing a distinct layer.

Representation Learning

Causal Contextual Bandits with Targeted Interventions

no code implementations • ICLR 2022 • Chandrasekar Subramanian, Balaraman Ravindran

We study a contextual bandit setting where the learning agent has the ability to perform interventions on targeted subsets of the population, apart from possessing qualitative causal side-information.

Multi-Armed Bandits

A Joint Training Framework for Open-World Knowledge Graph Embeddings

no code implementations • AKBC 2021 • Karthik V, Beethika Tripathi, Mitesh M Khapra, Balaraman Ravindran

However, we find that existing approaches suffer from one or more of four drawbacks – 1) They are not modular with respect to the choice of the KG embedding model 2) They ignore best practices for aligning two embedding spaces 3) They do not account for differences in training strategy needed when presented with datasets with different description sizes and 4) They do not produce entity embeddings for use by downstream tasks.

Dialogue Generation Entity Embeddings +4

TAG: Task-based Accumulated Gradients for Lifelong learning

1 code implementation • 11 May 2021 • Pranshu Malviya, Balaraman Ravindran, Sarath Chandar

We also show that our method performs better than several state-of-the-art methods in lifelong learning on complex datasets with a large number of tasks.

Continual Learning

qRRT: Quality-Biased Incremental RRT for Optimal Motion Planning in Non-Holonomic Systems

no code implementations • 7 Jan 2021 • Nahas Pareekutty, Francis James, Balaraman Ravindran, Suril V. Shah

This paper presents a sampling-based method for optimal motion planning in non-holonomic systems in the absence of known cost functions.

Motion Planning Optimal Motion Planning +2
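Independent of qRRT's quality bias, the core RRT step such planners build on is: sample a point, find the nearest tree node, and grow a bounded step toward the sample. A minimal sketch in a Euclidean toy space (no obstacles or non-holonomic constraints, unlike the paper's setting):

```python
import math
import random

def rrt_extend(tree, sample, step_size):
    """One RRT iteration: grow a new node from the nearest tree node,
    moving at most step_size toward the sampled point."""
    nearest = min(tree, key=lambda p: math.dist(p, sample))
    d = math.dist(nearest, sample)
    if d <= step_size:
        new = sample                      # sample is within reach
    else:
        t = step_size / d                 # interpolate a bounded step
        new = tuple(n + t * (s - n) for n, s in zip(nearest, sample))
    tree.append(new)
    return new

random.seed(0)
tree = [(0.0, 0.0)]                       # root of the tree
for _ in range(50):
    rrt_extend(tree, (random.uniform(0, 1), random.uniform(0, 1)), step_size=0.2)
```

In a real planner each new node would also be collision-checked and would record its parent so a path can be read back from the goal.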

Neural Fitted Q Iteration based Optimal Bidding Strategy in Real Time Reactive Power Market

no code implementations • 7 Jan 2021 • Jahnvi Patel, Devika Jay, Balaraman Ravindran, K. Shanti Swarup

In this paper, pioneering work on learning optimal bidding strategies from observation and experience in a three-stage reactive power market is reported.

Stochastic Optimization

Reinforcement Learning for Unified Allocation and Patrolling in Signaling Games with Uncertainty

no code implementations • 18 Dec 2020 • Aravind Venugopal, Elizabeth Bondi, Harshavardhan Kamarthi, Keval Dholakia, Balaraman Ravindran, Milind Tambe

We therefore first propose a novel GSG model that combines defender allocation, patrolling, real-time drone notification to human patrollers, and drones sending warning signals to attackers.

Decision Making Multiagent Systems

Relational Boosted Bandits

1 code implementation • 16 Dec 2020 • Ashutosh Kakadiya, Sriraam Natarajan, Balaraman Ravindran

Contextual bandits algorithms have become essential in real-world user interaction problems in recent years.

Attribute Descriptive +3

Hypergraph Partitioning using Tensor Eigenvalue Decomposition

no code implementations • 16 Nov 2020 • Deepak Maurya, Balaraman Ravindran

We also show improvement for the min-cut solution on 2-uniform hypergraphs (graphs) over the standard spectral partitioning algorithm.

graph partitioning hypergraph partitioning

Goal directed molecule generation using Monte Carlo Tree Search

no code implementations • 30 Oct 2020 • Anand A. Rajasekar, Karthik Raman, Balaraman Ravindran

One challenging and essential task in biochemistry is the generation of novel molecules with desired properties.


MADRaS : Multi Agent Driving Simulator

no code implementations • 2 Oct 2020 • Anirban Santara, Sohan Rudra, Sree Aditya Buridi, Meha Kaushik, Abhishek Naik, Bharat Kaul, Balaraman Ravindran

In this work, we present MADRaS, an open-source multi-agent driving simulator for use in the design and evaluation of motion planning algorithms for autonomous driving.

Autonomous Driving Car Racing +5

Reinforcement Learning for Improving Object Detection

no code implementations • 18 Aug 2020 • Siddharth Nayak, Balaraman Ravindran

The performance of a trained object detection neural network depends a lot on the image quality.

Object object-detection +3

A Causal Linear Model to Quantify Edge Flow and Edge Unfairness for UnfairEdge Prioritization and Discrimination Removal

no code implementations • 10 Jul 2020 • Pavan Ravishankar, Pranshu Malviya, Balaraman Ravindran

Unlike previous works that only make cautionary claims of discrimination and de-bias data after its generation, this paper attempts to prioritize unfair sources before mitigating their unfairness in the real world.

On Incorporating Structural Information to improve Dialogue Response Generation

1 code implementation • WS 2020 • Nikita Moghe, Priyesh Vijayan, Balaraman Ravindran, Mitesh M. Khapra

This requires capturing structural, sequential and semantic information from the conversation context and the background resources.

Response Generation

Understanding Dynamic Scenes using Graph Convolution Networks

1 code implementation • 9 May 2020 • Sravan Mylavarapu, Mahtab Sandhu, Priyesh Vijayan, K. Madhava Krishna, Balaraman Ravindran, Anoop Namboodiri

We present a novel Multi-Relational Graph Convolutional Network (MRGCN) based framework to model on-road vehicle behaviors from a sequence of temporally ordered frames as grabbed by a moving monocular camera.

Motion Segmentation Semantic Segmentation +1

Towards Transparent and Explainable Attention Models

2 code implementations • ACL 2020 • Akash Kumar Mohankumar, Preksha Nema, Sharan Narasimhan, Mitesh M. Khapra, Balaji Vasan Srinivasan, Balaraman Ravindran

To make attention mechanisms more faithful and plausible, we propose a modified LSTM cell with a diversity-driven training objective that ensures that the hidden representations learned at different time steps are diverse.


EMPIR: Ensembles of Mixed Precision Deep Networks for Increased Robustness against Adversarial Attacks

1 code implementation • ICLR 2020 • Sanchari Sen, Balaraman Ravindran, Anand Raghunathan

Our results indicate that EMPIR boosts the average adversarial accuracies by 42.6%, 15.2% and 10.5% for the DNN models trained on the MNIST, CIFAR-10 and ImageNet datasets respectively, when compared to single full-precision models, without sacrificing accuracy on the unperturbed inputs.

Self-Driving Cars

SEERL: Sample Efficient Ensemble Reinforcement Learning

no code implementations • 15 Jan 2020 • Rohan Saphal, Balaraman Ravindran, Dheevatsa Mudigere, Sasikanth Avancha, Bharat Kaul

However, ensemble methods are relatively less popular in reinforcement learning owing to the high sample complexity and computational expense involved in obtaining a diverse ensemble.

Continuous Control Ensemble Learning +3

Reinforcement Learning for Multi-Objective Optimization of Online Decisions in High-Dimensional Systems

no code implementations • 1 Oct 2019 • Hardik Meisheri, Vinita Baniwal, Nazneen N Sultana, Balaraman Ravindran, Harshad Khadilkar

This paper describes a purely data-driven solution to a class of sequential decision-making problems with a large number of concurrent online decisions, with applications to computing systems and operations research.

Decision Making Management +2

Option Encoder: A Framework for Discovering a Policy Basis in Reinforcement Learning

no code implementations • 9 Sep 2019 • Arjun Manoharan, Rahul Ramesh, Balaraman Ravindran

Option discovery and skill acquisition frameworks are integral to the functioning of a hierarchically organized reinforcement learning agent.

reinforcement-learning Reinforcement Learning (RL)

Let's Ask Again: Refine Network for Automatic Question Generation

1 code implementation • IJCNLP 2019 • Preksha Nema, Akash Kumar Mohankumar, Mitesh M. Khapra, Balaji Vasan Srinivasan, Balaraman Ravindran

It is desired that the generated question should be (i) grammatically correct (ii) answerable from the passage and (iii) specific to the given answer.

Decoder Question Generation +1

Influence maximization in unknown social networks: Learning Policies for Effective Graph Sampling

1 code implementation • 8 Jul 2019 • Harshavardhan Kamarthi, Priyesh Vijayan, Bryan Wilder, Balaraman Ravindran, Milind Tambe

A serious challenge when finding influential actors in real-world social networks is the lack of knowledge about the structure of the underlying network.

Graph Sampling

ExTra: Transfer-guided Exploration

no code implementations • 27 Jun 2019 • Anirban Santara, Rishabh Madan, Balaraman Ravindran, Pabitra Mitra

Given an optimal policy in a related task-environment, we show that its bisimulation distance from the current task-environment gives a lower bound on the optimal advantage of state-action pairs in the current task-environment.

Learning Interpretable Models Using an Oracle

no code implementations • 17 Jun 2019 • Abhishek Ghose, Balaraman Ravindran

Our work addresses this by: (a) showing that learning a training distribution (often different from the test distribution) can often increase accuracy of small models, and therefore may be used as a strategy to compensate for small sizes, and (b) providing a model-agnostic algorithm to learn such training distributions.

Sentence Embedding text-classification +1

MaMiC: Macro and Micro Curriculum for Robotic Reinforcement Learning

no code implementations • 17 May 2019 • Manan Tomar, Akhil Sathuluri, Balaraman Ravindran

Shaping in humans and animals has been shown to be a powerful tool for learning complex tasks as compared to learning in a randomized fashion.

reinforcement-learning Reinforcement Learning (RL) +1

Successor Options: An Option Discovery Framework for Reinforcement Learning

1 code implementation • 14 May 2019 • Rahul Ramesh, Manan Tomar, Balaraman Ravindran

This work adopts a complementary approach, where we attempt to discover options that navigate to landmark states.

Navigate reinforcement-learning +1

Interpretability with Accurate Small Models

no code implementations • 4 May 2019 • Abhishek Ghose, Balaraman Ravindran

Our technique identifies the training data distribution to learn from that leads to the highest accuracy for a model of a given size.

Bayesian Optimization

Network Representation Learning: Consolidation and Renewed Bearing

1 code implementation • 2 May 2019 • Saket Gurukar, Priyesh Vijayan, Aakash Srinivasan, Goonmeet Bajaj, Chen Cai, Moniba Keymanesh, Saravana Kumar, Pranav Maneriker, Anasua Mitra, Vedang Patel, Balaraman Ravindran, Srinivasan Parthasarathy

An important area of research that has emerged over the last decade is the use of graphs as a vehicle for non-linear dimensionality reduction in a manner akin to previous efforts based on manifold learning with uses for downstream database processing, machine learning and visualization.

Dimensionality Reduction General Classification +3

Polyphonic Music Composition with LSTM Neural Networks and Reinforcement Learning

no code implementations • 5 Feb 2019 • Harish Kumar, Balaraman Ravindran

On top of our LSTM neural network that learnt musical sequences in this representation, we built an RL agent that learnt to find combinations of songs whose joint dominance produced pleasant compositions.

reinforcement-learning Reinforcement Learning (RL)

An Active Learning Framework for Efficient Robust Policy Search

no code implementations • 1 Jan 2019 • Sai Kiran Narayanaswami, Nandan Sudarsanam, Balaraman Ravindran

Robust Policy Search is the problem of learning policies that do not degrade in performance when subject to unseen environment model parameters.

Active Learning Continuous Control +1

Hypergraph Clustering: A Modularity Maximization Approach

no code implementations • 28 Dec 2018 • Tarun Kumar, Sankaran Vaidyanathan, Harini Ananthapadmanabhan, Srinivasan Parthasarathy, Balaraman Ravindran

Clustering on hypergraphs has been garnering increased attention with potential applications in network analysis, VLSI design and computer vision, among others.


Studying the Plasticity in Deep Convolutional Neural Networks using Random Pruning

1 code implementation • 26 Dec 2018 • Deepak Mittal, Shweta Bhardwaj, Mitesh M. Khapra, Balaraman Ravindran

In this work, we report experiments which suggest that the comparable performance of the pruned network is not due to the specific criterion chosen but due to the inherent plasticity of deep neural networks which allows them to recover from the loss of pruned filters once the rest of the filters are fine-tuned.

Image Classification Image Segmentation +3
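Random filter pruning of the kind studied above can be sketched as zeroing a random subset of a convolution layer's output filters. This is a generic NumPy illustration of the masking step only, not the authors' training or fine-tuning pipeline:

```python
import numpy as np

def prune_filters_randomly(conv_weight, fraction, rng=None):
    """Zero out a random subset of output filters of a conv layer.
    conv_weight has shape (out_channels, in_channels, kh, kw)."""
    rng = rng or np.random.default_rng(0)
    n_out = conv_weight.shape[0]
    n_prune = int(round(fraction * n_out))
    pruned_idx = rng.choice(n_out, size=n_prune, replace=False)
    pruned = conv_weight.copy()          # leave the original weights intact
    pruned[pruned_idx] = 0.0             # kill the selected filters entirely
    return pruned, pruned_idx

# Toy layer with 8 filters; prune half of them at random.
w = np.ones((8, 3, 3, 3))
pruned, idx = prune_filters_randomly(w, fraction=0.5)
```

Fine-tuning the surviving filters afterwards is the step that, per the abstract, lets the network recover from the loss.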

Discovering hierarchies using Imitation Learning from hierarchy aware policies

no code implementations • 1 Dec 2018 • Ameet Deshpande, Harshavardhan Kamarthi, Balaraman Ravindran

Learning options that allow agents to exhibit temporally higher order behavior has proven to be useful in increasing exploration, reducing sample complexity and for various transfer scenarios.

Imitation Learning

Successor Options : An Option Discovery Algorithm for Reinforcement Learning

no code implementations • 27 Sep 2018 • Manan Tomar*, Rahul Ramesh*, Balaraman Ravindran

Additionally, we describe an Incremental Successor options model that iteratively builds options and explores in environments where exploration through primitive actions is inadequate to form the Successor Representations.

Hierarchical Reinforcement Learning reinforcement-learning +1

Improvements on Hindsight Learning

no code implementations • 16 Sep 2018 • Ameet Deshpande, Srikanth Sarma, Ashutosh Jha, Balaraman Ravindran

One such approach is Hindsight Experience replay which uses an off-policy Reinforcement Learning algorithm to learn a goal conditioned policy.

Policy Gradient Methods reinforcement-learning +1
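Hindsight Experience Replay, mentioned above, relabels failed transitions with goals that were actually achieved later in the episode, so sparse-reward trajectories still produce learning signal. A minimal sketch of the "future" relabeling strategy on toy tuples (the reward convention of 1 for goal reached is an assumption for illustration):

```python
import random

def her_relabel(episode, k=4, rng=random):
    """Hindsight relabeling ('future' strategy): for each transition, also
    store copies whose goal is a state actually reached later on.
    Each transition is (state, action, goal, next_state)."""
    relabeled = []
    for t, (s, a, g, s_next) in enumerate(episode):
        # Original transition: reward 1 only if the intended goal was reached.
        relabeled.append((s, a, g, s_next, 1.0 if s_next == g else 0.0))
        # Extra copies with goals sampled from states achieved from t onward.
        future = [step[3] for step in episode[t:]]
        for _ in range(min(k, len(future))):
            new_g = rng.choice(future)
            relabeled.append((s, a, new_g, s_next, 1.0 if s_next == new_g else 0.0))
    return relabeled

# Two-step episode that never reaches its intended goal (state 3).
episode = [(0, "right", 3, 1), (1, "right", 3, 2)]
out = her_relabel(episode, k=1)
```

Even though the original goal was missed, the relabeled copies contain transitions with reward 1, which is exactly the point of hindsight replay.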

Pack and Detect: Fast Object Detection in Videos Using Region-of-Interest Packing

no code implementations • 5 Sep 2018 • Athindran Ramesh Kumar, Balaraman Ravindran, Anand Raghunathan

Based on these observations, we propose Pack and Detect (PaD), an approach to reduce the computational requirements of object detection in videos.

Object object-detection +4

HOPF: Higher Order Propagation Framework for Deep Collective Classification

1 code implementation • 31 May 2018 • Priyesh Vijayan, Yash Chandak, Mitesh M. Khapra, Srinivasan Parthasarathy, Balaraman Ravindran

Given a graph where every node has certain attributes associated with it and some nodes have labels associated with them, Collective Classification (CC) is the task of assigning labels to every unlabeled node using information from the node as well as its neighbors.

Attribute Classification +1

Fusion Graph Convolutional Networks

1 code implementation • 31 May 2018 • Priyesh Vijayan, Yash Chandak, Mitesh M. Khapra, Srinivasan Parthasarathy, Balaraman Ravindran

State-of-the-art models for node classification on such attributed graphs use differentiable recursive functions that enable aggregation and filtering of neighborhood information from multiple hops.

General Classification Node Classification
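The recursive neighborhood aggregation these node-classification models build on is, at its simplest, the normalized propagation rule of a vanilla GCN layer. A small NumPy sketch of that generic rule (not the FusionGCN or HOPF architectures):

```python
import numpy as np

def gcn_layer(adj, features, weight):
    """One graph-convolution step: add self-loops, symmetrically normalize
    the adjacency, propagate features, then apply a ReLU."""
    a_hat = adj + np.eye(adj.shape[0])            # A + I (self-loops)
    deg = a_hat.sum(axis=1)
    d_inv_sqrt = np.diag(1.0 / np.sqrt(deg))
    norm = d_inv_sqrt @ a_hat @ d_inv_sqrt        # D^-1/2 (A + I) D^-1/2
    return np.maximum(norm @ features @ weight, 0.0)

# Two connected nodes end up averaging each other's features.
adj = np.array([[0.0, 1.0], [1.0, 0.0]])
x = np.array([[1.0], [3.0]])
out = gcn_layer(adj, x, np.eye(1))
```

Stacking such layers is what gives access to multi-hop neighborhood information, which is the dimension these papers vary.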

Language Expansion In Text-Based Games

no code implementations • 17 May 2018 • Ghulam Ahmed Ansari, Sagar J P, Sarath Chandar, Balaraman Ravindran

Text-based games are suitable test-beds for designing agents that can learn by interaction with the environment in the form of natural language text.

reinforcement-learning Reinforcement Learning (RL) +1

DiGrad: Multi-Task Reinforcement Learning with Shared Actions

no code implementations • 27 Feb 2018 • Parijat Dewangan, S Phaniteja, K. Madhava Krishna, Abhishek Sarkar, Balaraman Ravindran

In this paper, we propose a new approach for simultaneous training of multiple tasks sharing a set of common actions in continuous action spaces, which we call as DiGrad (Differential Policy Gradient).

Multi-Task Learning reinforcement-learning +1

Recovering from Random Pruning: On the Plasticity of Deep Convolutional Neural Networks

1 code implementation • 31 Jan 2018 • Deepak Mittal, Shweta Bhardwaj, Mitesh M. Khapra, Balaraman Ravindran

In this work, we report experiments which suggest that the comparable performance of the pruned network is not due to the specific criterion chosen but due to the inherent plasticity of deep neural networks which allows them to recover from the loss of pruned filters once the rest of the filters are fine-tuned.

Image Classification object-detection +1

Rate of Change Analysis for Interestingness Measures

no code implementations • 14 Dec 2017 • Nandan Sudarsanam, Nishanth Kumar, Abhishek Sharma, Balaraman Ravindran

We present a comprehensive analysis of 50 interestingness measures and classify them in accordance with the two properties.

General Classification

Efficient-UCBV: An Almost Optimal Algorithm using Variance Estimates

no code implementations • 9 Nov 2017 • Subhojyoti Mukherjee, K. P. Naveen, Nandan Sudarsanam, Balaraman Ravindran

We propose a novel variant of the UCB algorithm (referred to as Efficient-UCB-Variance (EUCBV)) for minimizing cumulative regret in the stochastic multi-armed bandit (MAB) setting.

Thompson Sampling
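For reference, the baseline index that variance-aware variants such as EUCBV refine is plain UCB1: after pulling each arm once, choose the arm maximizing empirical mean plus an exploration bonus. A sketch on a toy deterministic bandit (this is standard UCB1, not the paper's algorithm, which additionally uses variance estimates and arm elimination):

```python
import math

def ucb1(pull, n_arms, horizon):
    """UCB1: pick the arm maximizing mean + sqrt(2 ln t / pulls)."""
    counts = [0] * n_arms
    sums = [0.0] * n_arms
    for t in range(1, horizon + 1):
        if t <= n_arms:
            arm = t - 1  # play every arm once to initialize the estimates
        else:
            arm = max(range(n_arms),
                      key=lambda a: sums[a] / counts[a]
                      + math.sqrt(2.0 * math.log(t) / counts[a]))
        reward = pull(arm)
        counts[arm] += 1
        sums[arm] += reward
    return counts

# Toy bandit: arm 1 always pays 1, arm 0 always pays 0.
counts = ucb1(lambda a: float(a == 1), n_arms=2, horizon=200)
```

The bonus shrinks as an arm is pulled more, so the suboptimal arm is revisited only logarithmically often, which is the source of the cumulative-regret guarantees.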

Shared Learning : Enhancing Reinforcement in $Q$-Ensembles

no code implementations • 14 Sep 2017 • Rakesh R. Menon, Balaraman Ravindran

Deep Reinforcement Learning has been able to achieve amazing successes in a variety of domains from video games to continuous control by trying to maximize the cumulative reward.

Atari Games Continuous Control +3

RAIL: Risk-Averse Imitation Learning

1 code implementation • 20 Jul 2017 • Anirban Santara, Abhishek Naik, Balaraman Ravindran, Dipankar Das, Dheevatsa Mudigere, Sasikanth Avancha, Bharat Kaul

Generative Adversarial Imitation Learning (GAIL) is a state-of-the-art algorithm for learning policies when the expert's behavior is available as a fixed set of trajectories.

Autonomous Driving Continuous Control +1

Learning to Mix n-Step Returns: Generalizing lambda-Returns for Deep Reinforcement Learning

no code implementations • ICLR 2018 • Sahil Sharma, Girish Raguvir J, Srivatsan Ramesh, Balaraman Ravindran

Our second major contribution is that we propose a generalization of lambda-returns called Confidence-based Autodidactic Returns (CAR), wherein the RL agent learns the weighting of the n-step returns in an end-to-end manner.

Benchmarking Decision Making +2
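The lambda-return that CAR generalizes is a fixed exponential weighting over n-step returns; a minimal sketch of both quantities for a finite trajectory (generic RL bookkeeping, not the paper's learned end-to-end weighting):

```python
def n_step_returns(rewards, values, gamma):
    """All n-step returns from time 0: G^(n) = sum_{i<n} gamma^i r_i
    + gamma^n V(s_n). Expects len(values) == len(rewards) + 1."""
    T = len(rewards)
    return [sum(gamma**i * rewards[i] for i in range(n)) + gamma**n * values[n]
            for n in range(1, T + 1)]

def lambda_return(rewards, values, gamma, lam):
    """Lambda-return: (1 - lam) * sum_n lam^(n-1) G^(n), with the final
    return absorbing all the remaining weight."""
    gs = n_step_returns(rewards, values, gamma)
    T = len(gs)
    out = sum((1 - lam) * lam ** (n - 1) * gs[n - 1] for n in range(1, T))
    return out + lam ** (T - 1) * gs[-1]

# One reward of 1, then termination; bootstrap value 0.5 at the middle state.
gs = n_step_returns([1.0, 0.0], [0.0, 0.5, 0.0], gamma=1.0)
```

Setting lam = 0 recovers the 1-step (TD) return and lam = 1 the Monte Carlo return, which is exactly the interpolation CAR replaces with learned confidences.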

Thresholding Bandits with Augmented UCB

no code implementations • 7 Apr 2017 • Subhojyoti Mukherjee, K. P. Naveen, Nandan Sudarsanam, Balaraman Ravindran

In this paper we propose the Augmented-UCB (AugUCB) algorithm for a fixed-budget version of the thresholding bandit problem (TBP), where the objective is to identify a set of arms whose quality is above a threshold.

DyVEDeep: Dynamic Variable Effort Deep Neural Networks

no code implementations • 4 Apr 2017 • Sanjay Ganapathy, Swagath Venkataramani, Balaraman Ravindran, Anand Raghunathan

Complementary to these approaches, DyVEDeep is a dynamic approach that exploits the heterogeneity in the inputs to DNNs to improve their compute efficiency with comparable classification accuracy.

Learning to Repeat: Fine Grained Action Repetition for Deep Reinforcement Learning

no code implementations • 20 Feb 2017 • Sahil Sharma, Aravind Srinivas, Balaraman Ravindran

Reinforcement Learning algorithms can learn complex behavioral patterns for sequential decision making tasks wherein an agent interacts with an environment and acquires feedback in the form of rewards sampled from it.

Car Racing Decision Making +2

Learning to Multi-Task by Active Sampling

1 code implementation • ICLR 2018 • Sahil Sharma, Ashutosh Jha, Parikshit Hegde, Balaraman Ravindran

In this work, we propose an efficient multi-task learning framework which solves multiple goal-directed tasks in an on-line setup without the need for expert supervision.

Active Learning Meta-Learning +1

EPOpt: Learning Robust Neural Network Policies Using Model Ensembles

no code implementations • 5 Oct 2016 • Aravind Rajeswaran, Sarvjeet Ghotra, Balaraman Ravindran, Sergey Levine

Sample complexity and safety are major challenges when learning policies with reinforcement learning for real-world tasks, especially when the policies are represented using rich function approximators like deep neural networks.

Domain Adaptation

Dynamic Frame skip Deep Q Network

no code implementations • 17 May 2016 • Aravind Srinivas, Sahil Sharma, Balaraman Ravindran

Deep Reinforcement Learning methods have achieved state of the art performance in learning control policies for the games in the Atari 2600 domain.

Atari Games

Option Discovery in Hierarchical Reinforcement Learning using Spatio-Temporal Clustering

no code implementations • 17 May 2016 • Aravind Srinivas, Ramnandan Krishnamurthy, Peeyush Kumar, Balaraman Ravindran

This paper introduces an automated skill acquisition framework in reinforcement learning which involves identifying a hierarchical description of the given task in terms of abstract states and extended actions between abstract states.

Clustering Hierarchical Reinforcement Learning +3

Linear Bandit algorithms using the Bootstrap

no code implementations • 4 May 2016 • Nandan Sudarsanam, Balaraman Ravindran

One of the proposed methods, X-Random bootstrap, performs better than the baselines in-terms of cumulative regret across various degrees of noise and different number of trials.

Thompson Sampling

Bridge Correlational Neural Networks for Multilingual Multimodal Representation Learning

1 code implementation • NAACL 2016 • Janarthanan Rajendran, Mitesh M. Khapra, Sarath Chandar, Balaraman Ravindran

In this work, we address a real-world scenario where no direct parallel data is available between two views of interest (say, $V_1$ and $V_2$) but parallel data is available between each of these views and a pivot view ($V_3$).

Document Classification Representation Learning +2

Attend, Adapt and Transfer: Attentive Deep Architecture for Adaptive Transfer from multiple sources in the same domain

2 code implementations • 10 Oct 2015 • Janarthanan Rajendran, Aravind Srinivas, Mitesh M. Khapra, P. Prasanna, Balaraman Ravindran

Second, the agent should be able to selectively transfer, which is the ability to select and transfer from different and multiple source tasks for different parts of the state space of the target task.

TSEB: More Efficient Thompson Sampling for Policy Learning

no code implementations • 10 Oct 2015 • P. Prasanna, Sarath Chandar, Balaraman Ravindran

In this paper, we propose TSEB, a Thompson Sampling based algorithm with adaptive exploration bonus that aims to solve the problem with tighter PAC guarantees, while being cautious on the regret as well.

Thompson Sampling
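Vanilla Thompson sampling, which TSEB extends with an adaptive exploration bonus, can be sketched for Bernoulli rewards with Beta posteriors (generic textbook version, not the paper's algorithm):

```python
import random

def thompson_bernoulli(pull, n_arms, horizon, rng=None):
    """Thompson sampling: keep a Beta posterior per arm, draw one sample
    from each posterior, and pull the arm with the largest sample."""
    rng = rng or random.Random(0)        # seeded for reproducibility
    wins = [1] * n_arms                  # Beta(1, 1) uniform prior
    losses = [1] * n_arms
    counts = [0] * n_arms
    for _ in range(horizon):
        samples = [rng.betavariate(wins[a], losses[a]) for a in range(n_arms)]
        arm = max(range(n_arms), key=lambda a: samples[a])
        r = pull(arm)                    # Bernoulli reward in {0, 1}
        wins[arm] += r
        losses[arm] += 1 - r
        counts[arm] += 1
    return counts

# Toy bandit: arm 1 always pays 1, arm 0 always pays 0.
counts = thompson_bernoulli(lambda a: int(a == 1), n_arms=2, horizon=500)
```

Because each arm is pulled with the posterior probability that it is optimal, exploration decays automatically as the posteriors concentrate.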

Correlational Neural Networks

2 code implementations • 27 Apr 2015 • Sarath Chandar, Mitesh M. Khapra, Hugo Larochelle, Balaraman Ravindran

CCA based approaches learn a joint representation by maximizing correlation of the views when projected to the common subspace.

Representation Learning Transfer Learning
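The correlation objective described above is simple to state: project each view with its own weights and measure the Pearson correlation of the projections; CCA picks the weights that maximize it. A toy sketch of the quantity itself (plain linear projections, not the CorrNet autoencoder):

```python
import numpy as np

def projection_correlation(x, y, wx, wy):
    """Pearson correlation between two views after projecting each into a
    shared 1-D subspace with view-specific weights wx and wy."""
    zx = (x @ wx).ravel()
    zy = (y @ wy).ravel()
    zx -= zx.mean()                      # center both projections
    zy -= zy.mean()
    return float(zx @ zy / (np.linalg.norm(zx) * np.linalg.norm(zy)))

# Perfectly aligned toy views give correlation 1 under identity projections.
x = np.array([[1.0], [2.0], [3.0]])
y = 2.0 * x
r = projection_correlation(x, y, np.ones((1, 1)), np.ones((1, 1)))
```

CorrNet's contribution, per the abstract, is learning such a common subspace jointly with reconstruction objectives rather than by CCA's closed-form solution.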
