no code implementations • 16 Dec 2022 • Vishal Pallagani, Bharath Muppasani, Keerthiram Murugesan, Francesca Rossi, Lior Horesh, Biplav Srivastava, Francesco Fabiano, Andrea Loreggia
Large Language Models (LLMs) have been the subject of active research, significantly advancing the field of Natural Language Processing (NLP).
no code implementations • 23 Oct 2022 • Heshan Fernando, Han Shen, Miao Liu, Subhajit Chaudhury, Keerthiram Murugesan, Tianyi Chen
Machine learning problems with multiple objective functions arise either in learning with multiple criteria, where the learner must trade off performance metrics such as fairness, safety, and accuracy, or in multi-task learning, where multiple tasks are optimized jointly and share inductive bias.
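To make the trade-off concrete, here is a minimal sketch of weighted-sum scalarization of two objectives; the objectives, trade-off weight, and linear model are illustrative assumptions, not the method proposed in the entry above.

```python
import numpy as np

rng = np.random.default_rng(0)
X, y = rng.normal(size=(100, 3)), rng.normal(size=100)
w, lam, lr = np.zeros(3), 0.5, 0.05   # lam trades accuracy vs. "fairness"

for _ in range(200):
    grad_acc = 2 * X.T @ (X @ w - y) / len(y)   # squared-error gradient
    grad_fair = np.array([2 * w[0], 0.0, 0.0])  # stand-in penalty on feature 0
    w -= lr * (grad_acc + lam * grad_fair)      # descend the scalarized loss
```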
no code implementations • 22 Aug 2022 • Tsuyoshi Idé, Keerthiram Murugesan, Djallel Bouneffouf, Naoki Abe
The proposed framework is designed to accommodate any number of feature vectors in the form of a multi-mode tensor, thereby making it possible to capture, in a unified manner, the heterogeneity that may exist over user preferences, products, and campaign strategies.
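A minimal sketch of the multi-mode idea, scoring a (user, product, campaign) triple by contracting its feature vectors with a coefficient tensor; the shapes and random coefficients are illustrative assumptions, not the paper's estimator.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 3, 2))       # coefficient tensor, one mode per input
user, product, campaign = (rng.normal(size=4), rng.normal(size=3),
                           rng.normal(size=2))

# multilinear score: sum_{i,j,k} W[i,j,k] * user[i] * product[j] * campaign[k]
score = np.einsum("ijk,i,j,k->", W, user, product, campaign)
print(score)
```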
1 code implementation • 2 Feb 2022 • Keerthiram Murugesan, Vijay Sadashivaiah, Ronny Luss, Karthikeyan Shanmugam, Pin-Yu Chen, Amit Dhurandhar
Knowledge transfer between heterogeneous source and target networks and tasks has received significant attention recently, as large amounts of high-quality labeled data can be difficult to obtain in many applications.
no code implementations • AAAI Workshop CLeaR 2022 • Kinjal Basu, Keerthiram Murugesan, Mattia Atzeni, Pavan Kapanipathi, Kartik Talamadupula, Tim Klinger, Murray Campbell, Mrinmaya Sachan, Gopal Gupta
These rules are learned in an online manner and applied with an ASP solver to predict an action for the agent (a minimal solver sketch follows the task tags below).
Inductive Logic Programming • Natural Language Understanding • +1
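A minimal sketch, using the clingo Python API, of applying learned rules with an ASP solver to pick an action; the rule text and predicates are invented for illustration and are not the rules the paper learns.

```python
import clingo

# Stand-in for rules learned online: take any portable object not yet held.
learned_rules = """
action(take(O)) :- object(O), portable(O), not held(O).
object(key). portable(key).
"""

def predict_actions(program: str):
    actions = []
    ctl = clingo.Control(["0"])              # enumerate all answer sets
    ctl.add("base", [], program)
    ctl.ground([("base", [])])
    ctl.solve(on_model=lambda m: actions.extend(
        str(a) for a in m.symbols(shown=True) if a.name == "action"))
    return actions

print(predict_actions(learned_rules))        # e.g. ['action(take(key))']
```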
no code implementations • ICLR 2022 • Mattia Atzeni, Shehzaad Dhuliawala, Keerthiram Murugesan, Mrinmaya Sachan
Text-based games (TBGs) have emerged as promising environments for driving research in grounded language understanding and for studying problems such as generalization and sample efficiency.
Out-of-Distribution Generalization • Reinforcement Learning • +2
no code implementations • ACL 2021 • Keerthiram Murugesan, Mattia Atzeni, Pavan Kapanipathi, Kartik Talamadupula, Mrinmaya Sachan, Murray Campbell
Text-based games (TBGs) have emerged as useful benchmarks for evaluating progress at the intersection of grounded language understanding and reinforcement learning (RL).
no code implementations • 9 Jun 2021 • Keerthiram Murugesan, Subhajit Chaudhury, Kartik Talamadupula
This improves the agent's overall understanding of the game 'scene' and of objects' relationships to the world around them, and the variety of visual representations on offer allows the agent to form better generalizations of those relationships.
no code implementations • 12 Oct 2020 • Grady Booch, Francesco Fabiano, Lior Horesh, Kiran Kate, Jon Lenchner, Nick Linck, Andrea Loreggia, Keerthiram Murugesan, Nicholas Mattei, Francesca Rossi, Biplav Srivastava
This paper proposes a research direction to advance AI which draws inspiration from cognitive theories of human decision making.
2 code implementations • 8 Oct 2020 • Keerthiram Murugesan, Mattia Atzeni, Pavan Kapanipathi, Pushkar Shukla, Sadhana Kumaravel, Gerald Tesauro, Kartik Talamadupula, Mrinmaya Sachan, Murray Campbell
Text-based games have emerged as an important test-bed for Reinforcement Learning (RL) research, requiring RL agents to combine grounded language understanding with sequential decision making.
Ranked #1 on Commonsense Reasoning for RL on commonsense-rl
no code implementations • 12 Jul 2020 • Keerthiram Murugesan, Mattia Atzeni, Pavan Kapanipathi, Pushkar Shukla, Sadhana Kumaravel, Gerald Tesauro, Kartik Talamadupula, Mrinmaya Sachan, Murray Campbell
We introduce a number of RL agents that combine the sequential context with a dynamic graph representation of their beliefs of the world and commonsense knowledge from ConceptNet in different ways.
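A minimal sketch of pulling commonsense relations from ConceptNet's public REST API (api.conceptnet.io); the endpoint and response fields follow the documented API, but the simple edge listing shown here is illustrative and is not the agents' graph construction.

```python
import requests

def related_concepts(term: str, limit: int = 5):
    """Fetch (relation, start, end) triples for an English term."""
    url = f"https://api.conceptnet.io/c/en/{term}"
    edges = requests.get(url, params={"limit": limit}).json()["edges"]
    return [(e["rel"]["label"], e["start"]["label"], e["end"]["label"])
            for e in edges]

for rel, start, end in related_concepts("apple"):
    print(f"{start} --{rel}--> {end}")
```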
no code implementations • 2 May 2020 • Keerthiram Murugesan, Mattia Atzeni, Pushkar Shukla, Mrinmaya Sachan, Pavan Kapanipathi, Kartik Talamadupula
In this paper, we consider the recent trend of evaluating progress in reinforcement learning by using text-based environments and games as evaluation benchmarks.
no code implementations • ICLR 2018 • Keerthiram Murugesan, Jaime Carbonell
Lifelong learning poses considerable challenges in terms of effectiveness (minimizing prediction errors for all tasks) and overall computational tractability for real-time performance.
no code implementations • NeurIPS 2017 • Keerthiram Murugesan, Jaime Carbonell
This paper addresses the challenge of learning from peers in an online multitask setting.
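A minimal sketch of the peer-query idea: when a task's own prediction margin is low, fall back on a relationship-weighted vote from the other tasks' predictors. The threshold, trust weights, and models here are illustrative assumptions, not the paper's algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)
T, d = 3, 5
W = rng.normal(scale=0.1, size=(T, d))   # one linear predictor per task
tau = np.full((T, T), 1.0 / T)           # uniform inter-task trust weights

def predict(task, x, conf_threshold=0.5):
    margin = W[task] @ x
    if abs(margin) >= conf_threshold:    # confident: use the task's own model
        return np.sign(margin)
    peer_vote = tau[task] @ (W @ x)      # otherwise: weighted peer ensemble
    return np.sign(peer_vote)

print(predict(0, rng.normal(size=d)))
```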
no code implementations • 3 Mar 2017 • Keerthiram Murugesan, Jaime Carbonell, Yiming Yang
This paper presents a new multitask learning framework that learns a shared representation among the tasks, incorporating both task and feature clusters.
no code implementations • 2 Mar 2017 • Keerthiram Murugesan, Jaime Carbonell
This paper introduces self-paced task selection to multitask learning: instances from more closely related tasks are selected in an easier-to-harder progression, emulating an effective human education strategy in a multitask machine learning setting.
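A minimal sketch of self-paced task selection: train on the currently easiest tasks (lowest loss) first and grow the pool over rounds; the loss values and pool schedule are stand-ins, not the paper's selection rule.

```python
import numpy as np

losses = np.random.default_rng(0).uniform(size=8)  # per-task losses (stand-in)
for rnd, pool_size in enumerate([2, 4, 8], start=1):
    chosen = np.argsort(losses)[:pool_size]        # easiest tasks this round
    print(f"round {rnd}: train on tasks {sorted(chosen.tolist())}")
    losses[chosen] *= 0.5                          # pretend training helped
```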
no code implementations • NeurIPS 2016 • Keerthiram Murugesan, Hanxiao Liu, Jaime Carbonell, Yiming Yang
This paper addresses the challenge of jointly learning both the per-task model parameters and the inter-task relationships in a multi-task online learning setting.
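A minimal sketch of the alternating idea: update per-task weights with a gradient step coupled through the relationship matrix, then re-estimate the matrix from the weights. The squared loss, coupling, and normalization are illustrative choices, not the paper's exact online updates.

```python
import numpy as np

rng = np.random.default_rng(0)
T, d, lr = 3, 4, 0.1
W = np.zeros((T, d))                 # per-task model parameters
Omega = np.eye(T)                    # inter-task relationship matrix

for _ in range(100):
    t = rng.integers(T)              # an example arrives for task t
    x, y = rng.normal(size=d), rng.normal()
    err = W[t] @ x - y
    W -= lr * err * np.outer(Omega[:, t], x)   # update coupled via Omega
    G = W @ W.T + 1e-6 * np.eye(T)             # relationships from weights
    Omega = G / np.trace(G)
```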
no code implementations • 10 Nov 2016 • Keerthiram Murugesan, Jaime Carbonell
The problem is formulated as a regularization-based approach called Multi-Task Multiple Kernel Relationship Learning (MK-MTRL), which models the task relationship matrix from the weights learned from latent feature spaces of task-specific base kernels.
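A minimal sketch of the multiple-kernel idea behind MK-MTRL: each task combines shared base kernels with its own weights, and a task relationship matrix is read off from those weights. The kernels, weights, and Gram-style relationship estimate are illustrative assumptions, not MK-MTRL's optimization procedure.

```python
import numpy as np

def rbf(X, gamma):
    """RBF base kernel on rows of X."""
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

X = np.random.default_rng(0).normal(size=(20, 4))
base_kernels = [rbf(X, g) for g in (0.1, 1.0, 10.0)]

# per-task weights over the base kernels (stand-ins for learned values)
B = np.abs(np.random.default_rng(1).normal(size=(3, len(base_kernels))))
task_kernels = [sum(b * K for b, K in zip(row, base_kernels)) for row in B]
Omega = B @ B.T                      # task relationship matrix from weights
print(Omega.round(2))
```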