Search Results for author: Aishwarya Padmakumar

Found 16 papers, 4 papers with code

Dialog Acts for Task Driven Embodied Agents

no code implementations SIGDIAL (ACL) 2022 Spandana Gella, Aishwarya Padmakumar, Patrick Lange, Dilek Hakkani-Tur

Embodied agents need to be able to interact in natural language: understanding task descriptions and asking appropriate follow-up questions to obtain the information necessary to accomplish tasks successfully for a wide range of users.

Natural Language Understanding

VLN-Video: Utilizing Driving Videos for Outdoor Vision-and-Language Navigation

no code implementations 5 Feb 2024 Jialu Li, Aishwarya Padmakumar, Gaurav Sukhatme, Mohit Bansal

Outdoor Vision-and-Language Navigation (VLN) requires an agent to navigate through realistic 3D outdoor environments based on natural language instructions.

Language Modelling · Masked Language Modeling +2

Data-Efficient Alignment of Large Language Models with Human Feedback Through Natural Language

no code implementations 24 Nov 2023 Di Jin, Shikib Mehri, Devamanyu Hazarika, Aishwarya Padmakumar, Sungjin Lee, Yang Liu, Mahdi Namazifar

Learning from human feedback is a prominent technique to align the output of large language models (LLMs) with human expectations.

Multimodal Contextualized Plan Prediction for Embodied Task Completion

no code implementations 10 May 2023 Mert İnan, Aishwarya Padmakumar, Spandana Gella, Patrick Lange, Dilek Hakkani-Tur

Task planning is an important component of traditional robotics systems, enabling robots to compose fine-grained skills to perform more complex tasks.

KILM: Knowledge Injection into Encoder-Decoder Language Models

1 code implementation 17 Feb 2023 Yan Xu, Mahdi Namazifar, Devamanyu Hazarika, Aishwarya Padmakumar, Yang Liu, Dilek Hakkani-Tür

Large pre-trained language models (PLMs) have been shown to retain implicit knowledge within their parameters.

Entity Disambiguation

Dialog Acts for Task-Driven Embodied Agents

no code implementations 26 Sep 2022 Spandana Gella, Aishwarya Padmakumar, Patrick Lange, Dilek Hakkani-Tur

Embodied agents need to be able to interact in natural language: understanding task descriptions and asking appropriate follow-up questions to obtain the information necessary to accomplish tasks successfully for a wide range of users.

Natural Language Understanding

On the Limits of Evaluating Embodied Agent Model Generalization Using Validation Sets

no code implementations insights (ACL) 2022 Hyounghun Kim, Aishwarya Padmakumar, Di Jin, Mohit Bansal, Dilek Hakkani-Tur

Natural language guided embodied task completion is a challenging problem since it requires understanding natural language instructions, aligning them with egocentric visual observations, and choosing appropriate actions to execute in the environment to produce desired changes.

Rome was built in 1776: A Case Study on Factual Correctness in Knowledge-Grounded Response Generation

1 code implementation 11 Oct 2021 Sashank Santhanam, Behnam Hedayatnia, Spandana Gella, Aishwarya Padmakumar, Seokhwan Kim, Yang Liu, Dilek Hakkani-Tur

We demonstrate the benefit of our Conv-FEVER dataset by showing that models trained on this data perform reasonably well at detecting responses that are factually inconsistent with the provided knowledge, as evaluated on our human-annotated data.

Response Generation

TEACh: Task-driven Embodied Agents that Chat

3 code implementations 1 Oct 2021 Aishwarya Padmakumar, Jesse Thomason, Ayush Shrivastava, Patrick Lange, Anjali Narayan-Chen, Spandana Gella, Robinson Piramuthu, Gokhan Tur, Dilek Hakkani-Tur

Robots operating in human spaces must be able to engage in natural language interaction with people, both understanding and executing instructions, and using conversation to resolve ambiguity and recover from mistakes.

Dialogue Understanding

Dialog as a Vehicle for Lifelong Learning

no code implementations 26 Jun 2020 Aishwarya Padmakumar, Raymond J. Mooney

Dialog systems research has primarily focused on two main types of applications: task-oriented dialog systems that learn to use clarification to aid in understanding a goal, and open-ended dialog systems that are expected to carry out unconstrained "chit-chat" conversations.

Position

Dialog Policy Learning for Joint Clarification and Active Learning Queries

no code implementations 9 Jun 2020 Aishwarya Padmakumar, Raymond J. Mooney

Intelligent systems need to be able to recover from mistakes, resolve uncertainty, and adapt to novel concepts not seen during training.

Active Learning · Image Retrieval +2

Learning a Policy for Opportunistic Active Learning

no code implementations EMNLP 2018 Aishwarya Padmakumar, Peter Stone, Raymond J. Mooney

Active learning identifies data points to label that are expected to be the most useful in improving a supervised model.

Active Learning · Object +3
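For illustration, a minimal sketch of the pool-based active learning loop that the entry above builds on: a model is trained on a small labeled set and repeatedly queries the unlabeled example it is least confident about. This is generic uncertainty sampling in Python with scikit-learn, not the opportunistic policy-learning method proposed in the paper; the dataset, model, and query budget are arbitrary choices made for the sketch.

    # Generic pool-based active learning via uncertainty sampling (illustrative only).
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression

    X, y = make_classification(n_samples=500, n_features=20, random_state=0)

    # Seed the labeled set with a few examples from each class; the rest form the pool.
    labeled = [int(i) for i in np.where(y == 0)[0][:5]] + [int(i) for i in np.where(y == 1)[0][:5]]
    pool = [i for i in range(len(X)) if i not in labeled]

    model = LogisticRegression(max_iter=1000)
    for _ in range(20):  # 20 query rounds
        model.fit(X[labeled], y[labeled])
        probs = model.predict_proba(X[pool])
        # Query the pool example whose top-class probability is lowest,
        # i.e. the point the current model is least confident about.
        query = pool[int(np.argmin(probs.max(axis=1)))]
        labeled.append(query)
        pool.remove(query)

    print(f"Labeled {len(labeled)} examples after 20 rounds of active learning.")

The papers listed here replace this fixed uncertainty heuristic with a learned policy that decides when and what to query, including through dialog.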
