Search Results for author: Raymond J. Mooney

Found 36 papers, 14 papers with code

Self-Critical Reasoning for Robust Visual Question Answering

1 code implementation NeurIPS 2019 Jialin Wu, Raymond J. Mooney

Visual Question Answering (VQA) deep-learning systems tend to capture superficial statistical correlations in the training data because of strong language priors and fail to generalize to test data with a significantly different question-answer (QA) distribution.

Question Answering · Visual Question Answering

Learning to Update Natural Language Comments Based on Code Changes

1 code implementation ACL 2020 Sheena Panthaplackel, Pengyu Nie, Milos Gligoric, Junyi Jessy Li, Raymond J. Mooney

We formulate the novel task of automatically updating an existing natural language comment based on changes in the body of code it accompanies.

Learning Deep Semantics for Test Completion

1 code implementation 20 Feb 2023 Pengyu Nie, Rahul Banerjee, Junyi Jessy Li, Raymond J. Mooney, Milos Gligoric

We formalize the novel task of test completion to automatically complete the next statement in a test method based on the context of prior statements and the code under test.

Code Completion · Code Generation

Using Natural Language for Reward Shaping in Reinforcement Learning

1 code implementation 5 Mar 2019 Prasoon Goyal, Scott Niekum, Raymond J. Mooney

A common approach to reduce interaction time with the environment is to use reward shaping, which involves carefully designing reward functions that provide the agent intermediate rewards for progress towards the goal.

Montezuma's Revenge · reinforcement-learning +1
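
The abstract above describes reward shaping only in general terms. As an illustration (not the method of the paper, which derives shaping rewards from natural language), here is a minimal sketch of classic potential-based reward shaping on a hypothetical grid world; the goal location, potential function, and discount factor are invented for the example.

```python
# Potential-based reward shaping sketch: the agent receives an extra
# intermediate reward F(s, s') = gamma * phi(s') - phi(s) for progress
# toward the goal. All constants below are hypothetical.

GOAL = (4, 4)
GAMMA = 0.99

def potential(state):
    """Negative Manhattan distance to the goal: higher when closer."""
    x, y = state
    return -(abs(GOAL[0] - x) + abs(GOAL[1] - y))

def shaped_reward(state, next_state, env_reward):
    """Environment reward plus the shaping term.

    Potential-based shaping of this form is known to preserve the
    optimal policy of the original task.
    """
    return env_reward + GAMMA * potential(next_state) - potential(state)

# A step toward the goal earns a positive shaping bonus,
# a step away from it earns a negative one.
toward = shaped_reward((0, 0), (1, 0), 0.0)
away = shaped_reward((1, 0), (0, 0), 0.0)
```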

Systematic Generalization on gSCAN with Language Conditioned Embedding

2 code implementations Asian Chapter of the Association for Computational Linguistics 2020 Tong Gao, Qi Huang, Raymond J. Mooney

Systematic Generalization refers to a learning algorithm's ability to extrapolate learned behavior to unseen situations that are distinct but semantically similar to its training data.

Systematic Generalization

Deep Just-In-Time Inconsistency Detection Between Comments and Source Code

1 code implementation 4 Oct 2020 Sheena Panthaplackel, Junyi Jessy Li, Milos Gligoric, Raymond J. Mooney

For extrinsic evaluation, we show the usefulness of our approach by combining it with a comment update model to build a more comprehensive automatic comment maintenance system which can both detect and resolve inconsistent comments based on code changes.

PixL2R: Guiding Reinforcement Learning Using Natural Language by Mapping Pixels to Rewards

1 code implementation ICML Workshop LaReL 2020 Prasoon Goyal, Scott Niekum, Raymond J. Mooney

Reinforcement learning (RL), particularly in sparse reward settings, often requires prohibitively large numbers of interactions with the environment, thereby limiting its applicability to complex problems.

reinforcement-learning · Reinforcement Learning (RL) +1

Representing Meaning with a Combination of Logical and Distributional Models

1 code implementation CL 2016 I. Beltagy, Stephen Roller, Pengxiang Cheng, Katrin Erk, Raymond J. Mooney

In this paper, we focus on the three components of a practical system integrating logical and distributional models: 1) Parsing and task representation is the logic-based part where input problems are represented in probabilistic logic.

Lexical Entailment · Natural Language Inference +2

Leveraging Discourse Information Effectively for Authorship Attribution

1 code implementation IJCNLP 2017 Su Wang, Elisa Ferracane, Raymond J. Mooney

We explore techniques to maximize the effectiveness of discourse information in the task of authorship attribution.

Authorship Attribution

Joint Image Captioning and Question Answering

no code implementations 22 May 2018 Jialin Wu, Zeyuan Hu, Raymond J. Mooney

Answering visual questions requires acquiring everyday commonsense knowledge and modeling the semantic connections among different parts of an image, which is difficult for VQA systems to learn when the only supervision comes from answers.

Image Captioning · Question Answering +1

Using Sentence-Level LSTM Language Models for Script Inference

no code implementations ACL 2016 Karl Pichotta, Raymond J. Mooney

There is a small but growing body of research on statistical scripts, models of event sequences that allow probabilistic inference of implicit events from documents.

Sentence

Supervised and Unsupervised Ensembling for Knowledge Base Population

no code implementations 16 Apr 2016 Nazneen Fatema Rajani, Raymond J. Mooney

We present results on combining supervised and unsupervised methods to ensemble multiple systems for two popular Knowledge Base Population (KBP) tasks, Cold Start Slot Filling (CSSF) and Tri-lingual Entity Discovery and Linking (TEDL).

Knowledge Base Population · slot-filling +1

Training a Multilingual Sportscaster: Using Perceptual Context to Learn Language

no code implementations 16 Jan 2014 David L. Chen, Joohyun Kim, Raymond J. Mooney

We present a novel framework for learning to interpret and generate language using only perceptual context as supervision.

Descriptive Translation

Learning a Policy for Opportunistic Active Learning

no code implementations EMNLP 2018 Aishwarya Padmakumar, Peter Stone, Raymond J. Mooney

Active learning identifies data points to label that are expected to be the most useful in improving a supervised model.

Active Learning · Object +3
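
The abstract above summarizes the core idea of active learning: querying labels for the examples expected to help the model most. As a hypothetical illustration only (the paper learns a query policy; this is the classic uncertainty-sampling baseline, with invented probabilities), pool-based query selection can be sketched as:

```python
# Uncertainty sampling sketch: from a pool of unlabeled examples with
# predicted class probabilities, query the ones the model is least sure
# about (highest predictive entropy). The pool values are illustrative.
import math

def entropy(probs):
    """Shannon entropy of a predicted class distribution."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def select_query(pool_probs, k=1):
    """Return indices of the k most uncertain examples to label next."""
    ranked = sorted(range(len(pool_probs)),
                    key=lambda i: entropy(pool_probs[i]),
                    reverse=True)
    return ranked[:k]

pool = [
    [0.98, 0.02],  # confident prediction
    [0.55, 0.45],  # near-uniform: most useful to label
    [0.80, 0.20],
]
chosen = select_query(pool)
```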

Dialog for Language to Code

no code implementations IJCNLP 2017 Shobhit Chaurasia, Raymond J. Mooney

Generating computer code from natural language descriptions has been a long-standing problem.

Code Generation

Do Human Rationales Improve Machine Explanations?

no code implementations WS 2019 Julia Strout, Ye Zhang, Raymond J. Mooney

Work on "learning with rationales" shows that humans providing explanations to a machine learning system can improve the system's predictive accuracy.

BIG-bench Machine Learning · General Classification +2

Hidden State Guidance: Improving Image Captioning using An Image Conditioned Autoencoder

no code implementations 31 Oct 2019 Jialin Wu, Raymond J. Mooney

Most RNN-based image captioning models receive supervision on the output words to mimic human captions.

Image Captioning · Sentence

Associating Natural Language Comment and Source Code Entities

no code implementations 13 Dec 2019 Sheena Panthaplackel, Milos Gligoric, Raymond J. Mooney, Junyi Jessy Li

Comments are an integral part of software development; they are natural language descriptions associated with source code elements.

Dialog Policy Learning for Joint Clarification and Active Learning Queries

no code implementations 9 Jun 2020 Aishwarya Padmakumar, Raymond J. Mooney

Intelligent systems need to be able to recover from mistakes, resolve uncertainty, and adapt to novel concepts not seen during training.

Active Learning · Image Retrieval +2

Dialog as a Vehicle for Lifelong Learning

no code implementations 26 Jun 2020 Aishwarya Padmakumar, Raymond J. Mooney

Dialog systems research has primarily been focused on two main types of applications: task-oriented dialog systems that learn to use clarification to aid in understanding a goal, and open-ended dialog systems that are expected to carry out unconstrained "chit chat" conversations.

Position

Improving VQA and its Explanations by Comparing Competing Explanations

no code implementations 28 Jun 2020 Jialin Wu, Liyan Chen, Raymond J. Mooney

Most recent state-of-the-art Visual Question Answering (VQA) systems are opaque black boxes that are only trained to fit the answer distribution given the question and visual content.

Question Answering · Visual Question Answering

Zero-shot Task Adaptation using Natural Language

no code implementations 5 Jun 2021 Prasoon Goyal, Raymond J. Mooney, Scott Niekum

Imitation learning and instruction-following are two common approaches to communicate a user's intent to a learning agent.

Imitation Learning · Instruction Following

Towards Automated Error Analysis: Learning to Characterize Errors

no code implementations 13 Jan 2022 Tong Gao, Shivang Singh, Raymond J. Mooney

We propose a novel form of "meta learning" that automatically learns interpretable rules that characterize the types of errors that a system makes, and demonstrate these rules' ability to help understand and improve two NLP systems.

Common Sense Reasoning · Meta-Learning +2

Using Both Demonstrations and Language Instructions to Efficiently Learn Robotic Tasks

no code implementations 10 Oct 2022 Albert Yu, Raymond J. Mooney

To our knowledge, this is the first work to show that simultaneously conditioning a multi-task robotic manipulation policy on both demonstration and language embeddings improves sample efficiency and generalization over conditioning on either modality alone.

Entity-Focused Dense Passage Retrieval for Outside-Knowledge Visual Question Answering

no code implementations 18 Oct 2022 Jialin Wu, Raymond J. Mooney

To address these issues, we propose an Entity-Focused Retrieval (EnFoRe) model that provides stronger supervision during training and recognizes question-relevant entities to help retrieve more specific knowledge.

Passage Retrieval · Question Answering +2

Zero-shot Video Moment Retrieval With Off-the-Shelf Models

no code implementations 3 Nov 2022 Anuj Diwan, Puyuan Peng, Raymond J. Mooney

For the majority of the machine learning community, the expensive nature of collecting high-quality human-annotated data and the inability to efficiently finetune very large state-of-the-art pretrained models on limited compute are major bottlenecks for building models for new tasks.

Moment Retrieval · Retrieval

Language-guided Task Adaptation for Imitation Learning

no code implementations 24 Jan 2023 Prasoon Goyal, Raymond J. Mooney, Scott Niekum

We introduce a novel setting, wherein an agent needs to learn a task from a demonstration of a related task with the difference between the tasks communicated in natural language.

Imitation Learning
