1 code implementation • 20 Feb 2023 • Pengyu Nie, Rahul Banerjee, Junyi Jessy Li, Raymond J. Mooney, Milos Gligoric
We formalize the novel task of test completion: automatically completing the next statement in a test method based on the context of prior statements and the code under test.
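As a rough illustration of this setting (not the paper's system or data), consider a tiny pytest example in which the code under test plus the statements already written in the test method form the context, and the target is the next statement:

# Hypothetical test-completion example: the context is the code under test and
# the prior statements of the test method; the target is the next statement.

def add(a, b):
    # code under test
    return a + b

def test_add():
    result = add(2, 3)      # prior statement in the test method (context)
    assert result == 5      # next statement a test-completion model should predict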
no code implementations • 24 Jan 2023 • Prasoon Goyal, Raymond J. Mooney, Scott Niekum
We introduce a novel setting, wherein an agent needs to learn a task from a demonstration of a related task with the difference between the tasks communicated in natural language.
1 code implementation • 11 Nov 2022 • Sheena Panthaplackel, Milos Gligoric, Junyi Jessy Li, Raymond J. Mooney
Automatically fixing software bugs is a challenging task.
no code implementations • 3 Nov 2022 • Anuj Diwan, Puyuan Peng, Raymond J. Mooney
For much of the machine learning community, the expense of collecting high-quality human-annotated data and the inability to efficiently fine-tune very large state-of-the-art pretrained models on limited compute are major bottlenecks for building models for new tasks.
no code implementations • 18 Oct 2022 • Jialin Wu, Raymond J. Mooney
To address these issues, we propose an Entity-Focused Retrieval (EnFoRe) model that provides stronger supervision during training and recognizes question-relevant entities to help retrieve more specific knowledge.
no code implementations • 10 Oct 2022 • Albert Yu, Raymond J. Mooney
To our knowledge, this is the first work to show that simultaneously conditioning a multi-task robotic manipulation policy on both demonstration and language embeddings improves sample efficiency and generalization over conditioning on either modality alone.
no code implementations • 13 Jan 2022 • Tong Gao, Shivang Singh, Raymond J. Mooney
We propose a novel form of "meta learning" that automatically learns interpretable rules that characterize the types of errors that a system makes, and demonstrate these rules' ability to help understand and improve two NLP systems.
1 code implementation • Findings (ACL) 2022 • Sheena Panthaplackel, Junyi Jessy Li, Milos Gligoric, Raymond J. Mooney
When a software bug is reported, developers engage in a discussion to collaboratively resolve it.
1 code implementation • ACL 2022 • Pengyu Nie, Jiyang Zhang, Junyi Jessy Li, Raymond J. Mooney, Milos Gligoric
This may lead to evaluations that are inconsistent with the intended use cases.
no code implementations • 5 Jun 2021 • Prasoon Goyal, Raymond J. Mooney, Scott Niekum
Imitation learning and instruction-following are two common approaches to communicate a user's intent to a learning agent.
no code implementations • 24 Mar 2021 • Jiyang Zhang, Sheena Panthaplackel, Pengyu Nie, Raymond J. Mooney, Junyi Jessy Li, Milos Gligoric
Descriptive code comments are essential for supporting code comprehension and maintenance.
1 code implementation • 4 Oct 2020 • Sheena Panthaplackel, Junyi Jessy Li, Milos Gligoric, Raymond J. Mooney
For extrinsic evaluation, we show the usefulness of our approach by combining it with a comment update model to build a more comprehensive automatic comment maintenance system which can both detect and resolve inconsistent comments based on code changes.
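A minimal sketch of such a detect-then-update pipeline, assuming hypothetical detector and updater interfaces rather than the paper's actual models:

# Sketch only: `detector` and `updater` stand in for trained models and are
# hypothetical interfaces, not the paper's implementation.

def maintain_comment(old_comment, old_code, new_code, detector, updater):
    """Detect whether a comment became inconsistent with a code change and, if so, rewrite it."""
    if detector.is_inconsistent(old_comment, old_code, new_code):
        return updater.update(old_comment, old_code, new_code)
    return old_comment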
2 code implementations • AACL 2020 • Tong Gao, Qi Huang, Raymond J. Mooney
Systematic Generalization refers to a learning algorithm's ability to extrapolate learned behavior to unseen situations that are distinct but semantically similar to its training data.
1 code implementation • ICML Workshop LaReL 2020 • Prasoon Goyal, Scott Niekum, Raymond J. Mooney
Reinforcement learning (RL), particularly in sparse reward settings, often requires prohibitively large numbers of interactions with the environment, thereby limiting its applicability to complex problems.
no code implementations • 28 Jun 2020 • Jialin Wu, Liyan Chen, Raymond J. Mooney
Most recent state-of-the-art Visual Question Answering (VQA) systems are opaque black boxes that are only trained to fit the answer distribution given the question and visual content.
no code implementations • 26 Jun 2020 • Aishwarya Padmakumar, Raymond J. Mooney
Dialog systems research has primarily been focused around two main types of applications: task-oriented dialog systems that learn to use clarification to aid in understanding a goal, and open-ended dialog systems that are expected to carry out unconstrained "chit chat" conversations.
no code implementations • 9 Jun 2020 • Aishwarya Padmakumar, Raymond J. Mooney
Intelligent systems need to be able to recover from mistakes, resolve uncertainty, and adapt to novel concepts not seen during training.
1 code implementation • ACL 2020 • Sheena Panthaplackel, Pengyu Nie, Milos Gligoric, Junyi Jessy Li, Raymond J. Mooney
We formulate the novel task of automatically updating an existing natural language comment based on changes in the body of code it accompanies.
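A hypothetical before/after pair (not from the paper's data) illustrating the task: when the code changes, the accompanying comment should be revised to stay consistent.

def pick_old(values):
    """Return the maximum value in the list."""   # existing comment
    return max(values)

def pick_new(values):
    """Return the minimum value in the list."""   # updated comment a model should generate
    return min(values)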
no code implementations • 13 Dec 2019 • Sheena Panthaplackel, Milos Gligoric, Raymond J. Mooney, Junyi Jessy Li
Comments are an integral part of software development; they are natural language descriptions associated with source code elements.
no code implementations • 31 Oct 2019 • Jialin Wu, Raymond J. Mooney
Most RNN-based image captioning models receive supervision on the output words to mimic human captions.
no code implementations • ACL 2019 • Jialin Wu, Zeyuan Hu, Raymond J. Mooney
Visual question answering (VQA) and image captioning require a shared body of general knowledge connecting language and vision.
Ranked #31 on Visual Question Answering (VQA) on VQA v2 test-std
no code implementations • WS 2019 • Julia Strout, Ye Zhang, Raymond J. Mooney
Work on "learning with rationales" shows that humans providing explanations to a machine learning system can improve the system's predictive accuracy.
1 code implementation • NeurIPS 2019 • Jialin Wu, Raymond J. Mooney
Visual Question Answering (VQA) deep-learning systems tend to capture superficial statistical correlations in the training data because of strong language priors and fail to generalize to test data with a significantly different question-answer (QA) distribution.
Ranked #6 on Visual Question Answering (VQA) on VQA-CP
1 code implementation • 5 Mar 2019 • Prasoon Goyal, Scott Niekum, Raymond J. Mooney
A common approach to reduce interaction time with the environment is to use reward shaping, which involves carefully designing reward functions that provide the agent intermediate rewards for progress towards the goal.
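A minimal sketch of potential-based reward shaping, a standard formulation of this idea (not the specific shaping approach proposed in the paper):

# Standard potential-based shaping: add F = gamma * phi(s') - phi(s) to the
# environment reward, where phi is a hand-designed (or learned) potential function.

def shaped_reward(env_reward, state, next_state, phi, gamma=0.99):
    """Return the environment reward augmented with an intermediate shaping term."""
    return env_reward + gamma * phi(next_state) - phi(state)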
1 code implementation • 1 Mar 2019 • Jesse Thomason, Aishwarya Padmakumar, Jivko Sinapov, Nick Walker, Yuqian Jiang, Harel Yedidsion, Justin Hart, Peter Stone, Raymond J. Mooney
Natural language understanding for robotics can require substantial domain- and platform-specific engineering.
1 code implementation • WS 2019 • Jialin Wu, Raymond J. Mooney
AI systems' ability to explain their reasoning is critical to their utility and trustworthiness.
Ranked #5 on Explanatory Visual Question Answering on GQA-REX
no code implementations • EMNLP 2018 • Aishwarya Padmakumar, Peter Stone, Raymond J. Mooney
Active learning identifies data points to label that are expected to be the most useful in improving a supervised model.
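A minimal sketch of uncertainty sampling, one common criterion for choosing which points to label (illustrative only, not the query strategy studied in the paper); it assumes a scikit-learn-style classifier with a predict_proba method:

import numpy as np

def select_most_uncertain(model, unlabeled_X, k=10):
    """Return indices of the k unlabeled examples the model is least confident about."""
    probs = model.predict_proba(unlabeled_X)   # class probabilities, shape (n_examples, n_classes)
    confidence = probs.max(axis=1)             # confidence in the predicted class
    return np.argsort(confidence)[:k]          # least confident examples first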
no code implementations • 22 May 2018 • Jialin Wu, Zeyuan Hu, Raymond J. Mooney
Answering visual questions requires acquiring everyday commonsense knowledge and modeling the semantic connections among different parts of an image, which is difficult for VQA systems to learn from images with only answer supervision.
no code implementations • IJCNLP 2017 • Shobhit Chaurasia, Raymond J. Mooney
Generating computer code from natural language descriptions has been a long-standing problem.
1 code implementation • IJCNLP 2017 • Su Wang, Elisa Ferracane, Raymond J. Mooney
We explore techniques to maximize the effectiveness of discourse information in the task of authorship attribution.
no code implementations • EACL 2017 • Aishwarya Padmakumar, Jesse Thomason, Raymond J. Mooney
Natural language understanding and dialog management are two integral components of interactive dialog systems.
no code implementations • 27 May 2016 • Nazneen Fatema Rajani, Raymond J. Mooney
Ensembling methods are well known for improving prediction accuracy.
no code implementations • 16 Apr 2016 • Nazneen Fatema Rajani, Raymond J. Mooney
We present results on combining supervised and unsupervised methods to ensemble multiple systems for two popular Knowledge Base Population (KBP) tasks, Cold Start Slot Filling (CSSF) and Tri-lingual Entity Discovery and Linking (TEDL).
no code implementations • ACL 2016 • Karl Pichotta, Raymond J. Mooney
There is a small but growing body of research on statistical scripts, models of event sequences that allow probabilistic inference of implicit events from documents.
1 code implementation • CL 2016 • I. Beltagy, Stephen Roller, Pengxiang Cheng, Katrin Erk, Raymond J. Mooney
In this paper, we focus on the three components of a practical system integrating logical and distributional models: 1) parsing and task representation, the logic-based part in which input problems are represented in probabilistic logic.
no code implementations • 16 Jan 2014 • David L. Chen, Joohyun Kim, Raymond J. Mooney
We present a novel framework for learning to interpret and generate language using only perceptual context as supervision.