We formalize the novel task of test completion to automatically complete the next statement in a test method based on the context of prior statements and the code under test.
We introduce a novel setting, wherein an agent needs to learn a task from a demonstration of a related task with the difference between the tasks communicated in natural language.
For most of the machine learning community, the expense of collecting high-quality human-annotated data and the inability to efficiently finetune very large state-of-the-art pretrained models on limited compute are major bottlenecks for building models for new tasks.
To address these issues, we propose an Entity-Focused Retrieval (EnFoRe) model that provides stronger supervision during training and recognizes question-relevant entities to help retrieve more specific knowledge.
To our knowledge, this is the first work to show that simultaneously conditioning a multi-task robotic manipulation policy on both demonstration and language embeddings improves sample efficiency and generalization over conditioning on either modality alone.
We propose a novel form of "meta learning" that automatically learns interpretable rules that characterize the types of errors that a system makes, and demonstrate these rules' ability to help understand and improve two NLP systems.
Imitation learning and instruction-following are two common approaches to communicate a user's intent to a learning agent.
Descriptive code comments are essential for supporting code comprehension and maintenance.
For extrinsic evaluation, we show the usefulness of our approach by combining it with a comment update model to build a more comprehensive automatic comment maintenance system which can both detect and resolve inconsistent comments based on code changes.
Systematic Generalization refers to a learning algorithm's ability to extrapolate learned behavior to unseen situations that are distinct but semantically similar to its training data.
Reinforcement learning (RL), particularly in sparse reward settings, often requires prohibitively large numbers of interactions with the environment, thereby limiting its applicability to complex problems.
Most recent state-of-the-art Visual Question Answering (VQA) systems are opaque black boxes that are only trained to fit the answer distribution given the question and visual content.
Dialog systems research has primarily focused on two main types of applications - task-oriented dialog systems that learn to use clarification to aid in understanding a goal, and open-ended dialog systems that are expected to carry out unconstrained "chit chat" conversations.
Intelligent systems need to be able to recover from mistakes, resolve uncertainty, and adapt to novel concepts not seen during training.
We formulate the novel task of automatically updating an existing natural language comment based on changes in the body of code it accompanies.
Comments are an integral part of software development; they are natural language descriptions associated with source code elements.
Visual question answering (VQA) and image captioning require a shared body of general knowledge connecting language and vision.
Work on "learning with rationales" shows that humans providing explanations to a machine learning system can improve the system's predictive accuracy.
Visual Question Answering (VQA) deep-learning systems tend to capture superficial statistical correlations in the training data because of strong language priors and fail to generalize to test data with a significantly different question-answer (QA) distribution.
A common approach to reduce interaction time with the environment is to use reward shaping, which involves carefully designing reward functions that provide the agent with intermediate rewards for progress towards the goal.
Natural language understanding for robotics can require substantial domain- and platform-specific engineering.
Active learning identifies data points to label that are expected to be the most useful in improving a supervised model.
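A minimal sketch of how such points can be identified is pool-based uncertainty sampling, shown below; the function name and least-confidence criterion are illustrative assumptions, not the specific strategy of the work summarized above.

```python
# Minimal sketch of pool-based active learning with least-confidence
# sampling: label the examples the current model is least sure about.
def select_to_label(pool_probs, k=1):
    """Given per-example predicted class distributions, return the
    indices of the k examples with the lowest top-class probability."""
    confidences = [max(p) for p in pool_probs]
    ranked = sorted(range(len(pool_probs)), key=lambda i: confidences[i])
    return ranked[:k]

pool_probs = [
    [0.95, 0.05],  # model is confident
    [0.55, 0.45],  # model is uncertain -> most useful to label
    [0.80, 0.20],
]
select_to_label(pool_probs, k=1)  # -> [1]
```

Other acquisition functions (margin, entropy, expected model change) plug into the same loop by replacing the confidence score.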
Answering visual questions requires everyday commonsense knowledge and modeling of the semantic connections among different parts of an image, which is difficult for VQA systems to learn from images when answers are the only supervision.
Natural language understanding and dialog management are two integral components of interactive dialog systems.
We present results on combining supervised and unsupervised methods to ensemble multiple systems for two popular Knowledge Base Population (KBP) tasks, Cold Start Slot Filling (CSSF) and Tri-lingual Entity Discovery and Linking (TEDL).
There is a small but growing body of research on statistical scripts, models of event sequences that allow probabilistic inference of implicit events from documents.
In this paper, we focus on the three components of a practical system integrating logical and distributional models: 1) parsing and task representation, the logic-based component in which input problems are represented in probabilistic logic.
We present a novel framework for learning to interpret and generate language using only perceptual context as supervision.