Search Results for author: Dan Roth

Found 307 papers, 86 papers with code

What Do Users Care About? Detecting Actionable Insights from User Feedback

no code implementations NAACL (ACL) 2022 Kasturi Bhattacharjee, Rashmi Gangadharaiah, Kathleen McKeown, Dan Roth

Users often leave feedback on a myriad of aspects of a product which, if leveraged successfully, can yield useful insights that lead to further improvements down the line.

There’s a Time and Place for Reasoning Beyond the Image

1 code implementation ACL 2022 Xingyu Fu, Ben Zhou, Ishaan Chandratreya, Carl Vondrick, Dan Roth

Images are often more significant than only the pixels to human eyes, as we can infer, associate, and reason with contextual information from other sources to establish a more complete picture.

16k Image Clustering

Yes, No or IDK: The Challenge of Unanswerable Yes/No Questions

no code implementations NAACL 2022 Elior Sulem, Jamaal Hay, Dan Roth

For example, given the context “She married a lawyer from New-York.”, we don’t know whether the answer to the question “Did she marry in New York?” is “Yes” or “No”.

Natural Language Understanding RTE

New Frontiers of Information Extraction

no code implementations NAACL (ACL) 2022 Muhao Chen, Lifu Huang, Manling Li, Ben Zhou, Heng Ji, Dan Roth

This tutorial targets researchers and practitioners who are interested in AI and ML technologies for structural information extraction (IE) from unstructured textual sources.

Capturing the Content of a Document through Complex Event Identification

no code implementations *SEM (NAACL) 2022 Zheng Qi, Elior Sulem, Haoyu Wang, Xiaodong Yu, Dan Roth

We address this task as a pipeline, first predicting whether two granular events mentioned in the text belong to the same complex event, independently of their position in the text, and then using this to cluster them into complex events.
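
A minimal sketch of that two-stage pipeline, assuming a learned pairwise classifier (the `same_complex_event` callable below is a hypothetical stand-in for the paper's model) and using connected components as the clustering step:

```python
from itertools import combinations
import networkx as nx

def cluster_into_complex_events(events, same_complex_event):
    """Group granular event mentions into complex events."""
    g = nx.Graph()
    g.add_nodes_from(range(len(events)))
    for i, j in combinations(range(len(events)), 2):
        # Stage 1: position-independent pairwise decision on two mentions.
        if same_complex_event(events[i], events[j]):
            g.add_edge(i, j)
    # Stage 2: each connected component is treated as one complex event.
    return [sorted(c) for c in nx.connected_components(g)]
```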

Representation Learning

Few-Shot Novel Concept Learning for Semantic Parsing

no code implementations Findings (EMNLP) 2021 Soham Dan, Osbert Bastani, Dan Roth

This way the concept learning problem is naturally a program synthesis problem and our algorithm learns from a few examples to synthesize a program representing the novel concept.

Novel Concepts Program Synthesis +1

Compositional Data and Task Augmentation for Instruction Following

no code implementations Findings (EMNLP) 2021 Soham Dan, Xinran Han, Dan Roth

Executing natural language instructions in a physically grounded domain requires a model that understands both spatial concepts such as “left of” and “above”, and the compositional language used to identify landmarks and articulate instructions relative to them.

Instruction Following

On the Effects of Transformer Size on In- and Out-of-Domain Calibration

no code implementations Findings (EMNLP) 2021 Soham Dan, Dan Roth

To reduce the cost of training such large models, prior work has developed smaller, more compact models which achieve a significant speedup in training time while maintaining accuracy competitive with the original model on downstream tasks.

Do We Know What We Don’t Know? Studying Unanswerable Questions beyond SQuAD 2.0

no code implementations Findings (EMNLP) 2021 Elior Sulem, Jamaal Hay, Dan Roth

Understanding when a text snippet does not provide sought-after information is an essential part of natural language understanding.

RTE

PerKGQA: Question Answering over Personalized Knowledge Graphs

no code implementations Findings (NAACL) 2022 Ritam Dutt, Kasturi Bhattacharjee, Rashmi Gangadharaiah, Dan Roth, Carolyn Rose

The above concerns motivate our question answering setting over personalized knowledge graphs (PerKGQA), where each user has restricted access to their KG.

Knowledge Graphs Question Answering

Understanding the Extent to which Content Quality Metrics Measure the Information Quality of Summaries

no code implementations CoNLL (EMNLP) 2021 Daniel Deutsch, Dan Roth

Reference-based metrics such as ROUGE or BERTScore evaluate the content quality of a summary by comparing the summary to a reference.
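
For concreteness, a reference-based score such as ROUGE can be computed with the open-source rouge-score package (standard tooling, not code from this paper):

```python
# pip install rouge-score
from rouge_score import rouge_scorer

scorer = rouge_scorer.RougeScorer(["rouge1", "rougeL"], use_stemmer=True)
scores = scorer.score(
    target="the cat sat on the mat",            # human-written reference
    prediction="a cat was sitting on the mat",  # summary under evaluation
)
print(scores["rouge1"].fmeasure, scores["rougeL"].fmeasure)
```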

Question Answering

ESTER: A Machine Reading Comprehension Dataset for Reasoning about Event Semantic Relations

no code implementations EMNLP 2021 Rujun Han, I-Hung Hsu, Jiao Sun, Julia Baylon, Qiang Ning, Dan Roth, Nanyun Peng

While these tasks partially evaluate machines’ ability of narrative understanding, human-like reading comprehension requires the capability to process event-based information beyond arguments and temporal reasoning.

Machine Reading Comprehension Natural Language Queries +1

Building Low-Resource NER Models Using Non-Speaker Annotations

no code implementations NAACL (DaSH) 2021 Tatiana Tsygankova, Francesca Marini, Stephen Mayhew, Dan Roth

In low-resource natural language processing (NLP), the key problems are a lack of target language training data, and a lack of native speakers to create it.

Low Resource Named Entity Recognition named-entity-recognition +2

Quantifying Clinical Outcome Measures in Patients with Epilepsy Using the Electronic Health Record

no code implementations BioNLP (ACL) 2022 Kevin Xie, Brian Litt, Dan Roth, Colin A. Ellis

A wealth of important clinical information lies untouched in the Electronic Health Record, often in the form of unstructured textual documents.

Text Summarization

Can we Retrieve Everything All at Once? ARM: An Alignment-Oriented LLM-based Retrieval Method

no code implementations 30 Jan 2025 Peter Baile Chen, Yi Zhang, Michael Cafarella, Dan Roth

However, an LLM's decomposition of questions is unaware of what data is available and how it is organized, often leading to sub-optimal retrieval performance.

RAG Retrieval

ReFocus: Visual Editing as a Chain of Thought for Structured Image Understanding

no code implementations 9 Jan 2025 Xingyu Fu, Minqian Liu, Zhengyuan Yang, John Corring, Yijuan Lu, Jianwei Yang, Dan Roth, Dinei Florencio, Cha Zhang

ReFocus largely improves performance on all tasks over GPT-4o without visual editing, yielding an average gain of 11.0% on table tasks and 6.8% on chart tasks.

Visual Question Answering (VQA) Visual Reasoning

DiverseAgentEntropy: Quantifying Black-Box LLM Uncertainty through Diverse Perspectives and Multi-Agent Interaction

no code implementations 12 Dec 2024 Yu Feng, Phu Mon Htut, Zheng Qi, Wei Xiao, Manuel Mager, Nikolaos Pappas, Kishaloy Halder, Yang Li, Yassine Benajiba, Dan Roth

In this paper, we propose a novel method, DiverseAgentEntropy, for evaluating a model's uncertainty using multi-agent interaction under the assumption that if a model is certain, it should consistently recall the answer to the original query across a diverse collection of questions about the same original query.
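
A hedged sketch of the underlying consistency check; `ask_model` and the query set below are illustrative stand-ins, not the paper's multi-agent interaction protocol, which is more elaborate:

```python
from collections import Counter
from math import log

def answer_entropy(ask_model, diverse_queries):
    """Shannon entropy of the answers a model gives to varied questions
    about the same underlying fact; 0.0 means fully consistent."""
    answers = [ask_model(q) for q in diverse_queries]
    n = len(answers)
    return -sum((c / n) * log(c / n) for c in Counter(answers).values())
```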

Contextualized Evaluations: Taking the Guesswork Out of Language Model Evaluations

no code implementations 11 Nov 2024 Chaitanya Malaviya, Joseph Chee Chang, Dan Roth, Mohit Iyyer, Mark Yatskar, Kyle Lo

would depend on the user's preferences, and a good response to an open-ended query like "How do antibiotics work against bacteria?"

Language Modeling Language Modelling

Benchmarking LLM Guardrails in Handling Multilingual Toxicity

no code implementations 29 Oct 2024 Yahan Yang, Soham Dan, Dan Roth, Insup Lee

With the ubiquity of Large Language Models (LLMs), guardrails have become crucial to detect and defend against toxic content.

Benchmarking

ReasonAgain: Using Extractable Symbolic Programs to Evaluate Mathematical Reasoning

no code implementations 24 Oct 2024 Xiaodong Yu, Ben Zhou, Hao Cheng, Dan Roth

Existing math datasets evaluate the reasoning abilities of large language models (LLMs) by either using the final answer or the intermediate reasoning steps derived from static examples.

GSM8K Math +1

Open Domain Question Answering with Conflicting Contexts

no code implementations 16 Oct 2024 Siyi Liu, Qiang Ning, Kishaloy Halder, Wei Xiao, Zheng Qi, Phu Mon Htut, Yi Zhang, Neha Anna John, Bonan Min, Yassine Benajiba, Dan Roth

Open domain question answering systems frequently rely on information retrieved from large collections of text (such as the Web) to answer questions.

Open-Domain Question Answering

GIVE: Structured Reasoning with Knowledge Graph Inspired Veracity Extrapolation

no code implementations 11 Oct 2024 Jiashu He, Mingyu Derek Ma, Jinxuan Fan, Dan Roth, Wei Wang, Alejandro Ribeiro

Existing retrieval-based reasoning approaches for large language models (LLMs) heavily rely on the density and quality of the non-parametric knowledge source to provide domain knowledge and explicit reasoning chain.

Knowledge Graphs Response Generation +1

Beyond correlation: The Impact of Human Uncertainty in Measuring the Effectiveness of Automatic Evaluation and LLM-as-a-Judge

1 code implementation 3 Oct 2024 Aparna Elangovan, Lei Xu, Jongwoo Ko, Mahsa Elyasi, Ling Liu, Sravan Bodapati, Dan Roth

Specifically, we demonstrate that when the proportion of samples with variation or uncertainty in human-assigned labels is relatively high, machine labels (generated by automatic evaluation methods) may superficially appear to have similar or better correlation with the human majority label than the human-to-human (HH) correlation.

Synatra: Turning Indirect Knowledge into Direct Demonstrations for Digital Agents at Scale

no code implementations 24 Sep 2024 Tianyue Ou, Frank F. Xu, Aman Madaan, Jiarui Liu, Robert Lo, Abishek Sridhar, Sudipta Sengupta, Dan Roth, Graham Neubig, Shuyan Zhou

LLMs can now act as autonomous agents that interact with digital environments and complete specific objectives (e.g., arranging an online meeting).

Model Tells Itself Where to Attend: Faithfulness Meets Automatic Attention Steering

no code implementations 16 Sep 2024 Qingru Zhang, Xiaodong Yu, Chandan Singh, Xiaodong Liu, Liyuan Liu, Jianfeng Gao, Tuo Zhao, Dan Roth, Hao Cheng

However, they often struggle to fully comprehend and effectively utilize their input contexts, resulting in responses that are unfaithful or hallucinated.

MAPWise: Evaluating Vision-Language Models for Advanced Map Queries

no code implementations 30 Aug 2024 Srija Mukhopadhyay, Abhishek Rajgaria, Prerana Khatiwada, Vivek Gupta, Dan Roth

Vision-language models (VLMs) excel at tasks requiring joint understanding of visual and linguistic information.

Question Answering

Enhancing Temporal Understanding in LLMs for Semi-structured Tables

no code implementations 22 Jul 2024 Irwin Deng, Kushagra Dixit, Vivek Gupta, Dan Roth

We provide critical insights for improving LLM performance in temporal reasoning tasks with tabular data.

Question Answering

NTSEBENCH: Cognitive Reasoning Benchmark for Vision Language Models

no code implementations 15 Jul 2024 Pranshu Pandya, Vatsal Gupta, Agney S Talwarr, Tushar Kataria, Dan Roth, Vivek Gupta

Cognitive textual and visual reasoning tasks, including puzzles, series, and analogies, demand the ability to quickly reason, decipher, and evaluate patterns both textually and spatially.

Common Sense Reasoning Multiple-choice +1

On Characterizing and Mitigating Imbalances in Multi-Instance Partial Label Learning

no code implementations 13 Jul 2024 Kaifu Wang, Efthymia Tsamoura, Dan Roth

At the same time, the supervision signal is generated by a function $\sigma$ over the (hidden) gold labels of $\mathbf{x}$.
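
A toy instantiation of this setting, with $\sigma$ made concrete as addition (an illustrative assumption; the paper analyzes general $\sigma$):

```python
def make_weak_example(instances, hidden_gold_labels, sigma=sum):
    """The learner observes the bag and sigma(labels), never the labels."""
    return instances, sigma(hidden_gold_labels)

bag, signal = make_weak_example(["image_of_3", "image_of_5"], [3, 5])
assert signal == 8  # supervision says "the digits sum to 8", not (3, 5)
```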

Long-tail Learning Partial Label Learning +1

H-STAR: LLM-driven Hybrid SQL-Text Adaptive Reasoning on Tables

1 code implementation 29 Jun 2024 Nikhil Abhyankar, Vivek Gupta, Dan Roth, Chandan K. Reddy

Tabular reasoning involves interpreting natural language queries about tabular data, which presents a unique challenge of combining language understanding with structured data analysis.

Fact Verification Mathematical Reasoning +4

FlowVQA: Mapping Multimodal Logic in Visual Question Answering with Flowcharts

no code implementations 27 Jun 2024 Shubhankar Singh, Purvi Chaurasia, Yerram Varun, Pranshu Pandya, Vatsal Gupta, Vivek Gupta, Dan Roth

Existing benchmarks for visual question answering lack in visual grounding and complexity, particularly in evaluating spatial reasoning skills.

Decision Making Logical Reasoning +3

A Peek into Token Bias: Large Language Models Are Not Yet Genuine Reasoners

1 code implementation 16 Jun 2024 Bowen Jiang, Yangxinyu Xie, Zhuoqun Hao, Xiaomeng Wang, Tanwi Mallick, Weijie J. Su, Camillo J. Taylor, Dan Roth

This study introduces a hypothesis-testing framework to assess whether large language models (LLMs) possess genuine reasoning abilities or primarily depend on token bias.

Logical Reasoning

Visual Sketchpad: Sketching as a Visual Chain of Thought for Multimodal Language Models

no code implementations 13 Jun 2024 Yushi Hu, Weijia Shi, Xingyu Fu, Dan Roth, Mari Ostendorf, Luke Zettlemoyer, Noah A Smith, Ranjay Krishna

In this work, we introduce Sketchpad, a framework that gives multimodal LMs a visual sketchpad and tools to draw on the sketchpad.

Math object-detection +3

Commonsense-T2I Challenge: Can Text-to-Image Generation Models Understand Commonsense?

no code implementations 11 Jun 2024 Xingyu Fu, Muyu He, Yujie Lu, William Yang Wang, Dan Roth

We present a novel task and benchmark for evaluating the ability of text-to-image (T2I) generation models to produce images that align with commonsense in real life, which we call Commonsense-T2I.

Adversarial Text Text-to-Image Generation +1

ConSiDERS-The-Human Evaluation Framework: Rethinking Human Evaluation for Generative Large Language Models

no code implementations 28 May 2024 Aparna Elangovan, Ling Liu, Lei Xu, Sravan Bodapati, Dan Roth

In this position paper, we argue that human evaluation of generative large language models (LLMs) should be a multidisciplinary undertaking that draws upon insights from disciplines such as user experience research and human behavioral psychology to ensure that the experimental design and results are reliable.

Experimental Design

Devil's Advocate: Anticipatory Reflection for LLM Agents

no code implementations 25 May 2024 Haoyu Wang, Tao Li, Zhiwei Deng, Dan Roth, Yang Li

The experimental results suggest that our introspection-driven approach not only enhances the agent's ability to navigate unanticipated challenges through a robust mechanism of plan execution, but also improves efficiency by reducing by 45% the number of trials and plan revisions needed to achieve a task.

Navigate

BLINK: Multimodal Large Language Models Can See but Not Perceive

no code implementations 18 Apr 2024 Xingyu Fu, Yushi Hu, Bangzheng Li, Yu Feng, Haoyu Wang, Xudong Lin, Dan Roth, Noah A. Smith, Wei-Chiu Ma, Ranjay Krishna

We introduce Blink, a new benchmark for multimodal language models (LLMs) that focuses on core visual perception abilities not found in other evaluations.

Depth Estimation Multiple-choice +1

Fewer Truncations Improve Language Modeling

no code implementations 16 Apr 2024 Hantian Ding, Zijian Wang, Giovanni Paolini, Varun Kumar, Anoop Deoras, Dan Roth, Stefano Soatto

In large language model training, input documents are typically concatenated together and then split into sequences of equal length to avoid padding tokens.
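
The concatenate-and-split scheme described here, next to a simple best-fit packer that keeps documents whole, in the spirit of the paper's approach (a sketch under assumptions; documents longer than the sequence length would still need splitting):

```python
def concat_and_split(docs, seq_len):
    """Standard practice: one token stream cut every seq_len tokens,
    truncating documents wherever the cut happens to fall."""
    stream = [tok for doc in docs for tok in doc]
    return [stream[i:i + seq_len] for i in range(0, len(stream), seq_len)]

def best_fit_pack(docs, seq_len):
    """Pack whole documents into sequences of capacity seq_len."""
    bins = []
    for doc in sorted(docs, key=len, reverse=True):
        fitting = [b for b in bins if sum(map(len, b)) + len(doc) <= seq_len]
        if fitting:
            # Best fit: place the document in the fullest bin it fits into.
            min(fitting, key=lambda b: seq_len - sum(map(len, b))).append(doc)
        else:
            bins.append([doc])
    return bins
```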

Combinatorial Optimization Hallucination +5

Is Table Retrieval a Solved Problem? Exploring Join-Aware Multi-Table Retrieval

no code implementations 15 Apr 2024 Peter Baile Chen, Yi Zhang, Dan Roth

Retrieving relevant tables containing the necessary information to accurately answer a given question over tables is critical to open-domain question-answering (QA) systems.

Open-Domain Question Answering Re-Ranking +1

Conceptual and Unbiased Reasoning in Language Models

no code implementations 30 Mar 2024 Ben Zhou, Hongming Zhang, Sihao Chen, Dian Yu, Hongwei Wang, Baolin Peng, Dan Roth, Dong Yu

Conceptual reasoning, the ability to reason in abstract and high-level perspectives, is key to generalization in human cognition.

Decision Making

From Instructions to Constraints: Language Model Alignment with Automatic Constraint Verification

no code implementations 10 Mar 2024 Fei Wang, Chao Shang, Sarthak Jain, Shuai Wang, Qiang Ning, Bonan Min, Vittorio Castelli, Yassine Benajiba, Dan Roth

We investigate common constraints in NLP tasks, categorize them into three classes based on the types of their arguments, and propose a unified framework, ACT (Aligning to ConsTraints), to automatically produce supervision signals for user alignment with constraints.
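
A minimal sketch of how a constraint can be verified automatically to produce a supervision signal; the concrete constraints below are invented examples, not the paper's taxonomy:

```python
def constraint_signal(response: str, constraint: dict) -> float:
    """Return 1.0 if the response satisfies the constraint, else 0.0."""
    kind, value = constraint["type"], constraint["value"]
    if kind == "max_words":
        return float(len(response.split()) <= value)
    if kind == "must_mention":
        return float(value.lower() in response.lower())
    raise ValueError(f"unknown constraint type: {kind}")

# Verified outputs can then serve as positive examples for alignment.
signal = constraint_signal("Paris is the capital of France.",
                           {"type": "must_mention", "value": "Paris"})
```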

Abstractive Text Summarization Entity Typing +3

Evaluating LLMs' Mathematical Reasoning in Financial Document Question Answering

no code implementations 17 Feb 2024 Pragya Srivastava, Manuj Malik, Vivek Gupta, Tanuja Ganu, Dan Roth

Large Language Models (LLMs) excel in natural language understanding, but their capability for complex mathematical reasoning with an amalgamation of structured tables and unstructured text is uncertain.

Arithmetic Reasoning Mathematical Reasoning +2

DeAL: Decoding-time Alignment for Large Language Models

no code implementations 5 Feb 2024 James Y. Huang, Sailik Sengupta, Daniele Bonadiman, Yi-An Lai, Arshit Gupta, Nikolaos Pappas, Saab Mansour, Katrin Kirchhoff, Dan Roth

Current work focuses on alignment at model training time, through techniques such as Reinforcement Learning with Human Feedback (RLHF).

Code Representation Learning At Scale

no code implementations 2 Feb 2024 Dejiao Zhang, Wasi Ahmad, Ming Tan, Hantian Ding, Ramesh Nallapati, Dan Roth, Xiaofei Ma, Bing Xiang

Recent studies have shown that code language models at scale demonstrate significant performance gains on downstream tasks, i.e., code generation.

Code Generation Contrastive Learning +4

Deceptive Semantic Shortcuts on Reasoning Chains: How Far Can Models Go without Hallucination?

1 code implementation 16 Nov 2023 Bangzheng Li, Ben Zhou, Fei Wang, Xingyu Fu, Dan Roth, Muhao Chen

During the construction of the evidence, we purposefully replace semantic clues (entities) that may lead to the correct answer with distractor clues (evidence) that will not directly lead to the correct answer but require a chain-like reasoning process.

Hallucination Sentence

On the Calibration of Multilingual Question Answering LLMs

no code implementations 15 Nov 2023 Yahan Yang, Soham Dan, Dan Roth, Insup Lee

We also conduct several ablation experiments to study the effect of language distances, language corpus size, and model size on calibration, and how multilingual models compare with their monolingual counterparts for diverse tasks and languages.

Cross-Lingual Transfer Data Augmentation +4

Evaluating Concurrent Robustness of Language Models Across Diverse Challenge Sets

no code implementations 15 Nov 2023 Vatsal Gupta, Pranshu Pandya, Tushar Kataria, Vivek Gupta, Dan Roth

In this study, we introduce a methodology designed to examine how input perturbations affect language models across various scales, including pre-trained models and large language models (LLMs).

Sub-Sentence Encoder: Contrastive Learning of Propositional Semantic Representations

1 code implementation 7 Nov 2023 Sihao Chen, Hongming Zhang, Tong Chen, Ben Zhou, Wenhao Yu, Dian Yu, Baolin Peng, Hongwei Wang, Dan Roth, Dong Yu

We introduce sub-sentence encoder, a contrastively-learned contextual embedding model for fine-grained semantic representation of text.

Contrastive Learning Semantic Similarity +3

ReEval: Automatic Hallucination Evaluation for Retrieval-Augmented Large Language Models via Transferable Adversarial Attacks

no code implementations 19 Oct 2023 Xiaodong Yu, Hao Cheng, Xiaodong Liu, Dan Roth, Jianfeng Gao

Specifically, given the potential of data contamination (e.g., leading to memorization), good static benchmark performance does not ensure that a model can reliably use the provided evidence for responding, which is essential to avoid hallucination when the required knowledge is new or private.

Hallucination Hallucination Evaluation +6

SocREval: Large Language Models with the Socratic Method for Reference-Free Reasoning Evaluation

1 code implementation 29 Sep 2023 Hangfeng He, Hongming Zhang, Dan Roth

Existing reference-free reasoning evaluation metrics, while eliminating the need for human-crafted reasoning chains as references, often require fine-tuning with human-derived chains before evaluation, complicating the process and questioning their adaptability to other datasets.

ExpertQA: Expert-Curated Questions and Attributed Answers

4 code implementations 14 Sep 2023 Chaitanya Malaviya, Subin Lee, Sihao Chen, Elizabeth Sieber, Mark Yatskar, Dan Roth

In this work, we conduct human evaluation of responses from a few representative systems along various axes of attribution and factuality, by bringing domain experts in the loop.

Language Modeling Language Modelling

Few-Shot Data-to-Text Generation via Unified Representation and Multi-Source Learning

no code implementations 10 Aug 2023 Alexander Hanbo Li, Mingyue Shang, Evangelia Spiliopoulou, Jie Ma, Patrick Ng, Zhiguo Wang, Bonan Min, William Wang, Kathleen McKeown, Vittorio Castelli, Dan Roth, Bing Xiang

We present a novel approach for structured data-to-text generation that addresses the limitations of existing methods that primarily focus on specific types of structured data.

Data-to-Text Generation

Building Interpretable and Reliable Open Information Retriever for New Domains Overnight

no code implementations 9 Aug 2023 Xiaodong Yu, Ben Zhou, Dan Roth

Information retrieval (IR), or knowledge retrieval, is a critical component for many downstream tasks such as open-domain question answering (QA).

Information Retrieval Open-Domain Question Answering +3

On Regularization and Inference with Label Constraints

no code implementations 8 Jul 2023 Kaifu Wang, Hangfeng He, Tin D. Nguyen, Piyush Kumar, Dan Roth

Prior knowledge and symbolic rules in machine learning are often expressed in the form of label constraints, especially in structured prediction problems.

Structured Prediction

The Integer Linear Programming Inference Cookbook

no code implementations 30 Jun 2023 Vivek Srikumar, Dan Roth

At the end, we will see two worked examples to illustrate the use of these recipes.

Survey

Large Language Models as Sous Chefs: Revising Recipes with GPT-3

1 code implementation 24 Jun 2023 Alyssa Hwang, Bryan Li, Zhaoyi Hou, Dan Roth

With their remarkably improved text generation and prompting capabilities, large language models can adapt existing written information into forms that are easier to use and understand.

Text Generation

On Learning Latent Models with Multi-Instance Weak Supervision

no code implementations NeurIPS 2023 Kaifu Wang, Efthymia Tsamoura, Dan Roth

This condition non-trivially generalizes and relaxes the existing small ambiguity degree in the PLL literature, since we allow the transition to be deterministic.

Partial Label Learning Weakly-supervised Learning

Dynamic Clue Bottlenecks: Towards Interpretable-by-Design Visual Question Answering

no code implementations 24 May 2023 Xingyu Fu, Ben Zhou, Sihao Chen, Mark Yatskar, Dan Roth

We propose the Dynamic Clue Bottleneck Model (DCLUB), a method that is designed towards an inherently interpretable VQA system.

Question Answering Visual Question Answering

Taxonomy Expansion for Named Entity Recognition

no code implementations 22 May 2023 Karthikeyan K, Yogarshi Vyas, Jie Ma, Giovanni Paolini, Neha Anna John, Shuai Wang, Yassine Benajiba, Vittorio Castelli, Dan Roth, Miguel Ballesteros

We experiment with 6 diverse datasets and show that PLM consistently performs better than most other approaches (0.5 - 2.5 F1), including in novel settings for taxonomy expansion not considered in prior work.

named-entity-recognition Named Entity Recognition +2

Open-Domain Event Graph Induction for Mitigating Framing Bias

no code implementations 22 May 2023 Siyi Liu, Hongming Zhang, Hongwei Wang, Kaiqiang Song, Dan Roth, Dong Yu

However, none of the existing methods have explicitly addressed the issue of framing bias that is inherent in news articles.

Towards Corpus-Scale Discovery of Selection Biases in News Coverage: Comparing What Sources Say About Entities as a Start

no code implementations 6 Apr 2023 Sihao Chen, William Bruno, Dan Roth

To facilitate research in this domain, we propose and study a conceptual framework, where we compare how sources typically mention certain controversial entities, and use this as an indicator of the sources' content selection preferences.

Representation Learning

GLUECons: A Generic Benchmark for Learning Under Constraints

1 code implementation 16 Feb 2023 Hossein Rajaby Faghihi, Aliakbar Nafar, Chen Zheng, Roshanak Mirzaee, Yue Zhang, Andrzej Uszok, Alexander Wan, Tanawan Premsri, Dan Roth, Parisa Kordjamshidi

Recent research has shown that integrating domain knowledge into deep learning architectures is effective -- it helps reduce the amount of required data, improves the accuracy of the models' decisions, and improves the interpretability of models.

Conversation Style Transfer using Few-Shot Learning

no code implementations 16 Feb 2023 Shamik Roy, Raphael Shu, Nikolaos Pappas, Elman Mansimov, Yi Zhang, Saab Mansour, Dan Roth

Conventional text style transfer approaches focus on sentence-level style transfer without considering contextual information, and the style is described with attributes (e.g., formality).

Few-Shot Learning In-Context Learning +5

Rethinking with Retrieval: Faithful Large Language Model Inference

1 code implementation 31 Dec 2022 Hangfeng He, Hongming Zhang, Dan Roth

To address this issue, we propose a novel post-processing approach, rethinking with retrieval (RR), which retrieves relevant external knowledge based on the decomposed reasoning steps obtained from the chain-of-thought (CoT) prompting.
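
A hedged sketch of that loop; `llm`, `retrieve`, and `revise` are hypothetical stand-ins for a chain-of-thought model, an external knowledge retriever, and a step-level faithfulness check:

```python
def rethinking_with_retrieval(question, llm, retrieve, revise):
    # Decompose the answer into chain-of-thought reasoning steps.
    steps = llm(f"Answer step by step: {question}").split("\n")
    verified = []
    for step in steps:
        evidence = retrieve(step)                # knowledge for this step
        verified.append(revise(step, evidence))  # keep or fix the step
    # Produce the final answer from the retrieval-checked steps.
    return llm("Given these verified steps, answer the question.\n"
               + question + "\n" + "\n".join(verified))
```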

Language Modeling Language Modelling +4

PropSegmEnt: A Large-Scale Corpus for Proposition-Level Segmentation and Entailment Recognition

no code implementations 21 Dec 2022 Sihao Chen, Senaka Buthpitiya, Alex Fabrikant, Dan Roth, Tal Schuster

As these propositions can carry different truth values in the context of a given premise, we argue for the need to recognize the textual entailment relation of each proposition in a sentence individually.

Hallucination Natural Language Inference +2

CoCoMIC: Code Completion By Jointly Modeling In-file and Cross-file Context

1 code implementation 20 Dec 2022 Yangruibo Ding, Zijian Wang, Wasi Uddin Ahmad, Murali Krishna Ramanathan, Ramesh Nallapati, Parminder Bhatia, Dan Roth, Bing Xiang

While pre-trained language models (LMs) for code have achieved great success in code completion, they generate code conditioned only on the contents within the file, i.e., in-file context, but ignore the rich semantics in other files within the same project, i.e., cross-file context, a critical source of information that is especially useful in modern modular software development.

Code Completion

Generic Temporal Reasoning with Differential Analysis and Explanation

no code implementations 20 Dec 2022 Yu Feng, Ben Zhou, Haoyu Wang, Helen Jin, Dan Roth

Temporal reasoning is the task of predicting temporal relations of event pairs.

In and Out-of-Domain Text Adversarial Robustness via Label Smoothing

no code implementations 20 Dec 2022 Yahan Yang, Soham Dan, Dan Roth, Insup Lee

Recently it has been shown that state-of-the-art NLP models are vulnerable to adversarial attacks, where the predictions of a model can be drastically altered by slight modifications to the input (such as synonym substitutions).

Adversarial Robustness

ReCode: Robustness Evaluation of Code Generation Models

2 code implementations 20 Dec 2022 Shiqi Wang, Zheng Li, Haifeng Qian, Chenghao Yang, Zijian Wang, Mingyue Shang, Varun Kumar, Samson Tan, Baishakhi Ray, Parminder Bhatia, Ramesh Nallapati, Murali Krishna Ramanathan, Dan Roth, Bing Xiang

Most existing works on robustness in text or code tasks have focused on classification, while robustness in generation tasks is an uncharted area and to date there is no comprehensive benchmark for robustness in code generation.

Code Generation HumanEval

Privacy Adhering Machine Un-learning in NLP

no code implementations 19 Dec 2022 Vinayshekhar Bannihatti Kumar, Rashmi Gangadharaiah, Dan Roth

In several real-world industry applications that use Machine Learning to build models on user data, such mandates require significant effort in terms of both data cleansing and model retraining, while ensuring the models do not deteriorate in prediction quality due to the removal of data.

Machine Unlearning QQP

Rethinking the Role of Scale for In-Context Learning: An Interpretability-based Case Study at 66 Billion Scale

1 code implementation 18 Dec 2022 Hritik Bansal, Karthik Gopalakrishnan, Saket Dingliwal, Sravan Bodapati, Katrin Kirchhoff, Dan Roth

Using a 66-billion-parameter language model (OPT-66B) across a diverse set of 14 downstream tasks, we find this is indeed the case: $\sim$70% of attention heads and $\sim$20% of feed-forward networks can be removed with minimal decline in task performance.
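
The ablation primitive behind such a measurement, shown with Hugging Face's built-in head pruning on a small stand-in model (the paper's study ranks and removes components of OPT-66B; only the mechanics are illustrated here):

```python
from transformers import AutoModel

model = AutoModel.from_pretrained("bert-base-uncased")  # small stand-in
# Remove heads 0 and 1 of layer 0 and head 3 of layer 2, then re-run the
# downstream task to measure how much (or how little) performance drops.
model.prune_heads({0: [0, 1], 2: [3]})
```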

In-Context Learning Language Modeling +2

Investigating Fairness Disparities in Peer Review: A Language Model Enhanced Approach

1 code implementation 7 Nov 2022 Jiayao Zhang, Hongming Zhang, Zhun Deng, Dan Roth

We distill several insights from our analysis of the peer review process with the help of large LMs.

Fairness Language Modeling +2

Learning to Decompose: Hypothetical Question Decomposition Based on Comparable Texts

no code implementations 30 Oct 2022 Ben Zhou, Kyle Richardson, Xiaodong Yu, Dan Roth

Explicit decomposition modeling, which involves breaking down complex tasks into more straightforward and often more interpretable sub-tasks, has long been a central theme in developing robust and interpretable NLU systems.

Language Modeling Language Modelling +2

Multi-lingual Evaluation of Code Generation Models

2 code implementations 26 Oct 2022 Ben Athiwaratkun, Sanjay Krishna Gouda, Zijian Wang, Xiaopeng Li, Yuchen Tian, Ming Tan, Wasi Uddin Ahmad, Shiqi Wang, Qing Sun, Mingyue Shang, Sujan Kumar Gonugondla, Hantian Ding, Varun Kumar, Nathan Fulton, Arash Farahani, Siddhartha Jain, Robert Giaquinto, Haifeng Qian, Murali Krishna Ramanathan, Ramesh Nallapati, Baishakhi Ray, Parminder Bhatia, Sudipta Sengupta, Dan Roth, Bing Xiang

Using these benchmarks, we are able to assess the performance of code generation models in a multi-lingual fashion, and discover the generalization ability of language models on out-of-domain languages, the advantages of multi-lingual models over mono-lingual ones, the ability of few-shot prompting to teach the model new languages, and zero-shot translation abilities even in mono-lingual settings.

Code Completion Code Translation +2

On the Limitations of Reference-Free Evaluations of Generated Text

no code implementations 22 Oct 2022 Daniel Deutsch, Rotem Dror, Dan Roth

There is significant interest in developing evaluation metrics which accurately estimate the quality of generated text without the aid of a human-written reference text, which can be time-consuming and expensive to collect or entirely unavailable in online applications.

Machine Translation

Zero-Shot On-the-Fly Event Schema Induction

no code implementations 12 Oct 2022 Rotem Dror, Haoyu Wang, Dan Roth

The answers to these questions can be found by collecting many documents on the complex event of interest, extracting relevant information, and analyzing it.

CIKQA: Learning Commonsense Inference with a Unified Knowledge-in-the-loop QA Paradigm

no code implementations 12 Oct 2022 Hongming Zhang, Yintong Huo, Yanai Elazar, Yangqiu Song, Yoav Goldberg, Dan Roth

We first align commonsense tasks with relevant knowledge from commonsense knowledge bases and ask humans to annotate whether the knowledge is enough or not.

Question Answering Task 2

Instruction Tuning for Few-Shot Aspect-Based Sentiment Analysis

1 code implementation 12 Oct 2022 Siddharth Varia, Shuai Wang, Kishaloy Halder, Robert Vacareanu, Miguel Ballesteros, Yassine Benajiba, Neha Anna John, Rishita Anubhai, Smaranda Muresan, Dan Roth

Aspect-based Sentiment Analysis (ABSA) is a fine-grained sentiment analysis task which involves four elements from user-generated texts: aspect term, aspect category, opinion term, and sentiment polarity.

Aspect-Based Sentiment Analysis Aspect-Based Sentiment Analysis (ABSA) +3

Cross-Lingual Speaker Identification Using Distant Supervision

1 code implementation 11 Oct 2022 Ben Zhou, Dian Yu, Dong Yu, Dan Roth

Speaker identification, determining which character said each utterance in literary text, benefits many downstream tasks.

Language Modeling Language Modelling +1

Are All Steps Equally Important? Benchmarking Essentiality Detection of Events

no code implementations 8 Oct 2022 Haoyu Wang, Hongming Zhang, Yueguan Wang, Yuqian Deng, Muhao Chen, Dan Roth

In this paper, we address this gap by examining the extent to which current models comprehend the essentiality of step events in relation to a goal event.

Benchmarking

Unsupervised Neural Stylistic Text Generation using Transfer learning and Adapters

no code implementations 7 Oct 2022 Vinayshekhar Bannihatti Kumar, Rashmi Gangadharaiah, Dan Roth

Research has shown that personality is a key driver to improve engagement and user experience in conversational systems.

Decoder Response Generation +2

Beyond the Imitation Game: Quantifying and extrapolating the capabilities of language models

4 code implementations9 Jun 2022 Aarohi Srivastava, Abhinav Rastogi, Abhishek Rao, Abu Awal Md Shoeb, Abubakar Abid, Adam Fisch, Adam R. Brown, Adam Santoro, Aditya Gupta, Adrià Garriga-Alonso, Agnieszka Kluska, Aitor Lewkowycz, Akshat Agarwal, Alethea Power, Alex Ray, Alex Warstadt, Alexander W. Kocurek, Ali Safaya, Ali Tazarv, Alice Xiang, Alicia Parrish, Allen Nie, Aman Hussain, Amanda Askell, Amanda Dsouza, Ambrose Slone, Ameet Rahane, Anantharaman S. Iyer, Anders Andreassen, Andrea Madotto, Andrea Santilli, Andreas Stuhlmüller, Andrew Dai, Andrew La, Andrew Lampinen, Andy Zou, Angela Jiang, Angelica Chen, Anh Vuong, Animesh Gupta, Anna Gottardi, Antonio Norelli, Anu Venkatesh, Arash Gholamidavoodi, Arfa Tabassum, Arul Menezes, Arun Kirubarajan, Asher Mullokandov, Ashish Sabharwal, Austin Herrick, Avia Efrat, Aykut Erdem, Ayla Karakaş, B. Ryan Roberts, Bao Sheng Loe, Barret Zoph, Bartłomiej Bojanowski, Batuhan Özyurt, Behnam Hedayatnia, Behnam Neyshabur, Benjamin Inden, Benno Stein, Berk Ekmekci, Bill Yuchen Lin, Blake Howald, Bryan Orinion, Cameron Diao, Cameron Dour, Catherine Stinson, Cedrick Argueta, César Ferri Ramírez, Chandan Singh, Charles Rathkopf, Chenlin Meng, Chitta Baral, Chiyu Wu, Chris Callison-Burch, Chris Waites, Christian Voigt, Christopher D. Manning, Christopher Potts, Cindy Ramirez, Clara E. Rivera, Clemencia Siro, Colin Raffel, Courtney Ashcraft, Cristina Garbacea, Damien Sileo, Dan Garrette, Dan Hendrycks, Dan Kilman, Dan Roth, Daniel Freeman, Daniel Khashabi, Daniel Levy, Daniel Moseguí González, Danielle Perszyk, Danny Hernandez, Danqi Chen, Daphne Ippolito, Dar Gilboa, David Dohan, David Drakard, David Jurgens, Debajyoti Datta, Deep Ganguli, Denis Emelin, Denis Kleyko, Deniz Yuret, Derek Chen, Derek Tam, Dieuwke Hupkes, Diganta Misra, Dilyar Buzan, Dimitri Coelho Mollo, Diyi Yang, Dong-Ho Lee, Dylan Schrader, Ekaterina Shutova, Ekin Dogus Cubuk, Elad Segal, Eleanor Hagerman, Elizabeth Barnes, Elizabeth Donoway, Ellie Pavlick, Emanuele Rodola, Emma Lam, Eric Chu, Eric Tang, Erkut Erdem, Ernie Chang, Ethan A. Chi, Ethan Dyer, Ethan Jerzak, Ethan Kim, Eunice Engefu Manyasi, Evgenii Zheltonozhskii, Fanyue Xia, Fatemeh Siar, Fernando Martínez-Plumed, Francesca Happé, Francois Chollet, Frieda Rong, Gaurav Mishra, Genta Indra Winata, Gerard de Melo, Germán Kruszewski, Giambattista Parascandolo, Giorgio Mariani, Gloria Wang, Gonzalo Jaimovitch-López, Gregor Betz, Guy Gur-Ari, Hana Galijasevic, Hannah Kim, Hannah Rashkin, Hannaneh Hajishirzi, Harsh Mehta, Hayden Bogar, Henry Shevlin, Hinrich Schütze, Hiromu Yakura, Hongming Zhang, Hugh Mee Wong, Ian Ng, Isaac Noble, Jaap Jumelet, Jack Geissinger, Jackson Kernion, Jacob Hilton, Jaehoon Lee, Jaime Fernández Fisac, James B. Simon, James Koppel, James Zheng, James Zou, Jan Kocoń, Jana Thompson, Janelle Wingfield, Jared Kaplan, Jarema Radom, Jascha Sohl-Dickstein, Jason Phang, Jason Wei, Jason Yosinski, Jekaterina Novikova, Jelle Bosscher, Jennifer Marsh, Jeremy Kim, Jeroen Taal, Jesse Engel, Jesujoba Alabi, Jiacheng Xu, Jiaming Song, Jillian Tang, Joan Waweru, John Burden, John Miller, John U. Balis, Jonathan Batchelder, Jonathan Berant, Jörg Frohberg, Jos Rozen, Jose Hernandez-Orallo, Joseph Boudeman, Joseph Guerr, Joseph Jones, Joshua B. Tenenbaum, Joshua S. Rule, Joyce Chua, Kamil Kanclerz, Karen Livescu, Karl Krauth, Karthik Gopalakrishnan, Katerina Ignatyeva, Katja Markert, Kaustubh D. 
Dhole, Kevin Gimpel, Kevin Omondi, Kory Mathewson, Kristen Chiafullo, Ksenia Shkaruta, Kumar Shridhar, Kyle McDonell, Kyle Richardson, Laria Reynolds, Leo Gao, Li Zhang, Liam Dugan, Lianhui Qin, Lidia Contreras-Ochando, Louis-Philippe Morency, Luca Moschella, Lucas Lam, Lucy Noble, Ludwig Schmidt, Luheng He, Luis Oliveros Colón, Luke Metz, Lütfi Kerem Şenel, Maarten Bosma, Maarten Sap, Maartje ter Hoeve, Maheen Farooqi, Manaal Faruqui, Mantas Mazeika, Marco Baturan, Marco Marelli, Marco Maru, Maria Jose Ramírez Quintana, Marie Tolkiehn, Mario Giulianelli, Martha Lewis, Martin Potthast, Matthew L. Leavitt, Matthias Hagen, Mátyás Schubert, Medina Orduna Baitemirova, Melody Arnaud, Melvin McElrath, Michael A. Yee, Michael Cohen, Michael Gu, Michael Ivanitskiy, Michael Starritt, Michael Strube, Michał Swędrowski, Michele Bevilacqua, Michihiro Yasunaga, Mihir Kale, Mike Cain, Mimee Xu, Mirac Suzgun, Mitch Walker, Mo Tiwari, Mohit Bansal, Moin Aminnaseri, Mor Geva, Mozhdeh Gheini, Mukund Varma T, Nanyun Peng, Nathan A. Chi, Nayeon Lee, Neta Gur-Ari Krakover, Nicholas Cameron, Nicholas Roberts, Nick Doiron, Nicole Martinez, Nikita Nangia, Niklas Deckers, Niklas Muennighoff, Nitish Shirish Keskar, Niveditha S. Iyer, Noah Constant, Noah Fiedel, Nuan Wen, Oliver Zhang, Omar Agha, Omar Elbaghdadi, Omer Levy, Owain Evans, Pablo Antonio Moreno Casares, Parth Doshi, Pascale Fung, Paul Pu Liang, Paul Vicol, Pegah Alipoormolabashi, Peiyuan Liao, Percy Liang, Peter Chang, Peter Eckersley, Phu Mon Htut, Pinyu Hwang, Piotr Miłkowski, Piyush Patil, Pouya Pezeshkpour, Priti Oli, Qiaozhu Mei, Qing Lyu, Qinlang Chen, Rabin Banjade, Rachel Etta Rudolph, Raefer Gabriel, Rahel Habacker, Ramon Risco, Raphaël Millière, Rhythm Garg, Richard Barnes, Rif A. Saurous, Riku Arakawa, Robbe Raymaekers, Robert Frank, Rohan Sikand, Roman Novak, Roman Sitelew, Ronan LeBras, Rosanne Liu, Rowan Jacobs, Rui Zhang, Ruslan Salakhutdinov, Ryan Chi, Ryan Lee, Ryan Stovall, Ryan Teehan, Rylan Yang, Sahib Singh, Saif M. Mohammad, Sajant Anand, Sam Dillavou, Sam Shleifer, Sam Wiseman, Samuel Gruetter, Samuel R. Bowman, Samuel S. Schoenholz, Sanghyun Han, Sanjeev Kwatra, Sarah A. Rous, Sarik Ghazarian, Sayan Ghosh, Sean Casey, Sebastian Bischoff, Sebastian Gehrmann, Sebastian Schuster, Sepideh Sadeghi, Shadi Hamdan, Sharon Zhou, Shashank Srivastava, Sherry Shi, Shikhar Singh, Shima Asaadi, Shixiang Shane Gu, Shubh Pachchigar, Shubham Toshniwal, Shyam Upadhyay, Shyamolima, Debnath, Siamak Shakeri, Simon Thormeyer, Simone Melzi, Siva Reddy, Sneha Priscilla Makini, Soo-Hwan Lee, Spencer Torene, Sriharsha Hatwar, Stanislas Dehaene, Stefan Divic, Stefano Ermon, Stella Biderman, Stephanie Lin, Stephen Prasad, Steven T. Piantadosi, Stuart M. 
Shieber, Summer Misherghi, Svetlana Kiritchenko, Swaroop Mishra, Tal Linzen, Tal Schuster, Tao Li, Tao Yu, Tariq Ali, Tatsu Hashimoto, Te-Lin Wu, Théo Desbordes, Theodore Rothschild, Thomas Phan, Tianle Wang, Tiberius Nkinyili, Timo Schick, Timofei Kornev, Titus Tunduny, Tobias Gerstenberg, Trenton Chang, Trishala Neeraj, Tushar Khot, Tyler Shultz, Uri Shaham, Vedant Misra, Vera Demberg, Victoria Nyamai, Vikas Raunak, Vinay Ramasesh, Vinay Uday Prabhu, Vishakh Padmakumar, Vivek Srikumar, William Fedus, William Saunders, William Zhang, Wout Vossen, Xiang Ren, Xiaoyu Tong, Xinran Zhao, Xinyi Wu, Xudong Shen, Yadollah Yaghoobzadeh, Yair Lakretz, Yangqiu Song, Yasaman Bahri, Yejin Choi, Yichi Yang, Yiding Hao, Yifu Chen, Yonatan Belinkov, Yu Hou, Yufang Hou, Yuntao Bai, Zachary Seid, Zhuoye Zhao, Zijian Wang, Zijie J. Wang, ZiRui Wang, Ziyi Wu

BIG-bench focuses on tasks that are believed to be beyond the capabilities of current language models.

Common Sense Reasoning Math +1

Repro: An Open-Source Library for Improving the Reproducibility and Usability of Publicly Available Research Code

1 code implementation 29 Apr 2022 Daniel Deutsch, Dan Roth

We introduce Repro, an open-source library which aims at improving the reproducibility and usability of research code.

Re-Examining System-Level Correlations of Automatic Summarization Evaluation Metrics

no code implementations NAACL 2022 Daniel Deutsch, Rotem Dror, Dan Roth

How reliably an automatic summarization evaluation metric replicates human judgments of summary quality is quantified by system-level correlations.
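
Concretely, a system-level correlation averages the metric per system and correlates those averages with per-system average human judgments (a standard computation, sketched with illustrative inputs):

```python
from scipy.stats import pearsonr

def system_level_correlation(metric_scores, human_scores):
    """Both arguments map each system name to its per-summary scores."""
    systems = sorted(metric_scores)
    metric_avg = [sum(metric_scores[s]) / len(metric_scores[s]) for s in systems]
    human_avg = [sum(human_scores[s]) / len(human_scores[s]) for s in systems]
    return pearsonr(metric_avg, human_avg)[0]  # correlation coefficient
```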

Benchmarking Answer Verification Methods for Question Answering-Based Summarization Evaluation Metrics

no code implementations Findings (ACL) 2022 Daniel Deutsch, Dan Roth

Question answering-based summarization evaluation metrics must automatically determine whether the QA model's prediction is correct or not, a task known as answer verification.

Attribute Benchmarking +1

Label Semantic Aware Pre-training for Few-shot Text Classification

1 code implementation ACL 2022 Aaron Mueller, Jason Krone, Salvatore Romeo, Saab Mansour, Elman Mansimov, Yi Zhang, Dan Roth

Label semantic aware systems have leveraged this information for improved text classification performance during fine-tuning and prediction.

Few-Shot Text Classification Sentence +2

DQ-BART: Efficient Sequence-to-Sequence Model via Joint Distillation and Quantization

2 code implementations ACL 2022 Zheng Li, Zijian Wang, Ming Tan, Ramesh Nallapati, Parminder Bhatia, Andrew Arnold, Bing Xiang, Dan Roth

Empirical analyses show that, despite the challenging nature of generative tasks, we were able to achieve a 16.5x model footprint compression ratio with little performance drop relative to the full-precision counterparts on multiple summarization and QA datasets.

Knowledge Distillation Model Compression +2

There is a Time and Place for Reasoning Beyond the Image

1 code implementation 1 Mar 2022 Xingyu Fu, Ben Zhou, Ishaan Preetam Chandratreya, Carl Vondrick, Dan Roth

For example, in Figure 1, we can find a way to identify the news articles related to the picture through segment-wise understandings of the signs, the buildings, the crowds, and more.

16k Image Clustering +1

Understanding Robust Generalization in Learning Regular Languages

no code implementations 20 Feb 2022 Soham Dan, Osbert Bastani, Dan Roth

Currently, deep neural networks struggle to generalize robustly to such shifts in the data distribution.

ROCK: Causal Inference Principles for Reasoning about Commonsense Causality

1 code implementation 31 Jan 2022 Jiayao Zhang, Hongming Zhang, Weijie J. Su, Dan Roth

Commonsense causality reasoning (CCR) aims at identifying plausible causes and effects in natural language descriptions that are deemed reasonable by an average person.

Causal Inference

Neuro-Symbolic Language Modeling with Automaton-augmented Retrieval

2 code implementations 28 Jan 2022 Uri Alon, Frank F. Xu, Junxian He, Sudipta Sengupta, Dan Roth, Graham Neubig

Retrieval-based language models (R-LM) model the probability of natural language text by combining a standard language model (LM) with examples retrieved from an external datastore at test time.
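
The combination reduces to interpolating next-token distributions, as in the standard kNN-LM mixture sketched below (the paper's contribution, an automaton that makes the retrieval side cheap, is not shown):

```python
import numpy as np

def interpolate(p_lm: np.ndarray, p_retrieval: np.ndarray, lam: float = 0.25) -> np.ndarray:
    """p(w | ctx) = lam * p_retrieval(w | ctx) + (1 - lam) * p_lm(w | ctx)."""
    return lam * p_retrieval + (1.0 - lam) * p_lm
```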

Language Modeling Language Modelling +1

Event Linking: Grounding Event Mentions to Wikipedia

1 code implementation 15 Dec 2021 Xiaodong Yu, Wenpeng Yin, Nitish Gupta, Dan Roth

Third, we retrain and evaluate two state-of-the-art (SOTA) entity linking models, showing the challenges of event linking, and we propose an event-specific linking system EVELINK to set a competitive result for the new task.

Entity Linking Natural Language Understanding

Learning Constraints and Descriptive Segmentation for Subevent Detection

no code implementations EMNLP 2021 Haoyu Wang, Hongming Zhang, Muhao Chen, Dan Roth

The task of subevent detection aims to resolve this granularity issue, recognizing the membership of multi-granular events in event complexes.

Descriptive Text Segmentation

What is Your Article Based On? Inferring Fine-grained Provenance

no code implementations ACL 2021 Yi Zhang, Zachary Ives, Dan Roth

We experiment with a newly created evaluation dataset, Politi-Prov, based on fact-checking articles from \url{www.politifact.com}; our experimental results show that our solution leads to a significant improvement over baselines.

Fact Checking Sentence

Zero-shot Event Extraction via Transfer Learning: Challenges and Insights

no code implementations ACL 2021 Qing Lyu, Hongming Zhang, Elior Sulem, Dan Roth

Event extraction has long been a challenging task, addressed mostly with supervised methods that require expensive annotation and are not extensible to new event ontologies.

Natural Language Inference Question Answering +2

Event-Centric Natural Language Processing

no code implementations ACL 2021 Muhao Chen, Hongming Zhang, Qiang Ning, Manling Li, Heng Ji, Kathleen McKeown, Dan Roth

This tutorial targets researchers and practitioners who are interested in AI technologies that help machines understand natural language text, particularly real-world events described in the text.

MultiOpEd: A Corpus of Multi-Perspective News Editorials

1 code implementation NAACL 2021 Siyi Liu, Sihao Chen, Xander Uyttendaele, Dan Roth

We propose MultiOpEd, an open-domain news editorial corpus that supports various tasks pertaining to the argumentation structure in news editorials, focusing on automatic perspective discovery.

Multi-Task Learning Sentence

Event Time Extraction and Propagation via Graph Attention Networks

1 code implementation NAACL 2021 Haoyang Wen, Yanru Qu, Heng Ji, Qiang Ning, Jiawei Han, Avi Sil, Hanghang Tong, Dan Roth

Grounding events into a precise timeline is important for natural language understanding but has received limited attention in recent work.

Graph Attention Natural Language Understanding +3

Learning to Decompose and Organize Complex Tasks

1 code implementation NAACL 2021 Yi Zhang, Sujay Kumar Jauhar, Julia Kiseleva, Ryen White, Dan Roth

Both components of our graph induction solution are evaluated in experiments, demonstrating that our models outperform a state-of-the-art text generator significantly.

Management

Generalization in Instruction Following Systems

no code implementations NAACL 2021 Soham Dan, Michael Zhou, Dan Roth

Understanding and executing natural language instructions in a grounded domain is one of the hallmarks of artificial intelligence.

Data Augmentation Instruction Following

Weighted Training for Cross-Task Learning

1 code implementation ICLR 2022 Shuxiao Chen, Koby Crammer, Hangfeng He, Dan Roth, Weijie J. Su

In this paper, we introduce Target-Aware Weighted Training (TAWT), a weighted training algorithm for cross-task learning based on minimizing a representation-based task distance between the source and target tasks.

Chunking named-entity-recognition +6

Toward Code Generation: A Survey and Lessons from Semantic Parsing

no code implementations 26 Apr 2021 Celine Lee, Justin Gottschlich, Dan Roth

With the growth of natural language processing techniques and demand for improved software engineering efficiency, there is an emerging interest in translating intention from human languages to programming languages.

Code Generation Program Synthesis +2

Improving Faithfulness in Abstractive Summarization with Contrast Candidate Generation and Selection

no code implementations NAACL 2021 Sihao Chen, Fan Zhang, Kazoo Sone, Dan Roth

Despite significant progress in neural abstractive summarization, recent studies have shown that the current models are prone to generating summaries that are unfaithful to the original context.

Abstractive Text Summarization Hallucination