Search Results for author: Eduard Hovy

Found 176 papers, 78 papers with code

What a Nasty day: Exploring Mood-Weather Relationship from Twitter

no code implementations 30 Oct 2014 Jiwei Li, Xun Wang, Eduard Hovy

While it has long been believed in psychology that weather somehow influences human mood, debates have gone on for decades about how the two are correlated.

Retrofitting Word Vectors to Semantic Lexicons

2 code implementations HLT 2015 Manaal Faruqui, Jesse Dodge, Sujay K. Jauhar, Chris Dyer, Eduard Hovy, Noah A. Smith

Vector space word representations are learned from distributional information of words in large corpora.
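
The retrofitting procedure itself is not shown in this listing; as a hedged sketch of the commonly described update for this method (toy vectors, a hypothetical lexicon, and illustrative hyperparameters), each vector is repeatedly pulled toward its semantic-lexicon neighbours while staying close to its original distributional vector:

```python
import numpy as np

def retrofit(vectors, lexicon, alpha=1.0, beta=1.0, iters=10):
    """Sketch of iterative retrofitting: blend each word's original vector
    with the current vectors of its lexicon neighbours."""
    new_vecs = {w: v.copy() for w, v in vectors.items()}
    for _ in range(iters):
        for word, neighbours in lexicon.items():
            nbrs = [n for n in neighbours if n in new_vecs]
            if word not in new_vecs or not nbrs:
                continue
            total = alpha * vectors[word] + beta * sum(new_vecs[n] for n in nbrs)
            new_vecs[word] = total / (alpha + beta * len(nbrs))
    return new_vecs

# toy usage with random 5-dimensional vectors and a tiny synonym lexicon
rng = np.random.default_rng(0)
vecs = {w: rng.normal(size=5) for w in ["happy", "glad", "joyful"]}
retrofitted = retrofit(vecs, {"happy": ["glad", "joyful"], "glad": ["happy"]})
```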

The NLP Engine: A Universal Turing Machine for NLP

no code implementations 28 Feb 2015 Jiwei Li, Eduard Hovy

It is commonly accepted that machine translation is a more complex task than part of speech tagging.

Machine Translation Part-Of-Speech Tagging +1

Visualizing and Understanding Neural Models in NLP

1 code implementation NAACL 2016 Jiwei Li, Xinlei Chen, Eduard Hovy, Dan Jurafsky

While neural networks have been successfully applied to many NLP tasks, the resulting vector-based models are very difficult to interpret.
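
One simple inspection strategy in the spirit of this line of work is first-derivative saliency; the sketch below (a toy embedding-plus-linear model with made-up token ids, not the paper's models) computes a per-token saliency map from gradients:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
emb = nn.Embedding(10, 8)            # toy vocabulary of 10 types, 8-d embeddings
clf = nn.Linear(8, 2)                # stand-in for a real NLP model

tokens = torch.tensor([1, 4, 7])     # hypothetical token ids for a short input
e = emb(tokens)                      # (3, 8) token embeddings
e.retain_grad()                      # keep gradients on this intermediate tensor
score = clf(e.mean(dim=0))[1]        # score assigned to class 1
score.backward()

saliency = e.grad.abs().sum(dim=1)   # per-token first-derivative saliency
print(saliency)
```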

Negation Sentence

Reflections on Sentiment/Opinion Analysis

no code implementations 6 Jul 2015 Jiwei Li, Eduard Hovy

In this paper, we describe possible directions for deeper understanding, helping to bridge the gap between psychology/cognitive science and computational approaches in the sentiment/opinion analysis literature.

TabMCQ: A Dataset of General Knowledge Tables and Multiple-choice Questions

no code implementations 12 Feb 2016 Sujay Kumar Jauhar, Peter Turney, Eduard Hovy

We describe two new related resources that facilitate modelling of general knowledge reasoning in 4th grade science exams.

General Knowledge Multiple-choice +1

End-to-end Sequence Labeling via Bi-directional LSTM-CNNs-CRF

25 code implementations ACL 2016 Xuezhe Ma, Eduard Hovy

State-of-the-art sequence labeling systems traditionally require large amounts of task-specific knowledge in the form of hand-crafted features and data pre-processing.
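
The model itself is not reproduced in this listing; as a small, hedged illustration of the CRF decoding step that BiLSTM-CNN-CRF taggers end with, here is a generic Viterbi decoder over toy emission and transition scores:

```python
import numpy as np

def viterbi_decode(emissions, transitions):
    """Best tag sequence given emission scores (T x K) and transition scores (K x K),
    where transitions[i, j] scores moving from tag i to tag j."""
    T, K = emissions.shape
    score = emissions[0].copy()
    backptr = np.zeros((T, K), dtype=int)
    for t in range(1, T):
        cand = score[:, None] + transitions + emissions[t][None, :]
        backptr[t] = cand.argmax(axis=0)   # best previous tag for each current tag
        score = cand.max(axis=0)
    best = [int(score.argmax())]
    for t in range(T - 1, 0, -1):
        best.append(int(backptr[t, best[-1]]))
    return best[::-1]

# toy example: 4 time steps, 3 tags (scores would normally come from the BiLSTM)
rng = np.random.default_rng(0)
print(viterbi_decode(rng.normal(size=(4, 3)), rng.normal(size=(3, 3))))
```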

Feature Engineering Named Entity Recognition +3

Unsupervised Ranking Model for Entity Coreference Resolution

no code implementations NAACL 2016 Xuezhe Ma, Zhengzhong Liu, Eduard Hovy

Coreference resolution is one of the first stages in deep language understanding and its importance has been well recognized in the natural language processing community.

coreference-resolution

Harnessing Deep Neural Networks with Logic Rules

2 code implementations ACL 2016 Zhiting Hu, Xuezhe Ma, Zhengzhong Liu, Eduard Hovy, Eric Xing

Combining deep neural networks with structured logic rules is desirable to harness flexibility and reduce uninterpretability of the neural models.

named-entity-recognition Named Entity Recognition +2

Dropout with Expectation-linear Regularization

no code implementations 26 Sep 2016 Xuezhe Ma, Yingkai Gao, Zhiting Hu, Yao-Liang Yu, Yuntian Deng, Eduard Hovy

Algorithmically, we show that our proposed measure of the inference gap can be used to regularize the standard dropout training objective, resulting in an \emph{explicit} control of the gap.

Image Classification

Neural Probabilistic Model for Non-projective MST Parsing

no code implementations IJCNLP 2017 Xuezhe Ma, Eduard Hovy

In this paper, we propose a probabilistic parsing model, which defines a proper conditional probability distribution over non-projective dependency trees for a given sentence, using neural representations as inputs.
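
The normalizer over all non-projective trees that such a probabilistic parser needs can be obtained with the Matrix-Tree theorem; the numpy sketch below uses a generic construction with random toy scores (an assumption about the computation, not the paper's code):

```python
import numpy as np

def tree_partition_function(edge_w, root_w):
    """Sum of weights over all non-projective dependency trees.

    edge_w[h, m]: weight (exp of score) for attaching word m to head word h
    root_w[m]:    weight for attaching word m directly to the artificial root
    """
    W = edge_w.copy()
    np.fill_diagonal(W, 0.0)
    L = np.diag(W.sum(axis=0)) - W   # graph Laplacian of the weighted arc graph
    L_hat = L.copy()
    L_hat[0, :] = root_w             # replace the first row with root weights
    return np.linalg.det(L_hat)

# toy sentence of 3 words with random arc scores from a hypothetical neural scorer
rng = np.random.default_rng(0)
Z = tree_partition_function(np.exp(rng.normal(size=(3, 3))), np.exp(rng.normal(size=3)))
print(Z)
```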

Sentence

Calibrating Energy-based Generative Adversarial Networks

1 code implementation 6 Feb 2017 Zihang Dai, Amjad Almahairi, Philip Bachman, Eduard Hovy, Aaron Courville

In this paper, we propose to equip Generative Adversarial Networks with the ability to produce direct energy estimates for samples. Specifically, we propose a flexible adversarial training framework, and prove this framework not only ensures the generator converges to the true data distribution, but also enables the discriminator to retain the density information at the global optimum.

Ranked #17 on Conditional Image Generation on CIFAR-10 (Inception score metric)

Image Generation

An Interpretable Knowledge Transfer Model for Knowledge Base Completion

no code implementations ACL 2017 Qizhe Xie, Xuezhe Ma, Zihang Dai, Eduard Hovy

Knowledge bases are important resources for a variety of natural language processing tasks but suffer from incompleteness.

Knowledge Base Completion Transfer Learning

Ontology-Aware Token Embeddings for Prepositional Phrase Attachment

1 code implementation ACL 2017 Pradeep Dasigi, Waleed Ammar, Chris Dyer, Eduard Hovy

Type-level word embeddings use the same set of parameters to represent all instances of a word regardless of its context, ignoring the inherent lexical ambiguity in language.

Prepositional Phrase Attachment Word Embeddings

Softmax Q-Distribution Estimation for Structured Prediction: A Theoretical Interpretation for RAML

no code implementations ICLR 2018 Xuezhe Ma, Pengcheng Yin, Jingzhou Liu, Graham Neubig, Eduard Hovy

Reward augmented maximum likelihood (RAML), a simple and effective learning framework to directly optimize towards the reward function in structured prediction tasks, has led to a number of impressive empirical successes.

Dependency Parsing Image Captioning +6

Controllable Invariance through Adversarial Feature Learning

no code implementations NeurIPS 2017 Qizhe Xie, Zihang Dai, Yulun Du, Eduard Hovy, Graham Neubig

Learning meaningful representations that maintain the content necessary for a particular task while filtering away detrimental variations is a problem of great interest in machine learning.

General Classification Image Classification +1

Shakespearizing Modern Language Using Copy-Enriched Sequence-to-Sequence Models

2 code implementations 4 Jul 2017 Harsh Jhamtani, Varun Gangal, Eduard Hovy, Eric Nyberg

Variations in writing styles are commonly used to adapt the content to a specific context, audience, or purpose.

Detecting and Explaining Causes From Text For a Time Series Event

1 code implementation EMNLP 2017 Dongyeop Kang, Varun Gangal, Ang Lu, Zheng Chen, Eduard Hovy

Our quantitative and human analyses show empirical evidence that our method successfully extracts meaningful causality relationships between time series with textual features and generates appropriate explanations for them.

Time Series Time Series Analysis

Finding Structure in Figurative Language: Metaphor Detection with Topic-based Frames

no code implementations WS 2017 Hyeju Jang, Keith Maki, Eduard Hovy, Carolyn Rosé

In this paper, we present a novel and highly effective method for induction and application of metaphor frame templates as a step toward detecting metaphor in extended discourse.

BIG-bench Machine Learning Machine Translation

Event Detection Using Frame-Semantic Parser

no code implementations WS 2017 Evangelia Spiliopoulou, Eduard Hovy, Teruko Mitamura

Recent methods for Event Detection focus on Deep Learning for automatic feature generation and feature ranking.

Event Detection General Classification +1

Identifying Semantic Edit Intentions from Revisions in Wikipedia

no code implementations EMNLP 2017 Diyi Yang, Aaron Halfaker, Robert Kraut, Eduard Hovy

Most studies on human editing focus merely on syntactic revision operations, failing to capture the intentions behind revision changes, which are essential for facilitating the single and collaborative writing process.

Information Retrieval Lexical Simplification +2

Shakespearizing Modern Language Using Copy-Enriched Sequence to Sequence Models

1 code implementation WS 2017 Harsh Jhamtani, Varun Gangal, Eduard Hovy, Eric Nyberg

Variations in writing styles are commonly used to adapt the content to a specific context, audience, or purpose.

Huntsville, hospitals, and hockey teams: Names can reveal your location

no code implementations WS 2017 Bahar Salehi, Dirk Hovy, Eduard Hovy, Anders Søgaard

Geolocation is the task of identifying a social media user's primary location, and in natural language processing there is a growing literature on the extent to which automated analysis of social media posts can help.

Knowledge Base Population Recommendation Systems +1

STCP: Simplified-Traditional Chinese Conversion and Proofreading

no code implementations IJCNLP 2017 Jiarui Xu, Xuezhe Ma, Chen-Tse Tsai, Eduard Hovy

This paper aims to provide an effective tool for conversion between Simplified Chinese and Traditional Chinese.

SPINE: SParse Interpretable Neural Embeddings

2 code implementations 23 Nov 2017 Anant Subramanian, Danish Pruthi, Harsh Jhamtani, Taylor Berg-Kirkpatrick, Eduard Hovy

We propose a novel variant of denoising k-sparse autoencoders that generates highly efficient and interpretable distributed word representations (word embeddings), beginning with existing word representations from state-of-the-art methods like GloVe and word2vec.
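
A generic PyTorch sketch of a denoising k-sparse autoencoder in the spirit of this description (toy dimensions; the paper's exact sparsity penalties are not reproduced here):

```python
import torch
import torch.nn as nn

class KSparseDenoisingAE(nn.Module):
    def __init__(self, dim=300, hidden=1000, k=50):
        super().__init__()
        self.enc = nn.Linear(dim, hidden)
        self.dec = nn.Linear(hidden, dim)
        self.k = k

    def forward(self, x):
        noisy = x + 0.1 * torch.randn_like(x)        # denoising: corrupt the input
        h = torch.relu(self.enc(noisy))
        topk = torch.topk(h, self.k, dim=-1)         # keep only the k largest activations
        mask = torch.zeros_like(h).scatter_(-1, topk.indices, torch.ones_like(topk.values))
        return self.dec(h * mask)

model = KSparseDenoisingAE()
x = torch.randn(32, 300)                             # stand-in for pretrained GloVe/word2vec vectors
loss = nn.functional.mse_loss(model(x), x)           # reconstruct the clean vectors
```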

Denoising Word Embeddings

Large-scale Cloze Test Dataset Designed by Teachers

no code implementations ICLR 2018 Qizhe Xie, Guokun Lai, Zihang Dai, Eduard Hovy

The cloze test is widely adopted in language exams to evaluate students' language proficiency.

Cloze Test

A Dataset of Peer Reviews (PeerRead): Collection, Insights and NLP Applications

1 code implementation NAACL 2018 Dongyeop Kang, Waleed Ammar, Bhavana Dalvi, Madeleine van Zuylen, Sebastian Kohlmeier, Eduard Hovy, Roy Schwartz

In the first task, we show that simple models can predict whether a paper is accepted with up to 21% error reduction compared to the majority baseline.

From Credit Assignment to Entropy Regularization: Two New Algorithms for Neural Sequence Prediction

1 code implementation ACL 2018 Zihang Dai, Qizhe Xie, Eduard Hovy

In this work, we study the credit assignment problem in reward augmented maximum likelihood (RAML) learning, and establish a theoretical equivalence between the token-level counterpart of RAML and the entropy regularized reinforcement learning.

reinforcement-learning Reinforcement Learning (RL)

Stack-Pointer Networks for Dependency Parsing

3 code implementations ACL 2018 Xuezhe Ma, Zecong Hu, Jingzhou Liu, Nanyun Peng, Graham Neubig, Eduard Hovy

Combining pointer networks~\citep{vinyals2015pointer} with an internal stack, the proposed model first reads and encodes the whole sentence, then builds the dependency tree top-down (from root-to-leaf) in a depth-first fashion.
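
To make the decoding order concrete, the toy sketch below replays the top-down, depth-first arc order on a small gold tree; the oracle that hands back each head's children stands in for the pointer network's learned choices:

```python
def stack_pointer_order(children, root=0):
    """Order in which arcs would be emitted when decoding top-down with a stack.

    children: dict mapping a head position to the list of its child positions
    """
    arcs, stack = [], [root]
    remaining = {h: list(c) for h, c in children.items()}
    while stack:
        head = stack[-1]
        if remaining.get(head):
            child = remaining[head].pop(0)   # the pointer selects the next child
            arcs.append((head, child))
            stack.append(child)              # descend depth-first into the child
        else:
            stack.pop()                      # head is finished: return to its parent
    return arcs

# toy 5-word sentence, position 0 being the artificial root
print(stack_pointer_order({0: [2], 2: [1, 4], 4: [3, 5]}))
# -> [(0, 2), (2, 1), (2, 4), (4, 3), (4, 5)]
```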

Dependency Parsing Sentence

AdvEntuRe: Adversarial Training for Textual Entailment with Knowledge-Guided Examples

1 code implementation ACL 2018 Dongyeop Kang, Tushar Khot, Ashish Sabharwal, Eduard Hovy

We consider the problem of learning textual entailment models with limited supervision (5K-10K training examples), and present two complementary approaches for it.

Natural Language Inference Negation

Automatic Event Salience Identification

1 code implementation EMNLP 2018 Zhengzhong Liu, Chenyan Xiong, Teruko Mitamura, Eduard Hovy

Our analyses demonstrate that our neural model captures interesting connections between salience and discourse unit relations (e.g., scripts and frame structures).

Measuring Density and Similarity of Task Relevant Information in Neural Representations

no code implementations 27 Sep 2018 Danish Pruthi, Mansi Gupta, Nitish Kumar Kulkarni, Graham Neubig, Eduard Hovy

Neural models achieve state-of-the-art performance due to their ability to extract salient features useful to downstream tasks.

Sentence Transfer Learning

The Profiling Machine: Active Generalization over Knowledge

no code implementations 1 Oct 2018 Filip Ilievski, Eduard Hovy, Qizhe Xie, Piek Vossen

The human mind is a powerful multifunctional knowledge storage and management system that performs generalization, type inference, anomaly detection, stereotyping, and other tasks.

Anomaly Detection Management

Neural Machine Translation with Adequacy-Oriented Learning

no code implementations 21 Nov 2018 Xiang Kong, Zhaopeng Tu, Shuming Shi, Eduard Hovy, Tong Zhang

Although Neural Machine Translation (NMT) models have advanced state-of-the-art performance in machine translation, they face problems such as inadequate translation.

Attribute Machine Translation +3

MAE: Mutual Posterior-Divergence Regularization for Variational AutoEncoders

no code implementations ICLR 2019 Xuezhe Ma, Chunting Zhou, Eduard Hovy

Variational Autoencoder (VAE), a simple and effective deep generative model, has led to a number of impressive empirical successes and spawned many advanced variants and theoretical investigations.

Density Estimation Image Generation +1

An Adversarial Approach to High-Quality, Sentiment-Controlled Neural Dialogue Generation

no code implementations 22 Jan 2019 Xiang Kong, Bohan Li, Graham Neubig, Eduard Hovy, Yiming Yang

In this work, we propose a method for neural dialogue response generation that allows not only generating semantically reasonable responses according to the dialogue history, but also explicitly controlling the sentiment of the response via sentiment labels.

Dialogue Generation Response Generation +1

MaCow: Masked Convolutional Generative Flow

2 code implementations NeurIPS 2019 Xuezhe Ma, Xiang Kong, Shanghang Zhang, Eduard Hovy

Flow-based generative models, conceptually attractive due to tractability of both the exact log-likelihood computation and latent-variable inference, and efficiency of both training and sampling, have led to a number of impressive empirical successes and spawned many advanced variants and theoretical investigations.

Computational Efficiency Density Estimation +1

The ARIEL-CMU Systems for LoReHLT18

no code implementations 24 Feb 2019 Aditi Chaudhary, Siddharth Dalmia, Junjie Hu, Xinjian Li, Austin Matthews, Aldrian Obaja Muis, Naoki Otani, Shruti Rijhwani, Zaid Sheikh, Nidhi Vyas, Xinyi Wang, Jiateng Xie, Ruochen Xu, Chunting Zhou, Peter J. Jansen, Yiming Yang, Lori Levin, Florian Metze, Teruko Mitamura, David R. Mortensen, Graham Neubig, Eduard Hovy, Alan W. Black, Jaime Carbonell, Graham V. Horwood, Shabnam Tafreshi, Mona Diab, Efsun S. Kayi, Noura Farra, Kathleen McKeown

This paper describes the ARIEL-CMU submissions to the Low Resource Human Language Technologies (LoReHLT) 2018 evaluations for the tasks Machine Translation (MT), Entity Discovery and Linking (EDL), and detection of Situation Frames in Text and Speech (SF Text and Speech).

Machine Translation Translation

Unsupervised Data Augmentation for Consistency Training

20 code implementations NeurIPS 2020 Qizhe Xie, Zihang Dai, Eduard Hovy, Minh-Thang Luong, Quoc V. Le

In this work, we present a new perspective on how to effectively noise unlabeled examples and argue that the quality of noising, specifically those produced by advanced data augmentation methods, plays a crucial role in semi-supervised learning.
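
A minimal PyTorch sketch of the consistency-training objective this describes: supervised cross-entropy plus a KL term pulling predictions on augmented unlabeled inputs toward the (fixed) predictions on the originals. The toy linear model and Gaussian perturbation below stand in for a real classifier and for advanced augmentation such as back-translation, and details like prediction sharpening are omitted:

```python
import torch
import torch.nn.functional as F

def uda_loss(model, x_sup, y_sup, x_unsup, x_unsup_aug, lam=1.0):
    sup_loss = F.cross_entropy(model(x_sup), y_sup)
    with torch.no_grad():
        p_orig = F.softmax(model(x_unsup), dim=-1)          # fixed target distribution
    log_p_aug = F.log_softmax(model(x_unsup_aug), dim=-1)
    consistency = F.kl_div(log_p_aug, p_orig, reduction="batchmean")
    return sup_loss + lam * consistency

model = torch.nn.Linear(16, 4)                              # toy stand-in classifier
x_s, y_s = torch.randn(8, 16), torch.randint(0, 4, (8,))
x_u = torch.randn(8, 16)
x_u_aug = x_u + 0.1 * torch.randn_like(x_u)                 # stand-in for an advanced augmentation
print(uda_loss(model, x_s, y_s, x_u, x_u_aug))
```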

Image Augmentation Semi-Supervised Image Classification +2

Let's Make Your Request More Persuasive: Modeling Persuasive Strategies via Semi-Supervised Neural Nets on Crowdfunding Platforms

no code implementations NAACL 2019 Diyi Yang, Jiaao Chen, Zichao Yang, Dan Jurafsky, Eduard Hovy

Modeling what makes a request persuasive - eliciting the desired response from a reader - is critical to the study of propaganda, behavioral economics, and advertising.

Persuasiveness Sentence

Iterative Search for Weakly Supervised Semantic Parsing

no code implementations NAACL 2019 Pradeep Dasigi, Matt Gardner, Shikhar Murty, Luke Zettlemoyer, Eduard Hovy

Training semantic parsers from question-answer pairs typically involves searching over an exponentially large space of logical forms, and an unguided search can easily be misled by spurious logical forms that coincidentally evaluate to the correct answer.

Semantic Parsing Visual Reasoning

An Empirical Investigation of Structured Output Modeling for Graph-based Neural Dependency Parsing

1 code implementation ACL 2019 Zhisong Zhang, Xuezhe Ma, Eduard Hovy

In this paper, we investigate the aspect of structured output modeling for the state-of-the-art graph-based neural dependency parser (Dozat and Manning, 2017).

Dependency Parsing Sentence

Exploring Numeracy in Word Embeddings

no code implementations ACL 2019 Aakanksha Naik, Abhilasha Ravichander, Carolyn Rose, Eduard Hovy

In this work, we show that existing embedding models are inadequate at constructing representations that capture salient aspects of mathematical meaning for numbers, which is important for language understanding.

Word Embeddings

Toward Comprehensive Understanding of a Sentiment Based on Human Motives

1 code implementation ACL 2019 Naoki Otani, Eduard Hovy

In sentiment detection, the natural language processing community has focused on determining holders, facets, and valences, but has paid little attention to the reasons for sentiment decisions.

Transfer Learning

A Cascade Model for Proposition Extraction in Argumentation

no code implementations WS 2019 Yohan Jo, Jacky Visser, Chris Reed, Eduard Hovy

Propositions are the basic units of an argument and the primary building blocks of most argument mining systems.

Argument Mining Segmentation +1

Earlier Isn't Always Better: Sub-aspect Analysis on Corpus and System Biases in Summarization

1 code implementation IJCNLP 2019 Taehee Jung, Dongyeop Kang, Lucas Mentch, Eduard Hovy

We find that while position exhibits substantial bias in news articles, this is not the case, for example, with academic papers and meeting minutes.

News Summarization Position

Linguistic Versus Latent Relations for Modeling Coherent Flow in Paragraphs

1 code implementation IJCNLP 2019 Dongyeop Kang, Hiroaki Hayashi, Alan W. Black, Eduard Hovy

In order to produce a coherent flow of text, we explore two forms of intersentential relations in a paragraph: one is a human-created linguistic relation that forms a structure (e.g., a discourse tree), and the other is a relation from latent representations learned from the sentences themselves.

Language Modelling Relation

Nested Named Entity Recognition via Second-best Sequence Learning and Decoding

3 code implementations 5 Sep 2019 Takashi Shibuya, Eduard Hovy

When an entity name contains other names within it, the identification of all combinations of names can become difficult and expensive.

named-entity-recognition Named Entity Recognition +3

Definition Frames: Using Definitions for Hybrid Concept Representations

1 code implementation COLING 2020 Evangelia Spiliopoulou, Artidoro Pagnoni, Eduard Hovy

Advances in word representations have shown tremendous improvements in downstream NLP tasks, but lack semantic interpretability.

Relation Extraction Word Embeddings +1

Learning the Difference that Makes a Difference with Counterfactually-Augmented Data

2 code implementations ICLR 2020 Divyansh Kaushik, Eduard Hovy, Zachary C. Lipton

While classifiers trained on either original or manipulated data alone are sensitive to spurious features (e.g., mentions of genre), models trained on the combined data are less sensitive to this signal.

counterfactual Data Augmentation +2

Style is NOT a single variable: Case Studies for Cross-Style Language Understanding

2 code implementations 9 Nov 2019 Dongyeop Kang, Eduard Hovy

This paper provides the benchmark corpus (xSLUE) that combines existing datasets and collects a new one for sentence-level cross-style language understanding and evaluation.

Sentence

Decompressing Knowledge Graph Representations for Link Prediction

1 code implementation 11 Nov 2019 Xiang Kong, Xianyang Chen, Eduard Hovy

Specifically, embeddings of entities and relationships are first decompressed to a more expressive and robust space by decompressing functions, then knowledge graph embedding models are trained in this new feature space.

Knowledge Graph Embedding Knowledge Graphs +1

Self-training with Noisy Student improves ImageNet classification

12 code implementations CVPR 2020 Qizhe Xie, Minh-Thang Luong, Eduard Hovy, Quoc V. Le

During the learning of the student, we inject noise such as dropout, stochastic depth, and data augmentation via RandAugment to the student so that the student generalizes better than the teacher.
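
A schematic, runnable stand-in for the noisy-student self-training loop (scikit-learn on synthetic data): Gaussian input noise replaces the paper's dropout, stochastic depth, and RandAugment, and the student here is not actually larger than the teacher, so this only illustrates the control flow:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X_lab, y_lab = rng.normal(size=(100, 10)), rng.integers(0, 2, 100)   # synthetic labeled data
X_unlab = rng.normal(size=(500, 10))                                 # synthetic unlabeled data

teacher = LogisticRegression().fit(X_lab, y_lab)               # 1. train the teacher on labeled data
for _ in range(3):                                             # 4. iterate, letting the student become the teacher
    pseudo = teacher.predict(X_unlab)                          # 2. pseudo-label unlabeled data (teacher not noised)
    X_noisy = X_unlab + 0.3 * rng.normal(size=X_unlab.shape)   # 3. noise only the student's inputs
    student = LogisticRegression().fit(np.vstack([X_lab, X_noisy]),
                                       np.concatenate([y_lab, pseudo]))
    teacher = student
```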

Ranked #16 on Image Classification on ImageNet ReaL (using extra training data)

Data Augmentation General Classification +1

Decoupling Global and Local Representations via Invertible Generative Flows

1 code implementation ICLR 2021 Xuezhe Ma, Xiang Kong, Shanghang Zhang, Eduard Hovy

In this work, we propose a new generative model that is capable of automatically decoupling global and local representations of images in an entirely unsupervised setting, by embedding a generative flow in the VAE framework to model the decoder.

Density Estimation Image Generation +2

Machine-Aided Annotation for Fine-Grained Proposition Types in Argumentation

no code implementations LREC 2020 Yohan Jo, Elijah Mayfield, Chris Reed, Eduard Hovy

We introduce a corpus of the 2016 U.S. presidential debates and commentary, containing 4,648 argumentative propositions annotated with fine-grained proposition types.

BIG-bench Machine Learning

Probing the Probing Paradigm: Does Probing Accuracy Entail Task Relevance?

no code implementations EACL 2021 Abhilasha Ravichander, Yonatan Belinkov, Eduard Hovy

Although neural models have achieved impressive results on several NLP benchmarks, little is understood about the mechanisms they use to perform language tasks.

Natural Language Inference Sentence +1

Measuring Forecasting Skill from Text

1 code implementation ACL 2020 Shi Zong, Alan Ritter, Eduard Hovy

We present a number of linguistic metrics computed over text associated with people's predictions about the future, including uncertainty, readability, and emotion.

A Two-Step Approach for Implicit Event Argument Detection

no code implementations ACL 2020 Zhisong Zhang, Xiang Kong, Zhengzhong Liu, Xuezhe Ma, Eduard Hovy

It remains a challenge to detect implicit arguments, calling for more future work of document-level modeling for this task.

Sentence Vocal Bursts Valence Prediction

Explaining The Efficacy of Counterfactually Augmented Data

no code implementations ICLR 2021 Divyansh Kaushik, Amrith Setlur, Eduard Hovy, Zachary C. Lipton

In attempts to produce ML models less reliant on spurious patterns in NLP datasets, researchers have recently proposed curating counterfactually augmented data (CAD) via a human-in-the-loop process in which given some documents and their (initial) labels, humans must revise the text to make a counterfactual label applicable.

counterfactual Domain Generalization

Detecting Attackable Sentences in Arguments

1 code implementation EMNLP 2020 Yohan Jo, Seojin Bang, Emaad Manzoor, Eduard Hovy, Chris Reed

Finding attackable sentences in an argument is the first step toward successful refutation in argumentation.

BIG-bench Machine Learning Sentence

Extracting Implicitly Asserted Propositions in Argumentation

1 code implementation EMNLP 2020 Yohan Jo, Jacky Visser, Chris Reed, Eduard Hovy

Our study may inform future research on argument mining and the semantics of these rhetorical devices in argumentation.

Argument Mining

BERTering RAMS: What and How Much does BERT Already Know About Event Arguments? -- A Study on the RAMS Dataset

no code implementations 8 Oct 2020 Varun Gangal, Eduard Hovy

Next, we find that linear combinations of these heads, estimated with approximately 11% of the available event argument detection supervision, can push performance considerably higher for some roles, the highest two being Victim (68.29% accuracy) and Artifact (58.82% accuracy).

Sentence

Plan ahead: Self-Supervised Text Planning for Paragraph Completion Task

no code implementations EMNLP 2020 Dongyeop Kang, Eduard Hovy

To address that, we propose a self-supervised text planner SSPlanner that predicts what to say first (content prediction), then guides the pretrained language model (surface realization) using the predicted content.

Language Modelling Sentence

Pair the Dots: Jointly Examining Training History and Test Stimuli for Model Interpretability

no code implementations 14 Oct 2020 Yuxian Meng, Chun Fan, Zijun Sun, Eduard Hovy, Fei Wu, Jiwei Li

Any prediction from a model is made by a combination of learning history and test stimuli.

A Dataset for Tracking Entities in Open Domain Procedural Text

no code implementations EMNLP 2020 Niket Tandon, Keisuke Sakaguchi, Bhavana Dalvi Mishra, Dheeraj Rajagopal, Peter Clark, Michal Guerquin, Kyle Richardson, Eduard Hovy

Our solution is a new task formulation where, given just a procedural text as input, the task is to generate a set of state change tuples (entity, attribute, before-state, after-state) for each step, where the entity, attribute, and state values must be predicted from an open vocabulary.
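
The target structure is easy to picture as one small record per state change; the example values below are hypothetical, not drawn from the dataset:

```python
from dataclasses import dataclass

@dataclass
class StateChange:
    step: int
    entity: str
    attribute: str
    before: str
    after: str

# hypothetical output for step 2 of a "making a cake" procedure
step_2_changes = [
    StateChange(step=2, entity="butter", attribute="state", before="solid", after="melted"),
    StateChange(step=2, entity="bowl", attribute="contents", before="empty", after="butter"),
]
```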

Attribute

Event-Related Bias Removal for Real-time Disaster Events

no code implementations Findings of the Association for Computational Linguistics 2020 Evangelia Spiliopoulou, Salvador Medina Maza, Eduard Hovy, Alexander Hauptmann

Furthermore, the classification of information in real-time systems requires training on out-of-domain data, as we do not have any data from a new emerging crisis.

General Classification

Incorporating a Local Translation Mechanism into Non-autoregressive Translation

1 code implementation EMNLP 2020 Xiang Kong, Zhisong Zhang, Eduard Hovy

In this work, we introduce a novel local autoregressive translation (LAT) mechanism into non-autoregressive translation (NAT) models so as to capture local dependencies among target outputs.

Machine Translation Position +2

Exploring Neural Entity Representations for Semantic Information

1 code implementation EMNLP (BlackboxNLP) 2020 Andrew Runge, Eduard Hovy

Neural methods for embedding entities are typically extrinsically evaluated on downstream tasks and, more recently, intrinsically using probing tasks.

Entity Linking

On the Systematicity of Probing Contextualized Word Representations: The Case of Hypernymy in BERT

1 code implementation Joint Conference on Lexical and Computational Semantics 2020 Abhilasha Ravichander, Eduard Hovy, Kaheer Suleman, Adam Trischler, Jackie Chi Kit Cheung

In particular, we demonstrate through a simple consistency probe that the ability to correctly retrieve hypernyms in cloze tasks, as used in prior work, does not correspond to systematic knowledge in BERT.

Measuring and Improving Consistency in Pretrained Language Models

1 code implementation1 Feb 2021 Yanai Elazar, Nora Kassner, Shauli Ravfogel, Abhilasha Ravichander, Eduard Hovy, Hinrich Schütze, Yoav Goldberg

In this paper we study the question: Are Pretrained Language Models (PLMs) consistent with respect to factual knowledge?
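
A minimal sketch of the kind of consistency check at issue: query a masked language model with two paraphrases of one factual relation and see whether the top predictions agree. The model choice and prompts below are illustrative, not the paper's benchmark:

```python
from transformers import pipeline

fill = pipeline("fill-mask", model="bert-base-cased")     # illustrative model choice

paraphrases = [
    "Paris is the capital of [MASK].",
    "The capital of [MASK] is Paris.",
]
answers = [fill(p)[0]["token_str"].strip() for p in paraphrases]
print(answers, "consistent" if len(set(answers)) == 1 else "inconsistent")
```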

NoiseQA: Challenge Set Evaluation for User-Centric Question Answering

2 code implementations EACL 2021 Abhilasha Ravichander, Siddharth Dalmia, Maria Ryskina, Florian Metze, Eduard Hovy, Alan W Black

When Question-Answering (QA) systems are deployed in the real world, users query them through a variety of interfaces, such as speaking to voice assistants, typing questions into a search engine, or even translating questions to languages supported by the QA system.

Question Answering

StylePTB: A Compositional Benchmark for Fine-grained Controllable Text Style Transfer

2 code implementations NAACL 2021 Yiwei Lyu, Paul Pu Liang, Hai Pham, Eduard Hovy, Barnabás Póczos, Ruslan Salakhutdinov, Louis-Philippe Morency

Many of the existing style transfer benchmarks primarily focus on individual high-level semantic changes (e.g., positive to negative), which enable controllability at a high level but do not offer fine-grained control involving sentence structure, emphasis, and content of the sentence.

Benchmarking Sentence +2

NAREOR: The Narrative Reordering Problem

1 code implementation 14 Apr 2021 Varun Gangal, Steven Y. Feng, Malihe Alikhani, Teruko Mitamura, Eduard Hovy

In this paper, we propose and investigate the task of Narrative Reordering (NAREOR) which involves rewriting a given story in a different narrative order while preserving its plot.

A Survey of Data Augmentation Approaches for NLP

1 code implementation Findings (ACL) 2021 Steven Y. Feng, Varun Gangal, Jason Wei, Sarath Chandar, Soroush Vosoughi, Teruko Mitamura, Eduard Hovy

In this paper, we present a comprehensive and unifying survey of data augmentation for NLP by summarizing the literature in a structured manner.

Data Augmentation

Classifying Argumentative Relations Using Logical Mechanisms and Argumentation Schemes

1 code implementation 17 May 2021 Yohan Jo, Seojin Bang, Chris Reed, Eduard Hovy

While argument mining has achieved significant success in classifying argumentative relations between statements (support, attack, and neutral), we have a limited computational understanding of logical mechanisms that constitute those relations.

Argument Mining Relation +1

Comparative Error Analysis in Neural and Finite-state Models for Unsupervised Character-level Transduction

no code implementations ACL (SIGMORPHON) 2021 Maria Ryskina, Eduard Hovy, Taylor Berg-Kirkpatrick, Matthew R. Gormley

Traditionally, character-level transduction problems have been solved with finite-state models designed to encode structural and linguistic knowledge of the underlying process, whereas recent approaches rely on the power and flexibility of sequence-to-sequence models with attention.

Style is NOT a single variable: Case Studies for Cross-Stylistic Language Understanding

1 code implementation ACL 2021 Dongyeop Kang, Eduard Hovy

This paper provides the benchmark corpus (XSLUE) that combines existing datasets and collects a new one for sentence-level cross-style language understanding and evaluation.

Sentence

Dual Graph Convolutional Networks for Aspect-based Sentiment Analysis

1 code implementation ACL 2021 Ruifan Li, Hao Chen, Fangxiang Feng, Zhanyu Ma, Xiaojie Wang, Eduard Hovy

To overcome these challenges, in this paper, we propose a dual graph convolutional networks (DualGCN) model that considers the complementarity of syntax structures and semantic correlations simultaneously.

Aspect-Based Sentiment Analysis Aspect-Based Sentiment Analysis (ABSA) +2

SAPPHIRE: Approaches for Enhanced Concept-to-Text Generation

1 code implementation INLG (ACL) 2021 Steven Y. Feng, Jessica Huynh, Chaitanya Narisetty, Eduard Hovy, Varun Gangal

We motivate and propose a suite of simple but effective improvements for concept-to-text generation called SAPPHIRE: Set Augmentation and Post-hoc PHrase Infilling and REcombination.

Concept-To-Text Generation Specificity

Interpreting Deep Learning Models in Natural Language Processing: A Review

no code implementations 20 Oct 2021 Xiaofei Sun, Diyi Yang, Xiaoya Li, Tianwei Zhang, Yuxian Meng, Han Qiu, Guoyin Wang, Eduard Hovy, Jiwei Li

Neural network models have achieved state-of-the-art performance in a wide range of natural language processing (NLP) tasks.

Think about it! Improving defeasible reasoning by first modeling the question scenario

1 code implementation 24 Oct 2021 Aman Madaan, Niket Tandon, Dheeraj Rajagopal, Peter Clark, Yiming Yang, Eduard Hovy

Defeasible reasoning is the mode of reasoning where conclusions can be overturned by taking into account new evidence.

Template Filling for Controllable Commonsense Reasoning

no code implementations 31 Oct 2021 Dheeraj Rajagopal, Vivek Khetan, Bogdan Sacaleanu, Anatole Gershman, Andrew Fano, Eduard Hovy

To enable better controllability, we propose to study commonsense reasoning as a template filling task (TemplateCSR), where the language model fills reasoning templates with the given constraints as control factors.

Multiple-choice

NL-Augmenter: A Framework for Task-Sensitive Natural Language Augmentation

2 code implementations 6 Dec 2021 Kaustubh D. Dhole, Varun Gangal, Sebastian Gehrmann, Aadesh Gupta, Zhenhao Li, Saad Mahamood, Abinaya Mahendiran, Simon Mille, Ashish Shrivastava, Samson Tan, Tongshuang Wu, Jascha Sohl-Dickstein, Jinho D. Choi, Eduard Hovy, Ondrej Dusek, Sebastian Ruder, Sajant Anand, Nagender Aneja, Rabin Banjade, Lisa Barthe, Hanna Behnke, Ian Berlot-Attwell, Connor Boyle, Caroline Brun, Marco Antonio Sobrevilla Cabezudo, Samuel Cahyawijaya, Emile Chapuis, Wanxiang Che, Mukund Choudhary, Christian Clauss, Pierre Colombo, Filip Cornell, Gautier Dagan, Mayukh Das, Tanay Dixit, Thomas Dopierre, Paul-Alexis Dray, Suchitra Dubey, Tatiana Ekeinhor, Marco Di Giovanni, Tanya Goyal, Rishabh Gupta, Louanes Hamla, Sang Han, Fabrice Harel-Canada, Antoine Honore, Ishan Jindal, Przemyslaw K. Joniak, Denis Kleyko, Venelin Kovatchev, Kalpesh Krishna, Ashutosh Kumar, Stefan Langer, Seungjae Ryan Lee, Corey James Levinson, Hualou Liang, Kaizhao Liang, Zhexiong Liu, Andrey Lukyanenko, Vukosi Marivate, Gerard de Melo, Simon Meoni, Maxime Meyer, Afnan Mir, Nafise Sadat Moosavi, Niklas Muennighoff, Timothy Sum Hon Mun, Kenton Murray, Marcin Namysl, Maria Obedkova, Priti Oli, Nivranshu Pasricha, Jan Pfister, Richard Plant, Vinay Prabhu, Vasile Pais, Libo Qin, Shahab Raji, Pawan Kumar Rajpoot, Vikas Raunak, Roy Rinberg, Nicolas Roberts, Juan Diego Rodriguez, Claude Roux, Vasconcellos P. H. S., Ananya B. Sai, Robin M. Schmidt, Thomas Scialom, Tshephisho Sefara, Saqib N. Shamsi, Xudong Shen, Haoyue Shi, Yiwen Shi, Anna Shvets, Nick Siegel, Damien Sileo, Jamie Simon, Chandan Singh, Roman Sitelew, Priyank Soni, Taylor Sorensen, William Soto, Aman Srivastava, KV Aditya Srivatsa, Tony Sun, Mukund Varma T, A Tabassum, Fiona Anting Tan, Ryan Teehan, Mo Tiwari, Marie Tolkiehn, Athena Wang, Zijian Wang, Gloria Wang, Zijie J. Wang, Fuxuan Wei, Bryan Wilie, Genta Indra Winata, Xinyi Wu, Witold Wydmański, Tianbao Xie, Usama Yaseen, Michael A. Yee, Jing Zhang, Yue Zhang

Data augmentation is an important component in the robustness evaluation of models in natural language processing (NLP) and in enhancing the diversity of the data they are trained on.

Data Augmentation

PANCETTA: Phoneme Aware Neural Completion to Elicit Tongue Twisters Automatically

no code implementations 13 Sep 2022 Sedrick Scott Keh, Steven Y. Feng, Varun Gangal, Malihe Alikhani, Eduard Hovy

Through automatic and human evaluation, as well as qualitative analysis, we show that PANCETTA generates novel, phonetically difficult, fluent, and semantically meaningful tongue twisters.

CHARD: Clinical Health-Aware Reasoning Across Dimensions for Text Generation Models

1 code implementation 9 Oct 2022 Steven Y. Feng, Vivek Khetan, Bogdan Sacaleanu, Anatole Gershman, Eduard Hovy

We motivate and introduce CHARD: Clinical Health-Aware Reasoning across Dimensions, to investigate the capability of text generation models to act as implicit clinical knowledge bases and generate free-flow textual explanations about various health-related conditions across several dimensions.

Clinical Knowledge Data Augmentation +1

A Survey of Active Learning for Natural Language Processing

1 code implementation 18 Oct 2022 Zhisong Zhang, Emma Strubell, Eduard Hovy

In this work, we provide a survey of active learning (AL) for its applications in natural language processing (NLP).

Active Learning Structured Prediction

Data-efficient Active Learning for Structured Prediction with Partial Annotation and Self-Training

1 code implementation 22 May 2023 Zhisong Zhang, Emma Strubell, Eduard Hovy

To address this challenge, we adopt an error estimator to adaptively decide the partial selection ratio according to the current model's capability.

Active Learning Structured Prediction

Sim-GPT: Text Similarity via GPT Annotated Data

1 code implementation 9 Dec 2023 Shuhe Wang, Beiming Cao, Shengyu Zhang, Xiaoya Li, Jiwei Li, Fei Wu, Guoyin Wang, Eduard Hovy

Due to the lack of a large collection of high-quality labeled sentence pairs with textual similarity scores, existing approaches for Semantic Textual Similarity (STS) mostly rely on unsupervised techniques or training signals that are only partially correlated with textual similarity, e.g., NLI-based datasets.

Semantic Textual Similarity Sentence +2

Exploring Multi-Document Information Consolidation for Scientific Sentiment Summarization

no code implementations 28 Feb 2024 Miao Li, Jey Han Lau, Eduard Hovy

Modern natural language generation systems with LLMs exhibit the capability to generate a plausible summary of multiple documents; however, it is uncertain whether models truly possess the ability to consolidate information when generating summaries, especially for source documents with opinionated information.

Review Generation Text Generation

Overview of the First Workshop on Scholarly Document Processing (SDP)

no code implementations EMNLP (sdp) 2020 Muthu Kumar Chandrasekaran, Guy Feigenblat, Dayne Freitag, Tirthankar Ghosal, Eduard Hovy, Philipp Mayr, Michal Shmueli-Scheuer, Anita de Waard

To reach the broader NLP and AI/ML community, pool distributed efforts, and enable shared access to published research, we held the 1st Workshop on Scholarly Document Processing at EMNLP 2020 as a virtual event.

Comparing Span Extraction Methods for Semantic Role Labeling

1 code implementation ACL (spnlp) 2021 Zhisong Zhang, Emma Strubell, Eduard Hovy

In this work, we empirically compare span extraction methods for the task of semantic role labeling (SRL).

Semantic Role Labeling

On the Benefit of Syntactic Supervision for Cross-lingual Transfer in Semantic Role Labeling

1 code implementation EMNLP 2021 Zhisong Zhang, Emma Strubell, Eduard Hovy

Although recent developments in neural architectures and pre-trained representations have greatly increased state-of-the-art model performance on fully-supervised semantic role labeling (SRL), the task remains challenging for languages where supervised SRL training data are not abundant.

Cross-Lingual Transfer Semantic Role Labeling

Think about it! Improving defeasible reasoning by first modeling the question scenario.

1 code implementation EMNLP 2021 Aman Madaan, Niket Tandon, Dheeraj Rajagopal, Peter Clark, Yiming Yang, Eduard Hovy

Defeasible reasoning is the mode of reasoning where conclusions can be overturned by taking into account new evidence.
