no code implementations • EMNLP (sdp) 2020 • Muthu Kumar Chandrasekaran, Guy Feigenblat, Eduard Hovy, Abhilasha Ravichander, Michal Shmueli-Scheuer, Anita de Waard
We present the results of three Shared Tasks held at the Scholarly Document Processing Workshop at EMNLP 2020: CL-SciSumm, LaySumm and LongSumm.
no code implementations • EMNLP (sdp) 2020 • Muthu Kumar Chandrasekaran, Guy Feigenblat, Dayne Freitag, Tirthankar Ghosal, Eduard Hovy, Philipp Mayr, Michal Shmueli-Scheuer, Anita de Waard
To reach the broader NLP and AI/ML community, pool distributed efforts, and enable shared access to published research, we held the 1st Workshop on Scholarly Document Processing at EMNLP 2020 as a virtual event.
1 code implementation • EMNLP 2021 • Zhisong Zhang, Emma Strubell, Eduard Hovy
Although recent developments in neural architectures and pre-trained representations have greatly increased state-of-the-art model performance on fully-supervised semantic role labeling (SRL), the task remains challenging for languages where supervised SRL training data are not abundant.
1 code implementation • ACL (spnlp) 2021 • Zhisong Zhang, Emma Strubell, Eduard Hovy
In this work, we empirically compare span extraction methods for the task of semantic role labeling (SRL).
no code implementations • EMNLP (BlackboxNLP) 2020 • Varun Gangal, Eduard Hovy
Next, we find that linear combinations of these heads, estimated with approx. 11% of the available event argument detection supervision, can push performance well higher for some roles.
1 code implementation • EMNLP 2021 • Aman Madaan, Niket Tandon, Dheeraj Rajagopal, Peter Clark, Yiming Yang, Eduard Hovy
Defeasible reasoning is the mode of reasoning where conclusions can be overturned by taking into account new evidence.
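As a toy illustration of this definition (the rule, premise, and function names below are invented for illustration, not from the paper):

```python
# Toy sketch of defeasible reasoning: a default conclusion drawn from a
# premise stands until defeating evidence ("weakeners") is observed.

def conclusion_holds(premise_holds, weakeners):
    """The default conclusion stands unless some defeating evidence appears."""
    return premise_holds and not any(weakeners)

# Premise: "the grass is wet" -> default conclusion: "it rained".
no_evidence = conclusion_holds(True, [])
with_defeater = conclusion_holds(True, ["sprinkler was on"])
```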
no code implementations • LREC 2022 • Dheeraj Rajagopal, Xuchao Zhang, Michael Gamon, Sujay Kumar Jauhar, Diyi Yang, Eduard Hovy
Document authoring involves a lengthy revision process, marked by individual edits that are frequently linked to comments.
1 code implementation • 10 Mar 2025 • Shuhe Wang, Xiaoya Li, Jiwei Li, Guoyin Wang, Xiaofei Sun, Bob Zhu, Han Qiu, Mo Yu, Shengjie Shen, Eduard Hovy
However, none of these datasets are publicly available, which restricts transparency and hinders further advancements in the field.
1 code implementation • 21 Feb 2025 • Yilin Geng, Haonan Li, Honglin Mu, Xudong Han, Timothy Baldwin, Omri Abend, Eduard Hovy, Lea Frermann
Large language models (LLMs) are increasingly deployed with hierarchical instruction schemes, where certain instructions (e.g., system-level directives) are expected to take precedence over others (e.g., user messages).
no code implementations • 20 Feb 2025 • Zhiwei Liu, Kailai Yang, Eduard Hovy, Sophia Ananiadou
The widespread dissemination of rumors on social media has a significant impact on people's lives, potentially leading to public panic and fear.
no code implementations • 27 Jan 2025 • Miao Li, Jey Han Lau, Eduard Hovy, Mirella Lapata
Opinion summarization plays a key role in deriving meaningful insights from large-scale online reviews.
1 code implementation • 26 Jan 2025 • Shuhe Wang, Xiaoya Li, Xiaofei Sun, Guoyin Wang, Tianwei Zhang, Jiwei Li, Eduard Hovy
Existing face identity (FaceID) customization methods perform well but are limited to generating identical faces as the input, while in real-world applications, users often desire images of the same person but with variations, such as different expressions (e.g., smiling, angry) or angles (e.g., side profile).
no code implementations • 24 Dec 2024 • Haonan Li, Xudong Han, Zenan Zhai, Honglin Mu, Hao Wang, Zhenxuan Zhang, Yilin Geng, Shom Lin, Renxi Wang, Artem Shelmanov, Xiangyu Qi, Yuxia Wang, Donghai Hong, Youliang Yuan, Meng Chen, Haoqin Tu, Fajri Koto, Tatsuki Kuribayashi, Cong Zeng, Rishabh Bhardwaj, Bingchen Zhao, Yawen Duan, Yi Liu, Emad A. Alghamdi, Yaodong Yang, Yinpeng Dong, Soujanya Poria, PengFei Liu, Zhengzhong Liu, Xuguang Ren, Eduard Hovy, Iryna Gurevych, Preslav Nakov, Monojit Choudhury, Timothy Baldwin
To address this gap, we introduce Libra-Leaderboard, a comprehensive framework designed to rank LLMs through a balanced evaluation of performance and safety.
1 code implementation • 5 Dec 2024 • Shuhe Wang, Shengyu Zhang, Jie Zhang, Runyi Hu, Xiaoya Li, Tianwei Zhang, Jiwei Li, Fei Wu, Guoyin Wang, Eduard Hovy
This paper surveys research in the rapidly growing field of enhancing large language models (LLMs) with reinforcement learning (RL), a technique that enables LLMs to improve their performance by receiving feedback in the form of rewards based on the quality of their outputs, allowing them to generate more accurate, coherent, and contextually appropriate responses.
1 code implementation • 10 Oct 2024 • Shuhe Wang, Guoyin Wang, Yizhong Wang, Jiwei Li, Eduard Hovy, Chen Guo
Although it has demonstrated effectiveness during pre-training, there remains a lack of comprehensive analysis for the supervised fine-tuning (SFT) stage on the following points: (1) whether packing can effectively enhance training efficiency while maintaining performance, (2) the suitable size of the model and dataset for fine-tuning with the packing method, and (3) whether packing unrelated or related training samples might cause the model to either excessively disregard or over-rely on the context.
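The packing method under analysis can be sketched in a few lines; the greedy strategy, separator handling, and function name below are illustrative assumptions, and the sketch assumes each sample fits within `max_len`:

```python
# Greedy packing sketch: concatenate tokenized training samples into
# fixed-length sequences so that less compute is spent on padding.

def pack_samples(samples, max_len, sep_id=0):
    """Greedily pack lists of token ids into sequences of at most max_len tokens."""
    packed, current = [], []
    for sample in samples:
        # +1 accounts for the separator token inserted between samples
        extra = len(sample) + (1 if current else 0)
        if current and len(current) + extra > max_len:
            packed.append(current)  # flush the full sequence
            current = []
        if current:
            current.append(sep_id)
        current.extend(sample)
    if current:
        packed.append(current)
    return packed
```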
1 code implementation • 22 Sep 2024 • Aso Mahmudi, Borja Herce, Demian Inostroza Amestica, Andreas Scherbakov, Eduard Hovy, Ekaterina Vylomova
Linguistic fieldwork is an important component in language documentation and preservation.
1 code implementation • 16 Jun 2024 • Zhiwei Liu, Kailai Yang, Qianqian Xie, Christine de Kock, Sophia Ananiadou, Eduard Hovy
Misinformation is prevalent in various fields such as education, politics, health, etc., causing significant harm to society.
1 code implementation • 22 May 2024 • Sang Keun Choe, Hwijeen Ahn, Juhan Bae, Kewen Zhao, Minsoo Kang, Youngseog Chung, Adithya Pratapa, Willie Neiswanger, Emma Strubell, Teruko Mitamura, Jeff Schneider, Eduard Hovy, Roger Grosse, Eric Xing
Large language models (LLMs) are trained on a vast amount of human-written data, but data providers often remain uncredited.
1 code implementation • 28 Feb 2024 • Miao Li, Jey Han Lau, Eduard Hovy
Modern natural language generation systems with Large Language Models (LLMs) exhibit the capability to generate a plausible summary of multiple documents; however, it is uncertain if they truly possess the capability of information consolidation to generate summaries, especially on documents with opinionated information.
1 code implementation • 9 Dec 2023 • Shuhe Wang, Beiming Cao, Shengyu Zhang, Xiaoya Li, Jiwei Li, Fei Wu, Guoyin Wang, Eduard Hovy
Due to the lack of a large collection of high-quality labeled sentence pairs with textual similarity scores, existing approaches for Semantic Textual Similarity (STS) mostly rely on unsupervised techniques or training signals that are only partially correlated with textual similarity, e.g., NLI-based datasets.
no code implementations • 8 Oct 2023 • Isabelle Augenstein, Timothy Baldwin, Meeyoung Cha, Tanmoy Chakraborty, Giovanni Luca Ciampaglia, David Corney, Renee DiResta, Emilio Ferrara, Scott Hale, Alon Halevy, Eduard Hovy, Heng Ji, Filippo Menczer, Ruben Miguez, Preslav Nakov, Dietram Scheufele, Shivam Sharma, Giovanni Zagni
The emergence of tools based on Large Language Models (LLMs), such as OpenAI's ChatGPT, Microsoft's Bing Chat, and Google's Bard, has garnered immense public attention.
1 code implementation • 22 May 2023 • Zhisong Zhang, Emma Strubell, Eduard Hovy
To address this challenge, we adopt an error estimator to adaptively decide the partial selection ratio according to the current model's capability.
1 code implementation • 2 May 2023 • Miao Li, Eduard Hovy, Jey Han Lau
We present PeerSum, a novel dataset for generating meta-reviews of scientific papers.
no code implementations • 10 Nov 2022 • Evangelia Spiliopoulou, Artidoro Pagnoni, Yonatan Bisk, Eduard Hovy
This paper investigates models of event implications.
1 code implementation • 18 Oct 2022 • Zhisong Zhang, Emma Strubell, Eduard Hovy
In this work, we provide a survey of active learning (AL) for its applications in natural language processing (NLP).
1 code implementation • 9 Oct 2022 • Steven Y. Feng, Vivek Khetan, Bogdan Sacaleanu, Anatole Gershman, Eduard Hovy
We motivate and introduce CHARD: Clinical Health-Aware Reasoning across Dimensions, to investigate the capability of text generation models to act as implicit clinical knowledge bases and generate free-flow textual explanations about various health-related conditions across several dimensions.
1 code implementation • COLING 2022 • Sedrick Scott Keh, Kevin Lu, Varun Gangal, Steven Y. Feng, Harsh Jhamtani, Malihe Alikhani, Eduard Hovy
To this end, we propose PINEAPPLE: Personifying INanimate Entities by Acquiring Parallel Personification data for Learning Enhanced generation.
no code implementations • 13 Sep 2022 • Sedrick Scott Keh, Steven Y. Feng, Varun Gangal, Malihe Alikhani, Eduard Hovy
Through automatic and human evaluation, as well as qualitative analysis, we show that PANCETTA generates novel, phonetically difficult, fluent, and semantically meaningful tongue twisters.
no code implementations • 25 May 2022 • Dheeraj Rajagopal, Siamak Shakeri, Cicero Nogueira dos Santos, Eduard Hovy, Chung-Ching Chang
Abstractive summarization systems based on pretrained language models often generate coherent but factually inconsistent sentences.
2 code implementations • 16 Dec 2021 • Revanth Gangi Reddy, Sai Chetan, Zhenhailong Wang, Yi R. Fung, Kathryn Conger, Ahmed Elsayed, Martha Palmer, Preslav Nakov, Eduard Hovy, Kevin Small, Heng Ji
In this work, we present NewsClaims, a new benchmark for attribute-aware claim detection in the news domain.
2 code implementations • 6 Dec 2021 • Kaustubh D. Dhole, Varun Gangal, Sebastian Gehrmann, Aadesh Gupta, Zhenhao Li, Saad Mahamood, Abinaya Mahendiran, Simon Mille, Ashish Shrivastava, Samson Tan, Tongshuang Wu, Jascha Sohl-Dickstein, Jinho D. Choi, Eduard Hovy, Ondrej Dusek, Sebastian Ruder, Sajant Anand, Nagender Aneja, Rabin Banjade, Lisa Barthe, Hanna Behnke, Ian Berlot-Attwell, Connor Boyle, Caroline Brun, Marco Antonio Sobrevilla Cabezudo, Samuel Cahyawijaya, Emile Chapuis, Wanxiang Che, Mukund Choudhary, Christian Clauss, Pierre Colombo, Filip Cornell, Gautier Dagan, Mayukh Das, Tanay Dixit, Thomas Dopierre, Paul-Alexis Dray, Suchitra Dubey, Tatiana Ekeinhor, Marco Di Giovanni, Tanya Goyal, Rishabh Gupta, Louanes Hamla, Sang Han, Fabrice Harel-Canada, Antoine Honore, Ishan Jindal, Przemyslaw K. Joniak, Denis Kleyko, Venelin Kovatchev, Kalpesh Krishna, Ashutosh Kumar, Stefan Langer, Seungjae Ryan Lee, Corey James Levinson, Hualou Liang, Kaizhao Liang, Zhexiong Liu, Andrey Lukyanenko, Vukosi Marivate, Gerard de Melo, Simon Meoni, Maxime Meyer, Afnan Mir, Nafise Sadat Moosavi, Niklas Muennighoff, Timothy Sum Hon Mun, Kenton Murray, Marcin Namysl, Maria Obedkova, Priti Oli, Nivranshu Pasricha, Jan Pfister, Richard Plant, Vinay Prabhu, Vasile Pais, Libo Qin, Shahab Raji, Pawan Kumar Rajpoot, Vikas Raunak, Roy Rinberg, Nicolas Roberts, Juan Diego Rodriguez, Claude Roux, Vasconcellos P. H. S., Ananya B. Sai, Robin M. Schmidt, Thomas Scialom, Tshephisho Sefara, Saqib N. Shamsi, Xudong Shen, Haoyue Shi, Yiwen Shi, Anna Shvets, Nick Siegel, Damien Sileo, Jamie Simon, Chandan Singh, Roman Sitelew, Priyank Soni, Taylor Sorensen, William Soto, Aman Srivastava, KV Aditya Srivatsa, Tony Sun, Mukund Varma T, A Tabassum, Fiona Anting Tan, Ryan Teehan, Mo Tiwari, Marie Tolkiehn, Athena Wang, Zijian Wang, Gloria Wang, Zijie J. Wang, Fuxuan Wei, Bryan Wilie, Genta Indra Winata, Xinyi Wu, Witold Wydmański, Tianbao Xie, Usama Yaseen, Michael A. Yee, Jing Zhang, Yue Zhang
Data augmentation is an important component in the robustness evaluation of models in natural language processing (NLP) and in enhancing the diversity of the data they are trained on.
no code implementations • 31 Oct 2021 • Dheeraj Rajagopal, Vivek Khetan, Bogdan Sacaleanu, Anatole Gershman, Andrew Fano, Eduard Hovy
To enable better controllability, we propose to study commonsense reasoning as a template-filling task (TemplateCSR), where the language model fills reasoning templates with the given constraints as control factors.
1 code implementation • 24 Oct 2021 • Aman Madaan, Niket Tandon, Dheeraj Rajagopal, Peter Clark, Yiming Yang, Eduard Hovy
Defeasible reasoning is the mode of reasoning where conclusions can be overturned by taking into account new evidence.
no code implementations • 20 Oct 2021 • Xiaofei Sun, Diyi Yang, Xiaoya Li, Tianwei Zhang, Yuxian Meng, Han Qiu, Guoyin Wang, Eduard Hovy, Jiwei Li
Neural network models have achieved state-of-the-art performances in a wide range of natural language processing (NLP) tasks.
1 code implementation • EMNLP 2021 • Harsh Jhamtani, Varun Gangal, Eduard Hovy, Taylor Berg-Kirkpatrick
Humans often employ figurative language use in communication, including during interactions with dialog systems.
1 code implementation • Findings (EMNLP) 2021 • Yohan Jo, Haneul Yoo, JinYeong Bak, Alice Oh, Chris Reed, Eduard Hovy
Finding counterevidence to statements is key to many tasks, including counterargument generation.
1 code implementation • AKBC Workshop CSKB 2021 • Steven Y. Feng, Kevin Lu, Zhuofu Tao, Malihe Alikhani, Teruko Mitamura, Eduard Hovy, Varun Gangal
We investigate the use of multimodal information contained in images as an effective method for enhancing the commonsense of Transformer models for text generation.
1 code implementation • INLG (ACL) 2021 • Steven Y. Feng, Jessica Huynh, Chaitanya Narisetty, Eduard Hovy, Varun Gangal
We motivate and propose a suite of simple but effective improvements for concept-to-text generation called SAPPHIRE: Set Augmentation and Post-hoc PHrase Infilling and REcombination.
1 code implementation • ACL 2021 • Dongyeop Kang, Eduard Hovy
This paper provides the benchmark corpus (XSLUE) that combines existing datasets and collects a new one for sentence-level cross-style language understanding and evaluation.
1 code implementation • ACL 2021 • Ruifan Li, Hao Chen, Fangxiang Feng, Zhanyu Ma, Xiaojie Wang, Eduard Hovy
To overcome these challenges, in this paper, we propose a dual graph convolutional networks (DualGCN) model that considers the complementarity of syntax structures and semantic correlations simultaneously.
Aspect-Based Sentiment Analysis (ABSA)
no code implementations • ACL (SIGMORPHON) 2021 • Maria Ryskina, Eduard Hovy, Taylor Berg-Kirkpatrick, Matthew R. Gormley
Traditionally, character-level transduction problems have been solved with finite-state models designed to encode structural and linguistic knowledge of the underlying process, whereas recent approaches rely on the power and flexibility of sequence-to-sequence models with attention.
1 code implementation • Findings (ACL) 2021 • Varun Gangal, Harsh Jhamtani, Eduard Hovy, Taylor Berg-Kirkpatrick
Multiple different responses are often plausible for a given open domain dialog context.
1 code implementation • ACL 2021 • Rishabh Bhardwaj, Navonil Majumder, Soujanya Poria, Eduard Hovy
In this work, we provide deeper theoretical analysis and empirical observations on the identifiability of attention weights.
1 code implementation • 17 May 2021 • Yohan Jo, Seojin Bang, Chris Reed, Eduard Hovy
While argument mining has achieved significant success in classifying argumentative relations between statements (support, attack, and neutral), we have a limited computational understanding of logical mechanisms that constitute those relations.
1 code implementation • AKBC Workshop CSKB 2021 • Aman Madaan, Dheeraj Rajagopal, Niket Tandon, Yiming Yang, Eduard Hovy
Defeasible reasoning is the mode of reasoning where conclusions can be overturned by taking into account new evidence.
1 code implementation • Findings (ACL) 2021 • Steven Y. Feng, Varun Gangal, Jason Wei, Sarath Chandar, Soroush Vosoughi, Teruko Mitamura, Eduard Hovy
In this paper, we present a comprehensive and unifying survey of data augmentation for NLP by summarizing the literature in a structured manner.
1 code implementation • 14 Apr 2021 • Varun Gangal, Steven Y. Feng, Malihe Alikhani, Teruko Mitamura, Eduard Hovy
In this paper, we propose and investigate the task of Narrative Reordering (NAREOR) which involves rewriting a given story in a different narrative order while preserving its plot.
2 code implementations • NAACL 2021 • Yiwei Lyu, Paul Pu Liang, Hai Pham, Eduard Hovy, Barnabás Póczos, Ruslan Salakhutdinov, Louis-Philippe Morency
Many of the existing style transfer benchmarks primarily focus on individual high-level semantic changes (e.g., positive to negative), which enable controllability at a high level but do not offer fine-grained control involving sentence structure, emphasis, and content of the sentence.
1 code implementation • CSRR (ACL) 2022 • Dheeraj Rajagopal, Aman Madaan, Niket Tandon, Yiming Yang, Shrimai Prabhumoye, Abhilasha Ravichander, Peter Clark, Eduard Hovy
Recently, models have been shown to predict the effects of unexpected situations, e.g., would cloudy skies help or hinder plant growth?
2 code implementations • EMNLP 2021 • Dheeraj Rajagopal, Vidhisha Balachandran, Eduard Hovy, Yulia Tsvetkov
We introduce SelfExplain, a novel self-explaining model that explains a text classifier's predictions using phrase-based concepts.
2 code implementations • EACL 2021 • Abhilasha Ravichander, Siddharth Dalmia, Maria Ryskina, Florian Metze, Eduard Hovy, Alan W Black
When Question-Answering (QA) systems are deployed in the real world, users query them through a variety of interfaces, such as speaking to voice assistants, typing questions into a search engine, or even translating questions to languages supported by the QA system.
1 code implementation • 1 Feb 2021 • Yanai Elazar, Nora Kassner, Shauli Ravfogel, Abhilasha Ravichander, Eduard Hovy, Hinrich Schütze, Yoav Goldberg
In this paper we study the question: Are Pretrained Language Models (PLMs) consistent with respect to factual knowledge?
1 code implementation • Joint Conference on Lexical and Computational Semantics 2020 • Abhilasha Ravichander, Eduard Hovy, Kaheer Suleman, Adam Trischler, Jackie Chi Kit Cheung
In particular, we demonstrate through a simple consistency probe that the ability to correctly retrieve hypernyms in cloze tasks, as used in prior work, does not correspond to systematic knowledge in BERT.
1 code implementation • EMNLP (BlackboxNLP) 2020 • Andrew Runge, Eduard Hovy
Neural methods for embedding entities are typically extrinsically evaluated on downstream tasks and, more recently, intrinsically using probing tasks.
1 code implementation • EMNLP 2020 • Xiang Kong, Zhisong Zhang, Eduard Hovy
In this work, we introduce a novel local autoregressive translation (LAT) mechanism into non-autoregressive translation (NAT) models so as to capture local dependencies among target outputs.
Ranked #2 on Machine Translation on WMT2016 English-Romanian
no code implementations • Findings of the Association for Computational Linguistics 2020 • Evangelia Spiliopoulou, Salvador Medina Maza, Eduard Hovy, Alexander Hauptmann
Furthermore, the classification of information in real-time systems requires training on out-of-domain data, as we do not have any data from a new emerging crisis.
1 code implementation • Findings of the Association for Computational Linguistics 2020 • Zhisong Zhang, Xiang Kong, Lori Levin, Eduard Hovy
Recently, pre-training contextualized encoders with language model (LM) objectives has been shown an effective semi-supervised method for structured prediction.
no code implementations • EMNLP 2020 • Niket Tandon, Keisuke Sakaguchi, Bhavana Dalvi Mishra, Dheeraj Rajagopal, Peter Clark, Michal Guerquin, Kyle Richardson, Eduard Hovy
Our solution is a new task formulation where, given just a procedural text as input, the task is to generate a set of state change tuples (entity, attribute, before-state, after-state) for each step, where the entity, attribute, and state values must be predicted from an open vocabulary.
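The open-vocabulary tuple output can be sketched as a small data structure; the example step and predicted values below are invented for illustration, not drawn from the paper's dataset:

```python
# Sketch of the (entity, attribute, before-state, after-state) output format:
# for each step of a procedural text, a model emits a set of state changes.

from dataclasses import dataclass

@dataclass(frozen=True)  # frozen makes instances hashable, so they can form a set
class StateChange:
    entity: str
    attribute: str
    before: str
    after: str

step = "Pour the water into the kettle and turn it on."
predicted = {
    StateChange("water", "location", "container", "kettle"),
    StateChange("water", "temperature", "cold", "hot"),
}
```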
no code implementations • 22 Oct 2020 • Aman Madaan, Dheeraj Rajagopal, Yiming Yang, Abhilasha Ravichander, Eduard Hovy, Shrimai Prabhumoye
Reasoning about events and tracking their influences is fundamental to understanding processes.
no code implementations • 14 Oct 2020 • Yuxian Meng, Chun Fan, Zijun Sun, Eduard Hovy, Fei Wu, Jiwei Li
Any prediction from a model is made by a combination of learning history and test stimuli.
no code implementations • EMNLP 2020 • Dongyeop Kang, Eduard Hovy
To address that, we propose a self-supervised text planner SSPlanner that predicts what to say first (content prediction), then guides the pretrained language model (surface realization) using the predicted content.
no code implementations • 8 Oct 2020 • Varun Gangal, Eduard Hovy
Next, we find that linear combinations of these heads, estimated with approx. 11% of available total event argument detection supervision, can push performance well higher for some roles, the highest two being Victim (68.29% accuracy) and Artifact (58.82% accuracy).
1 code implementation • EMNLP 2020 • Yohan Jo, Seojin Bang, Emaad Manzoor, Eduard Hovy, Chris Reed
Finding attackable sentences in an argument is the first step toward successful refutation in argumentation.
1 code implementation • EMNLP 2020 • Yohan Jo, Jacky Visser, Chris Reed, Eduard Hovy
Our study may inform future research on argument mining and the semantics of these rhetorical devices in argumentation.
2 code implementations • EMNLP (DeeLIO) 2020 • Steven Y. Feng, Varun Gangal, Dongyeop Kang, Teruko Mitamura, Eduard Hovy
We also examine the relationship between the amount of augmentation and the quality of the generated text.
no code implementations • ICLR 2021 • Divyansh Kaushik, Amrith Setlur, Eduard Hovy, Zachary C. Lipton
In attempts to produce ML models less reliant on spurious patterns in NLP datasets, researchers have recently proposed curating counterfactually augmented data (CAD) via a human-in-the-loop process in which given some documents and their (initial) labels, humans must revise the text to make a counterfactual label applicable.
no code implementations • ACL 2020 • Zhisong Zhang, Xiang Kong, Zhengzhong Liu, Xuezhe Ma, Eduard Hovy
It remains a challenge to detect implicit arguments, calling for more future work of document-level modeling for this task.
1 code implementation • ACL 2020 • Shi Zong, Alan Ritter, Eduard Hovy
We present a number of linguistic metrics which are computed over text associated with people's predictions about the future including: uncertainty, readability, and emotion.
no code implementations • Findings of the Association for Computational Linguistics 2020 • Dheeraj Rajagopal, Niket Tandon, Bhavana Dalvi, Peter Clark, Eduard Hovy
We address the task of explaining the effects of perturbations in procedural text, an important test of process comprehension.
no code implementations • EACL 2021 • Abhilasha Ravichander, Yonatan Belinkov, Eduard Hovy
Although neural models have achieved impressive results on several NLP benchmarks, little is understood about the mechanisms they use to perform language tasks.
no code implementations • LREC 2020 • Yohan Jo, Elijah Mayfield, Chris Reed, Eduard Hovy
We introduce a corpus of the 2016 U.S. presidential debates and commentary, containing 4,648 argumentative propositions annotated with fine-grained proposition types.
1 code implementation • ACL 2020 • Xiang Kong, Varun Gangal, Eduard Hovy
We introduce SCDE, a dataset to evaluate the performance of computational models through sentence prediction.
Ranked #3 on Question Answering on SCDE
1 code implementation • ICLR 2021 • Xuezhe Ma, Xiang Kong, Shanghang Zhang, Eduard Hovy
In this work, we propose a new generative model that is capable of automatically decoupling global and local representations of images in an entirely unsupervised setting, by embedding a generative flow in the VAE framework to model the decoder.
1 code implementation • 11 Nov 2019 • Xiang Kong, Xianyang Chen, Eduard Hovy
Specifically, embeddings of entities and relationships are first decompressed to a more expressive and robust space by decompressing functions, then knowledge graph embedding models are trained in this new feature space.
Ranked #28 on Link Prediction on FB15k-237
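A minimal sketch of this decompression idea, assuming a linear decompression function and a DistMult-style bilinear scorer (both are assumptions for illustration; the paper's exact functions may differ):

```python
import numpy as np

# Store small entity embeddings, expand them through a shared decompression
# function into a larger feature space, then score triples in that space.

rng = np.random.default_rng(0)
n_ent, small_d, big_d = 5, 4, 16
E_small = rng.normal(size=(n_ent, small_d))  # compressed entity embeddings
W_dec = rng.normal(size=(small_d, big_d))    # decompression function (linear here)
r = rng.normal(size=big_d)                   # relation vector in the big space

def decompress(e_small):
    # Map a compressed embedding into the more expressive space
    return np.tanh(e_small @ W_dec)

def score(head, tail):
    """DistMult-style triple score computed in the decompressed space."""
    return float(np.sum(decompress(E_small[head]) * r * decompress(E_small[tail])))
```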
13 code implementations • CVPR 2020 • Qizhe Xie, Minh-Thang Luong, Eduard Hovy, Quoc V. Le
During the learning of the student, we inject noise such as dropout, stochastic depth, and data augmentation via RandAugment to the student so that the student generalizes better than the teacher.
Ranked #16 on Image Classification on ImageNet ReaL (using extra training data)
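The teacher-student loop described above can be sketched with a toy one-dimensional classifier, using Gaussian input noise as a stand-in for dropout, stochastic depth, and RandAugment (the toy model and all names are illustrative, not the paper's implementation):

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_threshold(x, y):
    """'Train' a classifier: threshold at the midpoint of the two class means."""
    return (x[y == 0].mean() + x[y == 1].mean()) / 2.0

def predict(threshold, x):
    return (x > threshold).astype(int)

# Small labeled set plus a larger unlabeled set from the same two modes
x_lab = np.array([-1.2, -0.8, 0.9, 1.1])
y_lab = np.array([0, 0, 1, 1])
x_unlab = np.concatenate([rng.normal(-1, 0.3, 50), rng.normal(1, 0.3, 50)])

teacher = fit_threshold(x_lab, y_lab)     # step 1: train the teacher on labels
pseudo = predict(teacher, x_unlab)        # step 2: pseudo-label unlabeled data
# step 3: train the student on labeled + pseudo-labeled data, with input noise
x_all = np.concatenate([x_lab, x_unlab + rng.normal(0, 0.1, x_unlab.size)])
y_all = np.concatenate([y_lab, pseudo])
student = fit_threshold(x_all, y_all)
```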
2 code implementations • 9 Nov 2019 • Dongyeop Kang, Eduard Hovy
This paper provides the benchmark corpus (xSLUE) that combines existing datasets and collects a new one for sentence-level cross-style language understanding and evaluation.
1 code implementation • WS 2019 • Vaibhav Vaibhav, Raghuram Mandyam Annasamy, Eduard Hovy
The rising growth of fake news and misleading information through online media outlets demands an automatic method for detecting such news articles.
2 code implementations • ICLR 2020 • Divyansh Kaushik, Eduard Hovy, Zachary C. Lipton
While classifiers trained on either original or manipulated data alone are sensitive to spurious features (e.g., mentions of genre), models trained on the combined data are less sensitive to this signal.
1 code implementation • COLING 2020 • Evangelia Spiliopoulou, Artidoro Pagnoni, Eduard Hovy
Advances in word representations have shown tremendous improvements in downstream NLP tasks, but lack semantic interpretability.
3 code implementations • 5 Sep 2019 • Takashi Shibuya, Eduard Hovy
When an entity name contains other names within it, the identification of all combinations of names can become difficult and expensive.
Ranked #2 on Nested Mention Recognition on ACE 2005
2 code implementations • IJCNLP 2019 • Xuezhe Ma, Chunting Zhou, Xian Li, Graham Neubig, Eduard Hovy
Most sequence-to-sequence (seq2seq) models are autoregressive; they generate each token by conditioning on previously generated tokens.
Ranked #3 on Machine Translation on WMT2016 English-Romanian
1 code implementation • IJCNLP 2019 • Dongyeop Kang, Varun Gangal, Eduard Hovy
Stylistic variation in text needs to be studied with different aspects including the writer's personal traits, interpersonal relations, rhetoric, and more.
1 code implementation • IJCNLP 2019 • Taehee Jung, Dongyeop Kang, Lucas Mentch, Eduard Hovy
We find that while position exhibits substantial bias in news articles, this is not the case, for example, with academic papers and meeting minutes.
1 code implementation • IJCNLP 2019 • Dongyeop Kang, Hiroaki Hayashi, Alan W. Black, Eduard Hovy
In order to produce a coherent flow of text, we explore two forms of intersentential relations in a paragraph: one is a human-created linguistic relation that forms a structure (e.g., a discourse tree) and the other is a relation from latent representations learned from the sentences themselves.
no code implementations • WS 2019 • Dheeraj Rajagopal, Nidhi Vyas, Aditya Siddhant, Anirudha Rayasam, Niket Tandon, Eduard Hovy
Domain adaptation remains one of the most challenging aspects in the wide-spread use of Semantic Role Labeling (SRL) systems.
no code implementations • WS 2019 • Yohan Jo, Jacky Visser, Chris Reed, Eduard Hovy
Propositions are the basic units of an argument and the primary building blocks of most argument mining systems.
no code implementations • ACL 2019 • Aakanksha Naik, Abhilasha Ravichander, Carolyn Rose, Eduard Hovy
In this work, we show that existing embedding models are inadequate at constructing representations that capture salient aspects of mathematical meaning for numbers, which is important for language understanding.
1 code implementation • ACL 2019 • Naoki Otani, Eduard Hovy
In sentiment detection, the natural language processing community has focused on determining holders, facets, and valences, but has paid little attention to the reasons for sentiment decisions.
1 code implementation • ACL 2019 • Zhisong Zhang, Xuezhe Ma, Eduard Hovy
In this paper, we investigate the aspect of structured output modeling for the state-of-the-art graph-based neural dependency parser (Dozat and Manning, 2017).
no code implementations • NAACL 2019 • Diyi Yang, Jiaao Chen, Zichao Yang, Dan Jurafsky, Eduard Hovy
Modeling what makes a request persuasive - eliciting the desired response from a reader - is critical to the study of propaganda, behavioral economics, and advertising.
no code implementations • NAACL 2019 • Hiroshi Echizen'ya, Kenji Araki, Eduard Hovy
We designate this metric as Word Embedding-Based automatic MT evaluation using Word Position Information (WE_WPI).
no code implementations • NAACL 2019 • Pradeep Dasigi, Matt Gardner, Shikhar Murty, Luke Zettlemoyer, Eduard Hovy
Training semantic parsers from question-answer pairs typically involves searching over an exponentially large space of logical forms, and an unguided search can easily be misled by spurious logical forms that coincidentally evaluate to the correct answer.
1 code implementation • 8 May 2019 • Soujanya Poria, Navonil Majumder, Rada Mihalcea, Eduard Hovy
Emotion is intrinsic to humans and consequently emotion understanding is a key part of human-like artificial intelligence (AI).
Ranked #6 on Emotion Recognition in Conversation on EC
20 code implementations • NeurIPS 2020 • Qizhe Xie, Zihang Dai, Eduard Hovy, Minh-Thang Luong, Quoc V. Le
In this work, we present a new perspective on how to effectively noise unlabeled examples and argue that the quality of noising, specifically those produced by advanced data augmentation methods, plays a crucial role in semi-supervised learning.
Ranked #1 on Sentiment Analysis on Amazon Review Full
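The core consistency-training objective can be sketched as a KL divergence between the model's prediction on an unlabeled example and on its augmented version (the toy distributions below stand in for real model outputs):

```python
import numpy as np

# Consistency loss sketch: penalize divergence between the prediction on x
# (treated as a fixed target) and the prediction on an augmented aug(x).

def kl_divergence(p, q, eps=1e-12):
    p, q = np.asarray(p) + eps, np.asarray(q) + eps
    return float(np.sum(p * np.log(p / q)))

p_orig = np.array([0.7, 0.2, 0.1])  # model(x)
p_aug = np.array([0.6, 0.3, 0.1])   # model(aug(x)), e.g. after back-translation
consistency_loss = kl_divergence(p_orig, p_aug)
```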
no code implementations • 24 Feb 2019 • Aditi Chaudhary, Siddharth Dalmia, Junjie Hu, Xinjian Li, Austin Matthews, Aldrian Obaja Muis, Naoki Otani, Shruti Rijhwani, Zaid Sheikh, Nidhi Vyas, Xinyi Wang, Jiateng Xie, Ruochen Xu, Chunting Zhou, Peter J. Jansen, Yiming Yang, Lori Levin, Florian Metze, Teruko Mitamura, David R. Mortensen, Graham Neubig, Eduard Hovy, Alan W. Black, Jaime Carbonell, Graham V. Horwood, Shabnam Tafreshi, Mona Diab, Efsun S. Kayi, Noura Farra, Kathleen McKeown
This paper describes the ARIEL-CMU submissions to the Low Resource Human Language Technologies (LoReHLT) 2018 evaluations for the tasks Machine Translation (MT), Entity Discovery and Linking (EDL), and detection of Situation Frames in Text and Speech (SF Text and Speech).
2 code implementations • NeurIPS 2019 • Xuezhe Ma, Xiang Kong, Shanghang Zhang, Eduard Hovy
Flow-based generative models, conceptually attractive due to the tractability of both exact log-likelihood computation and latent-variable inference, and the efficiency of both training and sampling, have led to a number of impressive empirical successes and spawned many advanced variants and theoretical investigations.
Ranked #3 on Image Generation on CelebA 256x256
no code implementations • 22 Jan 2019 • Xiang Kong, Bohan Li, Graham Neubig, Eduard Hovy, Yiming Yang
In this work, we propose a method for neural dialogue response generation that allows not only generating semantically reasonable responses according to the dialogue history, but also explicitly controlling the sentiment of the response via sentiment labels.
1 code implementation • CONLL 2019 • Abhilasha Ravichander, Aakanksha Naik, Carolyn Rose, Eduard Hovy
Quantitative reasoning is a higher-order reasoning skill that any intelligent natural language understanding system can reasonably be expected to handle.
no code implementations • ICLR 2019 • Xuezhe Ma, Chunting Zhou, Eduard Hovy
Variational Autoencoder (VAE), a simple and effective deep generative model, has led to a number of impressive empirical successes and spawned many advanced variants and theoretical investigations.
no code implementations • 21 Nov 2018 • Xiang Kong, Zhaopeng Tu, Shuming Shi, Eduard Hovy, Tong Zhang
Although Neural Machine Translation (NMT) models have advanced the state of the art in machine translation, they face problems such as inadequate translation.
Ranked #34 on Machine Translation on WMT2014 English-German
2 code implementations • NAACL 2019 • Wasi Uddin Ahmad, Zhisong Zhang, Xuezhe Ma, Eduard Hovy, Kai-Wei Chang, Nanyun Peng
Different languages might have different word orders.
no code implementations • 1 Oct 2018 • Filip Ilievski, Eduard Hovy, Qizhe Xie, Piek Vossen
The human mind is a powerful multifunctional knowledge storage and management system that performs generalization, type inference, anomaly detection, stereotyping, and other tasks.
no code implementations • 27 Sep 2018 • Danish Pruthi, Mansi Gupta, Nitish Kumar Kulkarni, Graham Neubig, Eduard Hovy
Neural models achieve state-of-the-art performance due to their ability to extract salient features useful to downstream tasks.
1 code implementation • 25 Sep 2018 • Xiang Kong, Qizhe Xie, Zihang Dai, Eduard Hovy
Mixture of Softmaxes (MoS) has been shown to be effective at addressing the expressiveness limitation of Softmax-based models.
Ranked #18 on Machine Translation on WMT2014 English-French
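The Mixture of Softmaxes in the entry above replaces a single output softmax with a weighted mixture of K softmaxes, raising the rank of the log-probability matrix. A minimal NumPy sketch of the forward computation; the variable names and shapes are illustrative, not taken from the paper's code:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def mixture_of_softmaxes(h, W_k, W_prior, E):
    """Mixture of Softmaxes output distribution.

    h:       (d,)      context vector
    W_k:     (K, d, d) per-component context projections
    W_prior: (K, d)    mixture-weight projection
    E:       (V, d)    output embedding matrix
    """
    pi = softmax(W_prior @ h)  # (K,) mixture weights
    # Each component is its own softmax over the vocabulary.
    comps = np.stack([softmax(E @ np.tanh(Wk @ h)) for Wk in W_k])  # (K, V)
    return pi @ comps  # (V,) mixed distribution

rng = np.random.default_rng(0)
d, K, V = 8, 3, 20
p = mixture_of_softmaxes(rng.normal(size=d),
                         rng.normal(size=(K, d, d)),
                         rng.normal(size=(K, d)),
                         rng.normal(size=(V, d)))
```

Because the mixture is taken in probability space rather than logit space, the resulting log-probability matrix is no longer constrained to the low rank of a single softmax.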
1 code implementation • EMNLP 2018 • Zhengzhong Liu, Chenyan Xiong, Teruko Mitamura, Eduard Hovy
Our analyses demonstrate that our neural model captures interesting connections between salience and discourse unit relations (e.g., scripts and frame structures).
no code implementations • COLING 2018 • Zhengzhong Liu, Teruko Mitamura, Eduard Hovy
In this paper, we study two types of relation: Event Coreference and Event Sequencing.
1 code implementation • ACL 2018 • Harsh Jhamtani, Varun Gangal, Eduard Hovy, Graham Neubig, Taylor Berg-Kirkpatrick
This paper examines the problem of generating natural language descriptions of chess games.
no code implementations • 13 Jun 2018 • Zhengzhong Liu, Teruko Mitamura, Eduard Hovy
In this paper, we study two types of relation: Event Coreference and Event Sequencing.
1 code implementation • ACL 2018 • Dongyeop Kang, Tushar Khot, Ashish Sabharwal, Eduard Hovy
We consider the problem of learning textual entailment models with limited supervision (5K-10K training examples), and present two complementary approaches for it.
3 code implementations • ACL 2018 • Xuezhe Ma, Zecong Hu, Jingzhou Liu, Nanyun Peng, Graham Neubig, Eduard Hovy
Combining pointer networks (Vinyals et al., 2015) with an internal stack, the proposed model first reads and encodes the whole sentence, then builds the dependency tree top-down (from root to leaf) in a depth-first fashion.
Ranked #14 on Dependency Parsing on Penn Treebank
1 code implementation • ACL 2018 • Zihang Dai, Qizhe Xie, Eduard Hovy
In this work, we study the credit assignment problem in reward augmented maximum likelihood (RAML) learning, and establish a theoretical equivalence between the token-level counterpart of RAML and the entropy regularized reinforcement learning.
1 code implementation • NAACL 2018 • Dongyeop Kang, Waleed Ammar, Bhavana Dalvi, Madeleine van Zuylen, Sebastian Kohlmeier, Eduard Hovy, Roy Schwartz
In the first task, we show that simple models can predict whether a paper is accepted with up to 21% error reduction compared to the majority baseline.
no code implementations • ICLR 2018 • Qizhe Xie, Guokun Lai, Zihang Dai, Eduard Hovy
Cloze test is widely adopted in language exams to evaluate students' language proficiency.
2 code implementations • 23 Nov 2017 • Anant Subramanian, Danish Pruthi, Harsh Jhamtani, Taylor Berg-Kirkpatrick, Eduard Hovy
We propose a novel variant of denoising k-sparse autoencoders that generates highly efficient and interpretable distributed word representations (word embeddings), beginning with existing word representations from state-of-the-art methods like GloVe and word2vec.
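The entry above builds on k-sparse autoencoders, whose defining step is keeping only the k largest hidden activations so that each dimension becomes a sparse, interpretable feature. A minimal sketch of that encoding step, with illustrative names and shapes (not the paper's code, which also adds denoising):

```python
import numpy as np

def k_sparse_encode(x, W, b, k):
    """Hidden code of a k-sparse autoencoder: compute ReLU activations,
    then keep only the k largest and zero out the rest."""
    h = np.maximum(W @ x + b, 0.0)           # ReLU pre-activations
    if k < h.size:
        thresh = np.partition(h, -k)[-k]     # value of the k-th largest unit
        h = np.where(h >= thresh, h, 0.0)    # suppress everything below it
    return h

rng = np.random.default_rng(1)
h = k_sparse_encode(rng.normal(size=8), rng.normal(size=(16, 8)),
                    np.zeros(16), k=4)
```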
no code implementations • EMNLP 2018 • Qizhe Xie, Guokun Lai, Zihang Dai, Eduard Hovy
Cloze tests are widely adopted in language exams to evaluate students' language proficiency.
no code implementations • IJCNLP 2017 • Jiarui Xu, Xuezhe Ma, Chen-Tse Tsai, Eduard Hovy
This paper aims to provide an effective tool for conversion between Simplified Chinese and Traditional Chinese.
no code implementations • WS 2017 • Bahar Salehi, Dirk Hovy, Eduard Hovy, Anders Søgaard
Geolocation is the task of identifying a social media user's primary location, and in natural language processing there is a growing literature on the extent to which automated analysis of social media posts can help.
1 code implementation • WS 2017 • Harsh Jhamtani, Varun Gangal, Eduard Hovy, Eric Nyberg
Variations in writing styles are commonly used to adapt the content to a specific context, audience, or purpose.
no code implementations • EMNLP 2017 • Diyi Yang, Aaron Halfaker, Robert Kraut, Eduard Hovy
Most studies on human editing focus merely on syntactic revision operations, failing to capture the intentions behind revision changes, which are essential for facilitating the single and collaborative writing process.
no code implementations • SEMEVAL 2017 • Sujay Kumar Jauhar, Eduard Hovy
Creating annotated frame lexicons such as PropBank and FrameNet is expensive and labor intensive.
no code implementations • WS 2017 • Hyeju Jang, Keith Maki, Eduard Hovy, Carolyn Rosé
In this paper, we present a novel and highly effective method for induction and application of metaphor frame templates as a step toward detecting metaphor in extended discourse.
no code implementations • WS 2017 • Evangelia Spiliopoulou, Eduard Hovy, Teruko Mitamura
Recent methods for Event Detection focus on Deep Learning for automatic feature generation and feature ranking.
1 code implementation • EMNLP 2017 • Dongyeop Kang, Varun Gangal, Ang Lu, Zheng Chen, Eduard Hovy
Our quantitative and human analyses show empirical evidence that our method successfully extracts meaningful causality relationships between time series with textual features and generates appropriate explanations for them.
2 code implementations • 4 Jul 2017 • Harsh Jhamtani, Varun Gangal, Eduard Hovy, Eric Nyberg
Variations in writing styles are commonly used to adapt the content to a specific context, audience, or purpose.
1 code implementation • EMNLP 2017 • Varun Gangal, Harsh Jhamtani, Graham Neubig, Eduard Hovy, Eric Nyberg
Portmanteaus are a word formation phenomenon where two words are combined to form a new word.
no code implementations • NeurIPS 2017 • Qizhe Xie, Zihang Dai, Yulun Du, Eduard Hovy, Graham Neubig
Learning meaningful representations that maintain the content necessary for a particular task while filtering away detrimental variations is a problem of great interest in machine learning.
no code implementations • ICLR 2018 • Xuezhe Ma, Pengcheng Yin, Jingzhou Liu, Graham Neubig, Eduard Hovy
Reward augmented maximum likelihood (RAML), a simple and effective learning framework to directly optimize towards the reward function in structured prediction tasks, has led to a number of impressive empirical successes.
1 code implementation • ACL 2017 • Pradeep Dasigi, Waleed Ammar, Chris Dyer, Eduard Hovy
Type-level word embeddings use the same set of parameters to represent all instances of a word regardless of its context, ignoring the inherent lexical ambiguity in language.
no code implementations • ACL 2017 • Qizhe Xie, Xuezhe Ma, Zihang Dai, Eduard Hovy
Knowledge bases are important resources for a variety of natural language processing tasks but suffer from incompleteness.
2 code implementations • EMNLP 2017 • Guokun Lai, Qizhe Xie, Hanxiao Liu, Yiming Yang, Eduard Hovy
We present RACE, a new dataset for benchmark evaluation of methods in the reading comprehension task.
2 code implementations • 17 Feb 2017 • Pradeep Dasigi, Gully A. P. C. Burns, Eduard Hovy, Anita de Waard
We propose a deep learning model for identifying structure within experiment narratives in scientific literature.
1 code implementation • 6 Feb 2017 • Zihang Dai, Amjad Almahairi, Philip Bachman, Eduard Hovy, Aaron Courville
In this paper, we propose to equip Generative Adversarial Networks with the ability to produce direct energy estimates for samples. Specifically, we propose a flexible adversarial training framework, and prove that this framework not only ensures the generator converges to the true data distribution, but also enables the discriminator to retain the density information at the global optimum.
Ranked #18 on Conditional Image Generation on CIFAR-10 (Inception score metric)
no code implementations • IJCNLP 2017 • Xuezhe Ma, Eduard Hovy
In this paper, we propose a probabilistic parsing model, which defines a proper conditional probability distribution over non-projective dependency trees for a given sentence, using neural representations as inputs.
no code implementations • 18 Nov 2016 • Volkan Cirik, Eduard Hovy, Louis-Philippe Morency
Curriculum Learning emphasizes the order of training instances in a computational learning setup.
no code implementations • 26 Sep 2016 • Xuezhe Ma, Yingkai Gao, Zhiting Hu, Yao-Liang Yu, Yuntian Deng, Eduard Hovy
Algorithmically, we show that our proposed measure of the inference gap can be used to regularize the standard dropout training objective, resulting in an explicit control of the gap.
no code implementations • ACL 2016 • Shomir Wilson, Florian Schaub, Aswarth Abhilash Dara, Frederick Liu, Sushain Cherivirala, Pedro Giovanni Leon, Mads Schaarup Andersen, Sebastian Zimmeck, Kanthashree Mysore Sathyendra, N. Cameron Russell, Thomas B. Norton, Eduard Hovy, Joel Reidenberg, Norman Sadeh
no code implementations • LREC 2016 • Diyi Yang, Aaron Halfaker, Robert Kraut, Eduard Hovy
In this work, we introduce a corpus for categorizing edit types in Wikipedia.
2 code implementations • ACL 2016 • Zhiting Hu, Xuezhe Ma, Zhengzhong Liu, Eduard Hovy, Eric Xing
Combining deep neural networks with structured logic rules is desirable to harness flexibility and reduce uninterpretability of the neural models.
Ranked #66 on Sentiment Analysis on SST-2 Binary classification
no code implementations • NAACL 2016 • Xuezhe Ma, Zhengzhong Liu, Eduard Hovy
Coreference resolution is one of the first stages in deep language understanding and its importance has been well recognized in the natural language processing community.
25 code implementations • ACL 2016 • Xuezhe Ma, Eduard Hovy
State-of-the-art sequence labeling systems traditionally require large amounts of task-specific knowledge in the form of hand-crafted features and data pre-processing.
Ranked #10 on Named Entity Recognition (NER) on CoNLL++
no code implementations • 12 Feb 2016 • Sujay Kumar Jauhar, Peter Turney, Eduard Hovy
We describe two new related resources that facilitate modelling of general knowledge reasoning in 4th grade science exams.
no code implementations • 6 Jul 2015 • Jiwei Li, Eduard Hovy
In this paper, we describe possible directions for deeper understanding, helping to bridge the gap between psychology / cognitive science and computational approaches in the sentiment/opinion analysis literature.
1 code implementation • NAACL 2016 • Jiwei Li, Xinlei Chen, Eduard Hovy, Dan Jurafsky
While neural networks have been successfully applied to many NLP tasks, the resulting vector-based models are very difficult to interpret.
no code implementations • 28 Feb 2015 • Jiwei Li, Eduard Hovy
It is commonly accepted that machine translation is a more complex task than part of speech tagging.
2 code implementations • HLT 2015 • Manaal Faruqui, Jesse Dodge, Sujay K. Jauhar, Chris Dyer, Eduard Hovy, Noah A. Smith
Vector space word representations are learned from distributional information of words in large corpora.
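The retrofitting entry above admits a compact sketch: each word vector is repeatedly pulled toward the average of its lexicon neighbors while staying anchored to its original position. The variable names below are illustrative, and the paper's per-edge weights α and β are simplified to scalar constants:

```python
import numpy as np

def retrofit(Q_hat, edges, alpha=1.0, beta=1.0, iters=10):
    """Retrofit word vectors to a semantic lexicon.

    Q_hat: (n, d) original vectors; edges: dict index -> neighbor indices.
    Each update sets a vector to a weighted average of its original
    position (weight alpha) and its current neighbors (weight beta each).
    """
    Q = Q_hat.copy()
    for _ in range(iters):
        for i, nbrs in edges.items():
            if not nbrs:
                continue  # words with no lexicon neighbors stay put
            num = alpha * Q_hat[i] + beta * sum(Q[j] for j in nbrs)
            Q[i] = num / (alpha + beta * len(nbrs))
    return Q

# Two "synonyms" (indices 0 and 1) are pulled together; index 2 is isolated.
Q_hat = np.array([[1.0, 0.0], [0.0, 1.0], [5.0, 5.0]])
edges = {0: [1], 1: [0], 2: []}
Q = retrofit(Q_hat, edges)
```

After retrofitting, lexicon neighbors end up closer together than their distributional vectors were, while unconnected words are unchanged.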
no code implementations • 30 Oct 2014 • Jiwei Li, Xun Wang, Eduard Hovy
While it has long been believed in psychology that weather somehow influences human mood, debates have gone on for decades about how the two are correlated.
no code implementations • LREC 2014 • Siddharth Jain, Archna Bhatia, Angelique Rein, Eduard Hovy
In this paper, we present a new corpus for social roles in online contentious discussions.
no code implementations • LREC 2014 • Zhengzhong Liu, Jun Araki, Eduard Hovy, Teruko Mitamura
Event coreference is an important task for full text analysis.
no code implementations • LREC 2014 • Jun Araki, Zhengzhong Liu, Eduard Hovy, Teruko Mitamura
First, we introduce a multiclass logistic regression model that can detect subevent relations in addition to full coreference.
no code implementations • LREC 2012 • Anselmo Peñas, Eduard Hovy, Pamela Forner, Álvaro Rodrigo, Richard Sutcliffe, Corina Forascu, Caroline Sporleder
The evaluation architecture is completely multilingual: test documents, questions, and their answers are identical in all the supported languages.