1 code implementation • ACL 2022 • Hideo Kobayashi, Yufang Hou, Vincent Ng
We examine the extent to which supervised bridging resolvers can be improved without employing additional labeled bridging data by proposing a novel constrained multi-task learning framework for bridging resolution, within which we (1) design cross-task consistency constraints to guide the learning process; (2) pre-train the entity coreference model in the multi-task framework on the large amount of publicly available coreference data; and (3) integrate prior knowledge encoded in rule-based resolvers.
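As a rough illustration of the idea (not the authors' implementation), a constrained multi-task objective of this kind can be sketched as a sum of per-task losses plus a penalty for violated cross-task consistency constraints; the task names and weights below are hypothetical:

```python
# Hypothetical sketch of a constrained multi-task objective: per-task losses
# plus a penalty whenever cross-task predictions are inconsistent (e.g., a
# mention predicted as a bridging anaphor should not simultaneously be
# predicted as coreferent with its antecedent candidate).

def multitask_loss(task_losses, violations, penalty_weight=1.0):
    """Combine per-task losses with a cross-task consistency penalty.

    task_losses: dict mapping task name -> scalar loss
    violations: number of cross-task consistency constraints violated
    penalty_weight: strength of the consistency penalty (a hyperparameter)
    """
    base = sum(task_losses.values())
    return base + penalty_weight * violations

# Example: three tasks trained jointly, two constraints violated.
loss = multitask_loss(
    {"bridging": 0.7, "coreference": 0.4, "info_status": 0.2},
    violations=2,
    penalty_weight=0.5,
)
```

In practice the penalty would be computed from model outputs rather than passed in as a count; this sketch only shows how the constraint term enters the joint objective.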
no code implementations • ACL 2022 • Ying Xu, Dakuo Wang, Mo Yu, Daniel Ritchie, Bingsheng Yao, Tongshuang Wu, Zheng Zhang, Toby Li, Nora Bradford, Branda Sun, Tran Hoang, Yisi Sang, Yufang Hou, Xiaojuan Ma, Diyi Yang, Nanyun Peng, Zhou Yu, Mark Warschauer
Through benchmarking with QG models, we show that the QG model trained on FairytaleQA is capable of asking high-quality and more diverse questions.
no code implementations • LNLS (ACL) 2022 • Ryokan Ri, Yufang Hou, Radu Marinescu, Akihiro Kishimoto
When mapping a natural language instruction to a sequence of actions, it is often useful to identify sub-tasks in the instruction.
no code implementations • NAACL (sdp) 2021 • Khalid Al Khatib, Tirthankar Ghosal, Yufang Hou, Anita de Waard, Dayne Freitag
Argument mining targets structures in natural language related to interpretation and persuasion which are central to scientific communication.
1 code implementation • ACL 2022 • Zhenjie Zhao, Yufang Hou, Dakuo Wang, Mo Yu, Chengzhong Liu, Xiaojuan Ma
Generating educational questions of fairytales or storybooks is vital for improving children's literacy ability.
1 code implementation • 26 Mar 2022 • Ying Xu, Dakuo Wang, Mo Yu, Daniel Ritchie, Bingsheng Yao, Tongshuang Wu, Zheng Zhang, Toby Jia-Jun Li, Nora Bradford, Branda Sun, Tran Bao Hoang, Yisi Sang, Yufang Hou, Xiaojuan Ma, Diyi Yang, Nanyun Peng, Zhou Yu, Mark Warschauer
Through benchmarking with QG models, we show that the QG model trained on FairytaleQA is capable of asking high-quality and more diverse questions.
Ranked #1 on Question Generation on FairytaleQA
no code implementations • EMNLP (ArgMining) 2021 • Roni Friedman, Lena Dankin, Yufang Hou, Ranit Aharonov, Yoav Katz, Noam Slonim
We describe the 2021 Key Point Analysis shared task (KPA-2021), which we organized as part of the 8th Workshop on Argument Mining (ArgMining 2021) at EMNLP 2021.
1 code implementation • NeurIPS 2021 • Hoang Thanh Lam, Gabriele Picco, Yufang Hou, Young-suk Lee, Lam M. Nguyen, Dzung T. Phan, Vanessa López, Ramon Fernandez Astudillo
In many machine learning tasks, models are trained to predict structured data such as graphs.
Ranked #2 on AMR Parsing on LDC2020T02
1 code implementation • Findings (EMNLP) 2021 • Yufang Hou
In this paper, we propose an end-to-end neural approach for information status classification.
1 code implementation • ACL 2021 • Khalid Al Khatib, Lukas Trautner, Henning Wachsmuth, Yufang Hou, Benno Stein
Generating high-quality arguments, while being challenging, may benefit a wide range of downstream applications, such as writing assistants and argument search engines.
no code implementations • 2 Jun 2021 • Ishani Mondal, Yufang Hou, Charles Jochim
This paper studies the end-to-end construction of an NLP Knowledge Graph (KG) from scientific papers.
1 code implementation • NAACL 2021 • Edward Sun, Yufang Hou, Dakuo Wang, Yunfeng Zhang, Nancy X. R. Wang
Presentations are critical for communication in all areas of our lives, yet the creation of slide decks is often tedious and time-consuming.
no code implementations • NAACL 2021 • Onkar Pandit, Yufang Hou
We probe pre-trained transformer language models for bridging inference.
2 code implementations • Findings (EMNLP) 2021 • Xuye Liu, Dakuo Wang, April Wang, Yufang Hou, Lingfei Wu
Jupyter notebooks allow data scientists to write machine learning code together with its documentation in cells.
no code implementations • ACL (GEM) 2021 • Sebastian Gehrmann, Tosin Adewumi, Karmanya Aggarwal, Pawan Sasanka Ammanamanchi, Aremu Anuoluwapo, Antoine Bosselut, Khyathi Raghavi Chandu, Miruna Clinciu, Dipanjan Das, Kaustubh D. Dhole, Wanyu Du, Esin Durmus, Ondřej Dušek, Chris Emezue, Varun Gangal, Cristina Garbacea, Tatsunori Hashimoto, Yufang Hou, Yacine Jernite, Harsh Jhamtani, Yangfeng Ji, Shailza Jolly, Mihir Kale, Dhruv Kumar, Faisal Ladhak, Aman Madaan, Mounica Maddela, Khyati Mahajan, Saad Mahamood, Bodhisattwa Prasad Majumder, Pedro Henrique Martins, Angelina McMillan-Major, Simon Mille, Emiel van Miltenburg, Moin Nadeem, Shashi Narayan, Vitaly Nikolaev, Rubungo Andre Niyongabo, Salomey Osei, Ankur Parikh, Laura Perez-Beltrachini, Niranjan Ramesh Rao, Vikas Raunak, Juan Diego Rodriguez, Sashank Santhanam, João Sedoc, Thibault Sellam, Samira Shaikh, Anastasia Shimorina, Marco Antonio Sobrevilla Cabezudo, Hendrik Strobelt, Nishant Subramani, Wei Xu, Diyi Yang, Akhila Yerukola, Jiawei Zhou
We introduce GEM, a living benchmark for natural language Generation (NLG), its Evaluation, and Metrics.
Ranked #1 on Data-to-Text Generation on WebNLG ru
1 code implementation • EACL 2021 • Yufang Hou, Charles Jochim, Martin Gleize, Francesca Bonin, Debasis Ganguly
Tasks, Datasets and Evaluation Metrics are important concepts for understanding experimental scientific papers.
1 code implementation • COLING 2020 • Yufang Hou
Previous work on bridging anaphora recognition (Hou et al., 2013a) casts the problem as a subtask of learning fine-grained information status (IS).
no code implementations • LREC 2020 • Francesca Bonin, Martin Gleize, Ailbhe Finnerty, Candice Moore, Charles Jochim, Emma Norris, Yufang Hou, Alison J. Wright, Debasis Ganguly, Emily Hayes, Silje Zink, Alessandra Pascale, Pol Mac Aonghusa, Susan Michie
Due to the fast pace at which research reports in behaviour change are published, researchers, consultants and policymakers would benefit from more automatic ways to process these reports.
1 code implementation • ACL 2020 • Yufang Hou
Most previous studies on bridging anaphora resolution (Poesio et al., 2004; Hou et al., 2013b; Hou, 2018a) use the pairwise model to tackle the problem and assume that the gold mention information is given.
no code implementations • 25 Nov 2019 • Liat Ein-Dor, Eyal Shnarch, Lena Dankin, Alon Halfon, Benjamin Sznajder, Ariel Gera, Carlos Alzate, Martin Gleize, Leshem Choshen, Yufang Hou, Yonatan Bilu, Ranit Aharonov, Noam Slonim
One of the main tasks in argument mining is the retrieval of argumentative content pertaining to a given topic.
no code implementations • IJCNLP 2019 • Shai Erera, Michal Shmueli-Scheuer, Guy Feigenblat, Ora Peled Nakash, Odellia Boni, Haggai Roitman, Doron Cohen, Bar Weiner, Yosi Mass, Or Rivlin, Guy Lev, Achiya Jerbi, Jonathan Herzig, Yufang Hou, Charles Jochim, Martin Gleize, Francesca Bonin, David Konopnicki
We present a novel system providing summaries for Computer Science publications.
no code implementations • 13 Aug 2019 • Yufang Hou
Previous work on bridging anaphora recognition (Hou et al., 2013a) casts the problem as a subtask of learning fine-grained information status (IS).
1 code implementation • ACL 2019 • Yufang Hou, Charles Jochim, Martin Gleize, Francesca Bonin, Debasis Ganguly
While the fast-paced inception of novel tasks and new datasets helps foster active research in a community towards interesting directions, keeping track of the abundance of research activity in different areas on different datasets is likely to become increasingly difficult.
no code implementations • WS 2019 • Yufang Hou, Debasis Ganguly, Lea A. Deleris, Francesca Bonin
Population age information is an essential characteristic of clinical trials.
no code implementations • EMNLP 2018 • Yufang Hou
Additionally, we further improve the results for bridging anaphora resolution reported in Hou (2018) by combining our simple deterministic approach with Hou et al. (2013b)'s best system MLN II.
no code implementations • ACL 2018 • Eyal Shnarch, Carlos Alzate, Lena Dankin, Martin Gleize, Yufang Hou, Leshem Choshen, Ranit Aharonov, Noam Slonim
We propose a methodology to blend high quality but scarce strong labeled data with noisy but abundant weak labeled data during the training of neural networks.
no code implementations • CL 2018 • Yufang Hou, Katja Markert, Michael Strube
The second stage, bridging antecedent selection, finds the antecedents for all predicted bridging anaphors.
no code implementations • NAACL 2018 • Léa Deleris, Francesca Bonin, Elizabeth Daly, Stéphane Deparis, Yufang Hou, Charles Jochim, Yassine Lassoued, Killian Levacher
Having an understanding of interpersonal relationships is helpful in many contexts.
no code implementations • NAACL 2018 • Yufang Hou
Most current models of word representations (e.g., GloVe) have successfully captured fine-grained semantics.
no code implementations • WS 2017 • Yufang Hou, Charles Jochim
In this paper, we address the problem of argument relation classification where argument units are from different texts.
no code implementations • ACL 2017 • Henning Wachsmuth, Nona Naderi, Ivan Habernal, Yufang Hou, Graeme Hirst, Iryna Gurevych, Benno Stein
Argumentation quality is viewed differently in argumentation theory and in practical assessment approaches.
no code implementations • EACL 2017 • Henning Wachsmuth, Nona Naderi, Yufang Hou, Yonatan Bilu, Vinodkumar Prabhakaran, Tim Alberdingk Thijm, Graeme Hirst, Benno Stein
Research on computational argumentation faces the problem of how to automatically assess the quality of an argument or argumentation.
no code implementations • COLING 2016 • Yufang Hou
Information status plays an important role in discourse processing.