Search Results for author: Xiang Ren

Found 139 papers, 80 papers with code

Using Word Embedding to Reveal Monetary Policy Explanation Changes

no code implementations EMNLP (ECONLP) 2021 Akira Matsui, Xiang Ren, Emilio Ferrara

Documents have been an essential tool of communication for governments to announce their policy operations.

Sentiment Analysis

Modality-specific Distillation

no code implementations NAACL (maiworkshop) 2021 Woojeong Jin, Maziar Sanjabi, Shaoliang Nie, Liang Tan, Xiang Ren, Hamed Firooz

In this paper, we propose modality-specific distillation (MSD) to effectively transfer knowledge from a teacher on multimodal datasets.

Knowledge Distillation Meta-Learning
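For context, the generic knowledge-distillation objective that MSD specializes per modality can be sketched in a few lines. This is an illustrative NumPy version of the standard Hinton-style temperature-softened KL loss (function names are ours; the paper's modality-specific weighting is not reproduced here):

```python
import numpy as np

def softmax(z, T=1.0):
    """Temperature-softened softmax."""
    z = np.asarray(z, dtype=float) / T
    z = z - z.max()  # numerical stability
    e = np.exp(z)
    return e / e.sum()

def distillation_loss(student_logits, teacher_logits, T=2.0):
    """KL(teacher || student) between temperature-softened distributions,
    scaled by T^2 so gradients are comparable across temperatures."""
    p = softmax(teacher_logits, T)
    q = softmax(student_logits, T)
    return float(np.sum(p * (np.log(p) - np.log(q)))) * T * T

# Identical logits give zero distillation loss.
assert distillation_loss([2.0, 0.5, -1.0], [2.0, 0.5, -1.0]) < 1e-9
```

In multimodal distillation, a loss of this form would be computed per modality (e.g., text-only, image-only, fused) and combined with per-modality weights, which is the part MSD learns to set.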

Knowledge-Augmented Methods for Natural Language Processing

no code implementations ACL 2022 Chenguang Zhu, Yichong Xu, Xiang Ren, Bill Lin, Meng Jiang, Wenhao Yu

Knowledge in natural language processing (NLP) has been a rising trend especially after the advent of large scale pre-trained models.

Natural Language Processing Text Generation

NewsEdits: A News Article Revision Dataset and a Document-Level Reasoning Challenge

no code implementations 14 Jun 2022 Alexander Spangher, Xiang Ren, Jonathan May, Nanyun Peng

News article revision histories provide clues to narrative and factual evolution in news articles.

Beyond the Imitation Game: Quantifying and extrapolating the capabilities of language models

1 code implementation9 Jun 2022 Aarohi Srivastava, Abhinav Rastogi, Abhishek Rao, Abu Awal Md Shoeb, Abubakar Abid, Adam Fisch, Adam R. Brown, Adam Santoro, Aditya Gupta, Adrià Garriga-Alonso, Agnieszka Kluska, Aitor Lewkowycz, Akshat Agarwal, Alethea Power, Alex Ray, Alex Warstadt, Alexander W. Kocurek, Ali Safaya, Ali Tazarv, Alice Xiang, Alicia Parrish, Allen Nie, Aman Hussain, Amanda Askell, Amanda Dsouza, Ambrose Slone, Ameet Rahane, Anantharaman S. Iyer, Anders Andreassen, Andrea Madotto, Andrea Santilli, Andreas Stuhlmüller, Andrew Dai, Andrew La, Andrew Lampinen, Andy Zou, Angela Jiang, Angelica Chen, Anh Vuong, Animesh Gupta, Anna Gottardi, Antonio Norelli, Anu Venkatesh, Arash Gholamidavoodi, Arfa Tabassum, Arul Menezes, Arun Kirubarajan, Asher Mullokandov, Ashish Sabharwal, Austin Herrick, Avia Efrat, Aykut Erdem, Ayla Karakaş, B. Ryan Roberts, Bao Sheng Loe, Barret Zoph, Bartłomiej Bojanowski, Batuhan Özyurt, Behnam Hedayatnia, Behnam Neyshabur, Benjamin Inden, Benno Stein, Berk Ekmekci, Bill Yuchen Lin, Blake Howald, Cameron Diao, Cameron Dour, Catherine Stinson, Cedrick Argueta, César Ferri Ramírez, Chandan Singh, Charles Rathkopf, Chenlin Meng, Chitta Baral, Chiyu Wu, Chris Callison-Burch, Chris Waites, Christian Voigt, Christopher D. Manning, Christopher Potts, Cindy Ramirez, Clara E. 
Rivera, Clemencia Siro, Colin Raffel, Courtney Ashcraft, Cristina Garbacea, Damien Sileo, Dan Garrette, Dan Hendrycks, Dan Kilman, Dan Roth, Daniel Freeman, Daniel Khashabi, Daniel Levy, Daniel Moseguí González, Danielle Perszyk, Danny Hernandez, Danqi Chen, Daphne Ippolito, Dar Gilboa, David Dohan, David Drakard, David Jurgens, Debajyoti Datta, Deep Ganguli, Denis Emelin, Denis Kleyko, Deniz Yuret, Derek Chen, Derek Tam, Dieuwke Hupkes, Diganta Misra, Dilyar Buzan, Dimitri Coelho Mollo, Diyi Yang, Dong-Ho Lee, Ekaterina Shutova, Ekin Dogus Cubuk, Elad Segal, Eleanor Hagerman, Elizabeth Barnes, Elizabeth Donoway, Ellie Pavlick, Emanuele Rodola, Emma Lam, Eric Chu, Eric Tang, Erkut Erdem, Ernie Chang, Ethan A. Chi, Ethan Dyer, Ethan Jerzak, Ethan Kim, Eunice Engefu Manyasi, Evgenii Zheltonozhskii, Fanyue Xia, Fatemeh Siar, Fernando Martínez-Plumed, Francesca Happé, Francois Chollet, Frieda Rong, Gaurav Mishra, Genta Indra Winata, Gerard de Melo, Germán Kruszewski, Giambattista Parascandolo, Giorgio Mariani, Gloria Wang, Gonzalo Jaimovitch-López, Gregor Betz, Guy Gur-Ari, Hana Galijasevic, Hannah Kim, Hannah Rashkin, Hannaneh Hajishirzi, Harsh Mehta, Hayden Bogar, Henry Shevlin, Hinrich Schütze, Hiromu Yakura, Hongming Zhang, Hugh Mee Wong, Ian Ng, Isaac Noble, Jaap Jumelet, Jack Geissinger, Jackson Kernion, Jacob Hilton, Jaehoon Lee, Jaime Fernández Fisac, James B. Simon, James Koppel, James Zheng, James Zou, Jan Kocoń, Jana Thompson, Jared Kaplan, Jarema Radom, Jascha Sohl-Dickstein, Jason Phang, Jason Wei, Jason Yosinski, Jekaterina Novikova, Jelle Bosscher, Jennifer Marsh, Jeremy Kim, Jeroen Taal, Jesse Engel, Jesujoba Alabi, Jiacheng Xu, Jiaming Song, Jillian Tang, Joan Waweru, John Burden, John Miller, John U. Balis, Jonathan Berant, Jörg Frohberg, Jos Rozen, Jose Hernandez-Orallo, Joseph Boudeman, Joseph Jones, Joshua B. Tenenbaum, Joshua S. 
Rule, Joyce Chua, Kamil Kanclerz, Karen Livescu, Karl Krauth, Karthik Gopalakrishnan, Katerina Ignatyeva, Katja Markert, Kaustubh D. Dhole, Kevin Gimpel, Kevin Omondi, Kory Mathewson, Kristen Chiafullo, Ksenia Shkaruta, Kumar Shridhar, Kyle McDonell, Kyle Richardson, Laria Reynolds, Leo Gao, Li Zhang, Liam Dugan, Lianhui Qin, Lidia Contreras-Ochando, Louis-Philippe Morency, Luca Moschella, Lucas Lam, Lucy Noble, Ludwig Schmidt, Luheng He, Luis Oliveros Colón, Luke Metz, Lütfi Kerem Şenel, Maarten Bosma, Maarten Sap, Maartje ter Hoeve, Maheen Farooqi, Manaal Faruqui, Mantas Mazeika, Marco Baturan, Marco Marelli, Marco Maru, Maria Jose Ramírez Quintana, Marie Tolkiehn, Mario Giulianelli, Martha Lewis, Martin Potthast, Matthew L. Leavitt, Matthias Hagen, Mátyás Schubert, Medina Orduna Baitemirova, Melody Arnaud, Melvin McElrath, Michael A. Yee, Michael Cohen, Michael Gu, Michael Ivanitskiy, Michael Starritt, Michael Strube, Michał Swędrowski, Michele Bevilacqua, Michihiro Yasunaga, Mihir Kale, Mike Cain, Mimee Xu, Mirac Suzgun, Mo Tiwari, Mohit Bansal, Moin Aminnaseri, Mor Geva, Mozhdeh Gheini, Mukund Varma T, Nanyun Peng, Nathan Chi, Nayeon Lee, Neta Gur-Ari Krakover, Nicholas Cameron, Nicholas Roberts, Nick Doiron, Nikita Nangia, Niklas Deckers, Niklas Muennighoff, Nitish Shirish Keskar, Niveditha S. Iyer, Noah Constant, Noah Fiedel, Nuan Wen, Oliver Zhang, Omar Agha, Omar Elbaghdadi, Omer Levy, Owain Evans, Pablo Antonio Moreno Casares, Parth Doshi, Pascale Fung, Paul Pu Liang, Paul Vicol, Pegah Alipoormolabashi, Peiyuan Liao, Percy Liang, Peter Chang, Peter Eckersley, Phu Mon Htut, Pinyu Hwang, Piotr Miłkowski, Piyush Patil, Pouya Pezeshkpour, Priti Oli, Qiaozhu Mei, Qing Lyu, Qinlang Chen, Rabin Banjade, Rachel Etta Rudolph, Raefer Gabriel, Rahel Habacker, Ramón Risco Delgado, Raphaël Millière, Rhythm Garg, Richard Barnes, Rif A. 
Saurous, Riku Arakawa, Robbe Raymaekers, Robert Frank, Rohan Sikand, Roman Novak, Roman Sitelew, Ronan LeBras, Rosanne Liu, Rowan Jacobs, Rui Zhang, Ruslan Salakhutdinov, Ryan Chi, Ryan Lee, Ryan Stovall, Ryan Teehan, Rylan Yang, Sahib Singh, Saif M. Mohammad, Sajant Anand, Sam Dillavou, Sam Shleifer, Sam Wiseman, Samuel Gruetter, Samuel R. Bowman, Samuel S. Schoenholz, Sanghyun Han, Sanjeev Kwatra, Sarah A. Rous, Sarik Ghazarian, Sayan Ghosh, Sean Casey, Sebastian Bischoff, Sebastian Gehrmann, Sebastian Schuster, Sepideh Sadeghi, Shadi Hamdan, Sharon Zhou, Shashank Srivastava, Sherry Shi, Shikhar Singh, Shima Asaadi, Shixiang Shane Gu, Shubh Pachchigar, Shubham Toshniwal, Shyam Upadhyay, Shyamolima, Debnath, Siamak Shakeri, Simon Thormeyer, Simone Melzi, Siva Reddy, Sneha Priscilla Makini, Soo-Hwan Lee, Spencer Torene, Sriharsha Hatwar, Stanislas Dehaene, Stefan Divic, Stefano Ermon, Stella Biderman, Stephanie Lin, Stephen Prasad, Steven T. Piantadosi, Stuart M. Shieber, Summer Misherghi, Svetlana Kiritchenko, Swaroop Mishra, Tal Linzen, Tal Schuster, Tao Li, Tao Yu, Tariq Ali, Tatsu Hashimoto, Te-Lin Wu, Théo Desbordes, Theodore Rothschild, Thomas Phan, Tianle Wang, Tiberius Nkinyili, Timo Schick, Timofei Kornev, Timothy Telleen-Lawton, Titus Tunduny, Tobias Gerstenberg, Trenton Chang, Trishala Neeraj, Tushar Khot, Tyler Shultz, Uri Shaham, Vedant Misra, Vera Demberg, Victoria Nyamai, Vikas Raunak, Vinay Ramasesh, Vinay Uday Prabhu, Vishakh Padmakumar, Vivek Srikumar, William Fedus, William Saunders, William Zhang, Wout Vossen, Xiang Ren, Xiaoyu Tong, Xinran Zhao, Xinyi Wu, Xudong Shen, Yadollah Yaghoobzadeh, Yair Lakretz, Yangqiu Song, Yasaman Bahri, Yejin Choi, Yichi Yang, Yiding Hao, Yifu Chen, Yonatan Belinkov, Yu Hou, Yufang Hou, Yuntao Bai, Zachary Seid, Zhuoye Zhao, Zijian Wang, Zijie J. Wang, ZiRui Wang, Ziyi Wu

BIG-bench focuses on tasks that are believed to be beyond the capabilities of current language models.

Common Sense Reasoning

RobustLR: Evaluating Robustness to Logical Perturbation in Deductive Reasoning

no code implementations 25 May 2022 Soumya Sanyal, Zeyi Liao, Xiang Ren

Transformers have been shown to be able to perform deductive reasoning on a logical rulebase containing rules and statements written in English natural language.

Eliciting Transferability in Multi-task Learning with Task-level Mixture-of-Experts

no code implementations 25 May 2022 Qinyuan Ye, Juan Zha, Xiang Ren

Recent work suggests that transformer models are capable of multi-task learning on diverse NLP tasks.

Multi-Task Learning

Machine Translation Robustness to Natural Asemantic Variation

no code implementations 25 May 2022 Jacob Bremerman, Xiang Ren, Jonathan May

We introduce and formalize an under-studied linguistic phenomenon we call Natural Asemantic Variation (NAV) and investigate it in the context of Machine Translation (MT) robustness.

Machine Translation Translation

ER-TEST: Evaluating Explanation Regularization Methods for NLP Models

no code implementations 25 May 2022 Brihi Joshi, Aaron Chan, Ziyi Liu, Shaoliang Nie, Maziar Sanjabi, Hamed Firooz, Xiang Ren

Plus, little is understood about how ER model performance is affected by the choice of ER criteria or by the number/choice of training instances with human rationales.

Textual Backdoor Attacks with Iterative Trigger Injection

no code implementations 25 May 2022 Jun Yan, Vansh Gupta, Xiang Ren

In this paper, we demonstrate that it's possible to design an effective and stealthy backdoor attack by iteratively injecting "triggers" into a small set of training data.

Backdoor Attack Hate Speech Detection +2

Cross-lingual Lifelong Learning

no code implementations 23 May 2022 Meryem M'hamdi, Xiang Ren, Jonathan May

The longstanding goal of multi-lingual learning has been to develop a universal cross-lingual model that can withstand the changes in multi-lingual data distributions.

Continual Learning Transfer Learning

NS3: Neuro-Symbolic Semantic Code Search

no code implementations 21 May 2022 Shushan Arakelyan, Anna Hakhverdyan, Miltiadis Allamanis, Christophe Hauser, Luis Garcia, Xiang Ren

We compare our model - NS3 (Neuro-Symbolic Semantic Search) - to a number of baselines, including state-of-the-art semantic code retrieval methods, and evaluate on two datasets - CodeSearchNet and Code Search and Question Answering.

Code Search Question Answering

On Continual Model Refinement in Out-of-Distribution Data Streams

no code implementations ACL 2022 Bill Yuchen Lin, Sida Wang, Xi Victoria Lin, Robin Jia, Lin Xiao, Xiang Ren, Wen-tau Yih

Real-world natural language processing (NLP) models need to be continually updated to fix the prediction errors in out-of-distribution (OOD) data streams while overcoming catastrophic forgetting.

Continual Learning Natural Language Processing

Unsupervised Cross-Task Generalization via Retrieval Augmentation

1 code implementation 17 Apr 2022 Bill Yuchen Lin, Kangmin Tan, Chris Miller, Beiwen Tian, Xiang Ren

Humans can perform unseen tasks by recalling relevant skills that are acquired previously and then generalizing them to the target tasks, even if there is no supervision at all.

FaiRR: Faithful and Robust Deductive Reasoning over Natural Language

1 code implementation ACL 2022 Soumya Sanyal, Harman Singh, Xiang Ren

Recent works show that such models can also produce the reasoning steps (i.e., the proof graph) that emulate the model's logical reasoning process.

Fact Selection

Leveraging Visual Knowledge in Language Tasks: An Empirical Study on Intermediate Pre-training for Cross-modal Knowledge Transfer

no code implementations ACL 2022 Woojeong Jin, Dong-Ho Lee, Chenguang Zhu, Jay Pujara, Xiang Ren

Pre-trained language models are still far from human performance in tasks that need understanding of properties (e.g., appearance, measurable quantity) and affordances of everyday objects in the real world, since the text lacks such information due to reporting bias.

Image Captioning Language Modelling +1

UNIREX: A Unified Learning Framework for Language Model Rationale Extraction

no code implementations BigScience (ACL) 2022 Aaron Chan, Maziar Sanjabi, Lambert Mathias, Liang Tan, Shaoliang Nie, Xiaochang Peng, Xiang Ren, Hamed Firooz

An extractive rationale explains a language model's (LM's) prediction on a given task instance by highlighting the text inputs that most influenced the prediction.

Language Modelling Text Classification

Contextualized Scene Imagination for Generative Commonsense Reasoning

1 code implementation ICLR 2022 Peifeng Wang, Jonathan Zamora, Junfeng Liu, Filip Ilievski, Muhao Chen, Xiang Ren

In this paper, we propose an Imagine-and-Verbalize (I&V) method, which learns to imagine a relational scene knowledge graph (SKG) with relations between the input concepts, and leverage the SKG as a constraint when generating a plausible scene description.

Common Sense Reasoning Story Generation

Sparse Distillation: Speeding Up Text Classification by Using Bigger Models

no code implementations 16 Oct 2021 Qinyuan Ye, Madian Khabsa, Mike Lewis, Sinong Wang, Xiang Ren, Aaron Jaech

In this paper, we aim to further push the limit of inference speed by exploring a new area in the design space of the student model.

Classification Domain Generalization +2

On the Robustness of Reading Comprehension Models to Entity Renaming

1 code implementation 16 Oct 2021 Jun Yan, Yang Xiao, Sagnik Mukherjee, Bill Yuchen Lin, Robin Jia, Xiang Ren

We study the robustness of machine reading comprehension (MRC) models to entity renaming -- do models make more wrong predictions when the same questions are asked about an entity whose name has been changed?

Continual Pretraining Machine Reading Comprehension

KG-FiD: Infusing Knowledge Graph in Fusion-in-Decoder for Open-Domain Question Answering

no code implementations ACL 2022 Donghan Yu, Chenguang Zhu, Yuwei Fang, Wenhao Yu, Shuohang Wang, Yichong Xu, Xiang Ren, Yiming Yang, Michael Zeng

The recently proposed Fusion-in-Decoder (FiD), which is built on top of the pretrained generative model T5, achieves state-of-the-art performance in the reading module.

Answer Generation Open-Domain Question Answering +1

AutoTriggER: Named Entity Recognition with Auxiliary Trigger Extraction

no code implementations 10 Sep 2021 Dong-Ho Lee, Ravi Kiran Selvam, Sheikh Muhammad Sarwar, Bill Yuchen Lin, Mahak Agarwal, Fred Morstatter, Jay Pujara, Elizabeth Boschee, James Allan, Xiang Ren

Deep neural models for low-resource named entity recognition (NER) have shown impressive results by leveraging distant supervision or other meta-level information (e.g., explanations).

Low Resource Named Entity Recognition named-entity-recognition +1

Discretized Integrated Gradients for Explaining Language Models

1 code implementation EMNLP 2021 Soumya Sanyal, Xiang Ren

As a prominent attribution-based explanation algorithm, Integrated Gradients (IG) is widely adopted due to its desirable explanation axioms and the ease of gradient computation.

Feature Importance Sentiment Analysis
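The vanilla Integrated Gradients algorithm that this work builds on can be sketched in a few lines; below is an illustrative NumPy version (function names are ours, not the paper's) that approximates the path integral with a midpoint Riemann sum:

```python
import numpy as np

def integrated_gradients(f, grad_f, x, baseline, steps=50):
    """Attribution_i = (x_i - baseline_i) * average of grad_f along the
    straight line from baseline to x (midpoint Riemann sum)."""
    alphas = (np.arange(steps) + 0.5) / steps  # interpolation coefficients
    diff = x - baseline
    grads = np.stack([grad_f(baseline + a * diff) for a in alphas])
    return diff * grads.mean(axis=0)

# Toy model: f(x) = sum(x^2), with analytic gradient 2x.
f = lambda x: np.sum(x ** 2)
grad_f = lambda x: 2.0 * x
x = np.array([1.0, -2.0, 3.0])
baseline = np.zeros_like(x)
attr = integrated_gradients(f, grad_f, x, baseline)

# Completeness axiom: attributions sum to f(x) - f(baseline).
assert np.isclose(attr.sum(), f(x) - f(baseline))
```

Roughly, the discretized variant replaces the straight-line interpolation path with points chosen near actual word embeddings, so that intermediate points stay on the data manifold; the attribution aggregation step is analogous.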

Improving Counterfactual Generation for Fair Hate Speech Detection

no code implementations ACL (WOAH) 2021 Aida Mostafazadeh Davani, Ali Omrani, Brendan Kennedy, Mohammad Atari, Xiang Ren, Morteza Dehghani

By applying logit pairing to equalize outcomes on the restricted set of counterfactuals for each instance, we improve fairness metrics while preserving model performance on hate speech detection.

Fairness Hate Speech Detection
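A minimal sketch of the logit-pairing idea, assuming the model exposes a single toxicity logit per sentence (names and interface are illustrative, not the paper's code):

```python
import numpy as np

def clp_penalty(orig_logit, counterfactual_logits):
    """Counterfactual logit pairing: penalize the gap between the model's
    logit on the original sentence and its logit on each counterfactual,
    where only the mentioned social group has been swapped."""
    cf = np.asarray(counterfactual_logits, dtype=float)
    return float(np.mean(np.abs(cf - orig_logit)))

# Equal logits across counterfactuals incur no penalty.
assert clp_penalty(0.7, [0.7, 0.7]) == 0.0
```

During training, a term like this would be added to the task loss with a weight, pushing predictions to be invariant when only the mentioned social group differs.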

Do Language Models Perform Generalizable Commonsense Inference?

1 code implementation Findings (ACL) 2021 Peifeng Wang, Filip Ilievski, Muhao Chen, Xiang Ren

Inspired by evidence that pretrained language models (LMs) encode commonsense knowledge, recent work has applied LMs to automatically populate commonsense knowledge graphs (CKGs).

Knowledge Graphs Pretrained Language Models

Common Sense Beyond English: Evaluating and Improving Multilingual Language Models for Commonsense Reasoning

1 code implementation ACL 2021 Bill Yuchen Lin, Seyeon Lee, Xiaoyang Qiao, Xiang Ren

In addition, we also create two new datasets, X-CSQA and X-CODAH, by translating their English versions to 15 other languages, so that we can evaluate popular ML-LMs for cross-lingual commonsense reasoning.

Common Sense Reasoning

CrossFit: A Few-shot Learning Challenge for Cross-task Generalization in NLP

2 code implementations EMNLP 2021 Qinyuan Ye, Bill Yuchen Lin, Xiang Ren

Humans can learn a new language task efficiently with only few examples, by leveraging their knowledge obtained when learning prior tasks.

Few-Shot Learning

Extract, Denoise and Enforce: Evaluating and Improving Concept Preservation for Text-to-Text Generation

2 code implementations EMNLP 2021 Yuning Mao, Wenchang Ma, Deren Lei, Jiawei Han, Xiang Ren

In this paper, we present a systematic analysis that studies whether current seq2seq models, especially pre-trained language models, are good enough for preserving important input concepts and to what extent explicitly guiding generation with the concepts as lexical constraints is beneficial.

Conditional Text Generation Denoising

Learn Continually, Generalize Rapidly: Lifelong Knowledge Accumulation for Few-shot Learning

1 code implementation Findings (EMNLP) 2021 Xisen Jin, Bill Yuchen Lin, Mohammad Rostami, Xiang Ren

The ability to continuously expand knowledge over time and utilize it to rapidly generalize to new tasks is a key feature of human linguistic intelligence.

Continual Learning Few-Shot Learning +2

FedNLP: Benchmarking Federated Learning Methods for Natural Language Processing Tasks

1 code implementation 18 Apr 2021 Bill Yuchen Lin, Chaoyang He, Zihang Zeng, Hulin Wang, Yufen Huang, Christophe Dupuy, Rahul Gupta, Mahdi Soltanolkotabi, Xiang Ren, Salman Avestimehr

Increasing concerns and regulations about data privacy and sparsity necessitate the study of privacy-preserving, decentralized learning methods for natural language processing (NLP) tasks.

Federated Learning Language Modelling +4
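Federated Averaging (FedAvg) is the canonical aggregation rule such federated-learning benchmarks build on; here is a minimal sketch over flat parameter vectors (illustrative only, not FedNLP's actual API):

```python
import numpy as np

def fedavg(client_weights, client_sizes):
    """One round of Federated Averaging: the server averages client model
    parameters, weighted by each client's local dataset size."""
    sizes = np.asarray(client_sizes, dtype=float)
    coeff = sizes / sizes.sum()  # per-client mixing weights
    stacked = np.stack([np.asarray(w, dtype=float) for w in client_weights])
    return (coeff[:, None] * stacked).sum(axis=0)

avg = fedavg([[1.0, 2.0], [3.0, 4.0]], client_sizes=[1, 3])
# -> array([2.5, 3.5]): pulled toward the client with more data
```

Clients never share raw text, only parameters (or updates), which is what makes the setup privacy-preserving relative to centralized training.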

Cross-Attention is All You Need: Adapting Pretrained Transformers for Machine Translation

1 code implementation EMNLP 2021 Mozhdeh Gheini, Xiang Ren, Jonathan May

We study the power of cross-attention in the Transformer architecture within the context of transfer learning for machine translation, and extend the findings of studies into cross-attention when training from scratch.

Machine Translation Transfer Learning +1

Lawyers are Dishonest? Quantifying Representational Harms in Commonsense Knowledge Resources

no code implementations EMNLP 2021 Ninareh Mehrabi, Pei Zhou, Fred Morstatter, Jay Pujara, Xiang Ren, Aram Galstyan

In addition, we analyze two downstream models that use ConceptNet as a source for commonsense knowledge and find the existence of biases in those models as well.

Natural Language Processing

Refining Language Models with Compositional Explanations

1 code implementation NeurIPS 2021 Huihan Yao, Ying Chen, Qinyuan Ye, Xisen Jin, Xiang Ren

However, such a regularization technique lacks flexibility and coverage, since only importance scores towards a pre-defined list of features are adjusted, while more complex human knowledge such as feature interaction and pattern generalization can hardly be incorporated.

Fairness Language Modelling +1

Learning to Generate Task-Specific Adapters from Task Description

1 code implementation ACL 2021 Qinyuan Ye, Xiang Ren

A recent study further shows that they can learn to generalize to novel tasks, by including task descriptions as part of the source sequence and training the model with (source, target) examples.

Text Generation Zero-Shot Learning

Efficient Learning of Less Biased Models with Transfer Learning

no code implementations 1 Jan 2021 Xisen Jin, Francesco Barbieri, Leonardo Neves, Xiang Ren

Prediction bias in machine learning models, referring to undesirable model behaviors that discriminate against inputs mentioning or produced by certain groups, has drawn increasing attention from the research community given its societal impact.

Transfer Learning

Pre-training Text-to-Text Transformers to Write and Reason with Concepts

no code implementations ICLR 2021 Wangchunshu Zhou, Dong-Ho Lee, Ravi Kiran Selvam, Seyeon Lee, Xiang Ren

To augment PTLMs with common sense, we propose generative and contrastive objectives as intermediate self-supervised pre-training tasks between general pre-training and downstream task-specific fine-tuning.

Common Sense Reasoning Language Modelling +3

Learning Contextualized Knowledge Graph Structures for Commonsense Reasoning

no code implementations 1 Jan 2021 Jun Yan, Mrigank Raman, Tianyu Zhang, Ryan Rossi, Handong Zhao, Sungchul Kim, Nedim Lipka, Xiang Ren

Recently, neural-symbolic architectures have achieved success on commonsense reasoning through effectively encoding relational structures retrieved from external knowledge graphs (KGs) and obtained state-of-the-art results in tasks such as (commonsense) question answering and natural language inference.

Knowledge Graphs Natural Language Inference +1

ECONET: Effective Continual Pretraining of Language Models for Event Temporal Reasoning

2 code implementations EMNLP 2021 Rujun Han, Xiang Ren, Nanyun Peng

While pre-trained language models (PTLMs) have achieved noticeable success on many NLP tasks, they still struggle for tasks that require event temporal reasoning, which is essential for event-centric applications.

Continual Pretraining Language Modelling +4

Pre-training Text-to-Text Transformers for Concept-centric Common Sense

1 code implementation 24 Oct 2020 Wangchunshu Zhou, Dong-Ho Lee, Ravi Kiran Selvam, Seyeon Lee, Bill Yuchen Lin, Xiang Ren

Pre-trained language models (PTLM) have achieved impressive results in a range of natural language understanding (NLU) and generation (NLG) tasks.

Common Sense Reasoning Knowledge Graphs +3

Constrained Abstractive Summarization: Preserving Factual Consistency with Constrained Generation

2 code implementations 24 Oct 2020 Yuning Mao, Xiang Ren, Heng Ji, Jiawei Han

Despite significant progress, state-of-the-art abstractive summarization methods are still prone to hallucinate content inconsistent with the source document.

Abstractive Text Summarization Keyphrase Extraction

Fair Hate Speech Detection through Evaluation of Social Group Counterfactuals

no code implementations 24 Oct 2020 Aida Mostafazadeh Davani, Ali Omrani, Brendan Kennedy, Mohammad Atari, Xiang Ren, Morteza Dehghani

Counterfactual token fairness for a mentioned social group evaluates the model's predictions as to whether they are the same for (a) the actual sentence and (b) a counterfactual instance, which is generated by changing the mentioned social group in the sentence.

Fairness Hate Speech Detection

Differentiable Open-Ended Commonsense Reasoning

no code implementations NAACL 2021 Bill Yuchen Lin, Haitian Sun, Bhuwan Dhingra, Manzil Zaheer, Xiang Ren, William W. Cohen

As a step towards making commonsense reasoning research more realistic, we propose to study open-ended commonsense reasoning (OpenCSR) -- the task of answering a commonsense question without any pre-defined choices -- using as a resource only a corpus of commonsense facts written in natural language.

Multiple-choice

On Transferability of Bias Mitigation Effects in Language Model Fine-Tuning

no code implementations NAACL 2021 Xisen Jin, Francesco Barbieri, Brendan Kennedy, Aida Mostafazadeh Davani, Leonardo Neves, Xiang Ren

Fine-tuned language models have been shown to exhibit biases against protected groups in a host of modeling tasks such as text classification and coreference resolution.

Coreference Resolution Fairness +5

One-shot Learning for Temporal Knowledge Graphs

no code implementations AKBC 2021 Mehrnoosh Mirtaheri, Mohammad Rostami, Xiang Ren, Fred Morstatter, Aram Galstyan

Most real-world knowledge graphs are characterized by a long-tail relation frequency distribution where a significant fraction of relations occurs only a handful of times.

Knowledge Graphs Link Prediction +1

Will This Idea Spread Beyond Academia? Understanding Knowledge Transfer of Scientific Concepts across Text Corpora

no code implementations Findings of the Association for Computational Linguistics 2020 Hancheng Cao, Mengjie Cheng, Zhepeng Cen, Daniel A. McFarland, Xiang Ren

We extract scientific concepts (i.e., phrases) from corpora as instantiations of "research ideas", create concept-level features as motivated by the literature, and then follow the trajectories of over 450,000 new concepts (which emerged from 1995-2014) to identify factors that lead only a small proportion of these ideas to be used in inventions and drug trials.

Transfer Learning

SynSetExpan: An Iterative Framework for Joint Entity Set Expansion and Synonym Discovery

no code implementations EMNLP 2020 Jiaming Shen, Wenda Qiu, Jingbo Shang, Michelle Vanni, Xiang Ren, Jiawei Han

To facilitate the research on studying the interplays of these two tasks, we create the first large-scale Synonym-Enhanced Set Expansion (SE2) dataset via crowdsourcing.

Two Step Joint Model for Drug Drug Interaction Extraction

no code implementations 28 Aug 2020 Siliang Tang, Qi Zhang, Tianpeng Zheng, Mengdi Zhou, Zhan Chen, Lixing Shen, Xiang Ren, Yueting Zhuang, ShiLiang Pu, Fei Wu

When patients need to take medicine, particularly when taking more than one kind of drug simultaneously, they should be alerted to possible drug-drug interactions.

Drug–drug Interaction Extraction named-entity-recognition +2

Gradient-based Editing of Memory Examples for Online Task-free Continual Learning

1 code implementation NeurIPS 2021 Xisen Jin, Arka Sadhu, Junyi Du, Xiang Ren

We explore task-free continual learning (CL), in which a model is trained to avoid catastrophic forgetting in the absence of explicit task boundaries or identities.

Continual Learning

Screenplay Quality Assessment: Can We Predict Who Gets Nominated?

no code implementations WS 2020 Ming-Chang Chiu, Tiantian Feng, Xiang Ren, Shrikanth Narayanan

Toward that goal, in this work, we present a method to evaluate the quality of a screenplay based on linguistic cues.

Contextualizing Hate Speech Classifiers with Post-hoc Explanation

3 code implementations ACL 2020 Brendan Kennedy, Xisen Jin, Aida Mostafazadeh Davani, Morteza Dehghani, Xiang Ren

Hate speech classifiers trained on imbalanced datasets struggle to determine if group identifiers like "gay" or "black" are used in offensive or prejudiced ways.

Visually Grounded Continual Learning of Compositional Phrases

2 code implementations EMNLP 2020 Xisen Jin, Junyi Du, Arka Sadhu, Ram Nevatia, Xiang Ren

To study this human-like language acquisition ability, we present VisCOLL, a visually grounded language learning task, which simulates the continual acquisition of compositional phrases from streaming visual scenes.

Continual Learning Grounded language learning +1

RICA: Evaluating Robust Inference Capabilities Based on Commonsense Axioms

no code implementations EMNLP 2021 Pei Zhou, Rahul Khanna, Seyeon Lee, Bill Yuchen Lin, Daniel Ho, Jay Pujara, Xiang Ren

Pre-trained language models (PTLMs) have achieved impressive performance on commonsense inference benchmarks, but their ability to employ commonsense to make robust inferences, which is crucial for effective communications with humans, is debated.

IsoBN: Fine-Tuning BERT with Isotropic Batch Normalization

1 code implementation 2 May 2020 Wenxuan Zhou, Bill Yuchen Lin, Xiang Ren

Fine-tuning pre-trained language models (PTLMs), such as BERT and its better variant RoBERTa, has been a common practice for advancing performance in natural language understanding (NLU) tasks.

Natural Language Understanding Representation Learning

Birds have four legs?! NumerSense: Probing Numerical Commonsense Knowledge of Pre-trained Language Models

no code implementations EMNLP 2020 Bill Yuchen Lin, Seyeon Lee, Rahul Khanna, Xiang Ren

Recent works show that pre-trained language models (PTLMs), such as BERT, possess certain commonsense and factual knowledge.

ForecastQA: A Question Answering Challenge for Event Forecasting with Temporal Text Data

no code implementations ACL 2021 Woojeong Jin, Rahul Khanna, Suji Kim, Dong-Ho Lee, Fred Morstatter, Aram Galstyan, Xiang Ren

In this work, we aim to formulate a task, construct a dataset, and provide benchmarks for developing methods for event forecasting with large volumes of unstructured text data.

Knowledge Graphs Language Modelling +4

Teaching Machine Comprehension with Compositional Explanations

2 code implementations Findings of the Association for Computational Linguistics 2020 Qinyuan Ye, Xiao Huang, Elizabeth Boschee, Xiang Ren

Advances in machine reading comprehension (MRC) rely heavily on the collection of large scale human-annotated examples in the form of (question, paragraph, answer) triples.

Data Augmentation Machine Reading Comprehension

Learning Collaborative Agents with Rule Guidance for Knowledge Graph Reasoning

1 code implementation EMNLP 2020 Deren Lei, Gangrong Jiang, Xiaotao Gu, Kexuan Sun, Yuning Mao, Xiang Ren

Walk-based models have shown their advantages in knowledge graph (KG) reasoning by achieving decent performance while providing interpretable decisions.

reinforcement-learning

Scalable Multi-Hop Relational Reasoning for Knowledge-Aware Question Answering

2 code implementations EMNLP 2020 Yanlin Feng, Xinyue Chen, Bill Yuchen Lin, Peifeng Wang, Jun Yan, Xiang Ren

Existing work on augmenting question answering (QA) models with external knowledge (e.g., knowledge graphs) either struggles to model multi-hop relations efficiently, or lacks transparency into the model's prediction rationale.

Knowledge Graphs Question Answering +1

TriggerNER: Learning with Entity Triggers as Explanations for Named Entity Recognition

1 code implementation ACL 2020 Bill Yuchen Lin, Dong-Ho Lee, Ming Shen, Ryan Moreno, Xiao Huang, Prashant Shiralkar, Xiang Ren

In this paper, we introduce "entity triggers," an effective proxy of human explanations for facilitating label-efficient learning of NER models.

named-entity-recognition NER

Generating Natural Language Adversarial Examples on a Large Scale with Generative Models

no code implementations 10 Mar 2020 Yankun Ren, Jianbin Lin, Siliang Tang, Jun Zhou, Shuang Yang, Yuan Qi, Xiang Ren

It can attack text classification models with a higher success rate than existing methods, while providing acceptable quality to human readers.

Adversarial Text General Classification +3

Temporal Attribute Prediction via Joint Modeling of Multi-Relational Structure Evolution

1 code implementation 9 Mar 2020 Sankalp Garg, Navodita Sharma, Woojeong Jin, Xiang Ren

We show that if the information contained in the graph and the time series data are closely related, then this inter-dependence can be used to predict the time series with improved accuracy.

Knowledge Graphs Link Prediction +3

Mining News Events from Comparable News Corpora: A Multi-Attribute Proximity Network Modeling Approach

no code implementations 14 Nov 2019 Hyungsul Kim, Ahmed El-Kishky, Xiang Ren, Jiawei Han

This proximity network captures the corpus-level co-occurrence statistics for candidate event descriptors and event attributes, as well as their connections.

News Summarization

Improving BERT Fine-tuning with Embedding Normalization

no code implementations 10 Nov 2019 Wenxuan Zhou, Junyi Du, Xiang Ren

Large pre-trained sentence encoders like BERT start a new chapter in natural language processing.

Classification General Classification +2

CommonGen: A Constrained Text Generation Challenge for Generative Commonsense Reasoning

2 code implementations Findings of the Association for Computational Linguistics 2020 Bill Yuchen Lin, Wangchunshu Zhou, Ming Shen, Pei Zhou, Chandra Bhagavatula, Yejin Choi, Xiang Ren

In this paper, we present a constrained text generation task, CommonGen associated with a benchmark dataset, to explicitly test machines for the ability of generative commonsense reasoning.

 Ranked #1 on Text Generation on CommonGen (CIDEr metric)

Common Sense Reasoning Question Answering +2

Towards Hierarchical Importance Attribution: Explaining Compositional Semantics for Neural Sequence Models

no code implementations ICLR 2020 Xisen Jin, Zhongyu Wei, Junyi Du, xiangyang xue, Xiang Ren

Human and metrics evaluation on both LSTM models and BERT Transformer models on multiple datasets show that our algorithms outperform prior hierarchical explanation algorithms.

Natural Language Processing Semantic Composition

Learning from Explanations with Neural Execution Tree

1 code implementation ICLR 2020 Ziqi Wang, Yujia Qin, Wenxuan Zhou, Jun Yan, Qinyuan Ye, Leonardo Neves, Zhiyuan Liu, Xiang Ren

While deep neural networks have achieved impressive performance on a range of NLP tasks, these data-hungry models heavily rely on labeled data, which restricts their applications in scenarios where data annotation is expensive.

Data Augmentation Multi-hop Question Answering +5

HMEAE: Hierarchical Modular Event Argument Extraction

1 code implementation IJCNLP 2019 Xiaozhi Wang, Ziqi Wang, Xu Han, Zhiyuan Liu, Juanzi Li, Peng Li, Maosong Sun, Jie zhou, Xiang Ren

Existing event extraction methods classify each argument role independently, ignoring the conceptual correlations between different argument roles.

Event Argument Extraction Event Extraction +1

SetExpan: Corpus-Based Set Expansion via Context Feature Selection and Rank Ensemble

1 code implementation17 Oct 2019 Jiaming Shen, Zeqiu Wu, Dongming Lei, Jingbo Shang, Xiang Ren, Jiawei Han

In this study, we propose a novel framework, SetExpan, which tackles this problem with two techniques: (1) a context feature selection method that selects clean context features for calculating entity-entity distributional similarity, and (2) a ranking-based unsupervised ensemble method for expanding the entity set based on denoised context features.

feature selection Question Answering
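The SetExpan snippet above describes a two-step loop: pick clean context features using the current seed set, then ensemble rankings computed over sampled feature subsets. A minimal sketch of that idea (not the paper's implementation; the toy co-occurrence counts, `top_features` parameter, and mean-rank ensemble are invented for illustration):

```python
import random
from collections import defaultdict

def similarity(a, b):
    # Weighted-Jaccard distributional similarity over context-feature counts.
    keys = set(a) | set(b)
    num = sum(min(a.get(k, 0), b.get(k, 0)) for k in keys)
    den = sum(max(a.get(k, 0), b.get(k, 0)) for k in keys)
    return num / den if den else 0.0

def expand(seeds, features, n_rounds=10, top_features=2, seed=0):
    """Rank candidate entities for set expansion from seed entities.

    features: entity -> {context_feature: co-occurrence count}.
    """
    rng = random.Random(seed)
    candidates = [e for e in features if e not in seeds]
    mean_rank = defaultdict(float)
    for _ in range(n_rounds):
        # (1) Context feature selection: score features by seed co-occurrence,
        # then sample a subset of the top ones to denoise the context.
        scores = defaultdict(int)
        for s in seeds:
            for f, c in features[s].items():
                scores[f] += c
        pool = sorted(scores, key=scores.get, reverse=True)[: top_features * 2]
        chosen = set(rng.sample(pool, min(top_features, len(pool))))
        proj = {e: {f: c for f, c in feats.items() if f in chosen}
                for e, feats in features.items()}
        # (2) Rank ensemble: rank candidates by mean similarity to the seeds,
        # and average ranks across rounds to suppress noisy features.
        sims = {c: sum(similarity(proj[c], proj[s]) for s in seeds)
                for c in candidates}
        ranked = sorted(candidates, key=lambda c: -sims[c])
        for r, c in enumerate(ranked):
            mean_rank[c] += r / n_rounds
    return sorted(candidates, key=lambda c: mean_rank[c])
```

With seeds {paris, london} sharing "capital_of" contexts, a city candidate outranks an unrelated entity whose contexts do not overlap with the seeds'.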

Learning to Contextually Aggregate Multi-Source Supervision for Sequence Labeling

1 code implementation ACL 2020 Ouyu Lan, Xiao Huang, Bill Yuchen Lin, He Jiang, Liyuan Liu, Xiang Ren

Its performance is largely influenced by the annotation quality and quantity in supervised learning scenarios, and obtaining ground truth labels is often costly.

Natural Language Processing

Recurrent Event Network: Global Structure Inference Over Temporal Knowledge Graph

no code implementations25 Sep 2019 Woojeong Jin, He Jiang, Meng Qu, Tong Chen, Changlin Zhang, Pedro Szekely, Xiang Ren

We present Recurrent Event Network (RE-Net), a novel autoregressive architecture for modeling temporal sequences of multi-relational graphs (e.g., temporal knowledge graphs), which can perform sequential, global structure inference over future timestamps to predict new events.

Link Prediction

NERO: A Neural Rule Grounding Framework for Label-Efficient Relation Extraction

2 code implementations5 Sep 2019 Wenxuan Zhou, Hongtao Lin, Bill Yuchen Lin, Ziqi Wang, Junyi Du, Leonardo Neves, Xiang Ren

The soft matching module learns to match rules with semantically similar sentences so that raw corpora can be automatically labeled and leveraged by the RE module (with much better coverage) as augmented supervision, in addition to the exactly matched sentences.

Relation Extraction
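The NERO snippet above describes soft-matching labeling rules against semantically similar sentences to auto-label a raw corpus. A minimal sketch of that labeling step, using bag-of-words cosine similarity as a stand-in for NERO's learned matcher (the rules, threshold, and similarity function here are illustrative, not the paper's):

```python
from collections import Counter
from math import sqrt

def cosine(a, b):
    # Bag-of-words cosine similarity between two token-count vectors.
    num = sum(a[t] * b.get(t, 0) for t in a)
    den = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return num / den if den else 0.0

def soft_match(rules, corpus, threshold=0.5):
    """Label raw sentences with the best soft-matching rule's relation.

    rules: list of (pattern_text, relation_label) pairs.
    Returns (sentence, label, score) triples for matches above threshold;
    unmatched sentences are left unlabeled.
    """
    labeled = []
    for sent in corpus:
        s_vec = Counter(sent.lower().split())
        best = max(((cosine(Counter(p.lower().split()), s_vec), lab)
                    for p, lab in rules), key=lambda x: x[0])
        if best[0] >= threshold:
            labeled.append((sent, best[1], round(best[0], 2)))
    return labeled
```

Sentences that exactly or nearly match a rule pattern receive that rule's relation as a pseudo-label, while off-topic sentences fall below the threshold and stay unlabeled.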

Learning Dynamic Context Augmentation for Global Entity Linking

2 code implementations IJCNLP 2019 Xiyuan Yang, Xiaotao Gu, Sheng Lin, Siliang Tang, Yueting Zhuang, Fei Wu, Zhigang Chen, Guoping Hu, Xiang Ren

Despite the recent success of collective entity linking (EL) methods, these "global" inference methods may yield sub-optimal results when the "all-mention coherence" assumption breaks, and they often suffer from high computational cost at inference time due to the complex search space.

Entity Linking reinforcement-learning

KagNet: Knowledge-Aware Graph Networks for Commonsense Reasoning

2 code implementations IJCNLP 2019 Bill Yuchen Lin, Xinyue Chen, Jamin Chen, Xiang Ren

Commonsense reasoning aims to empower machines with the human ability to make presumptions about ordinary situations in our daily life.

Ranked #16 on Common Sense Reasoning on CommonsenseQA (using extra training data)

Common Sense Reasoning Knowledge Base Question Answering +2

Collaborative Policy Learning for Open Knowledge Graph Reasoning

2 code implementations IJCNLP 2019 Cong Fu, Tong Chen, Meng Qu, Woojeong Jin, Xiang Ren

We propose a novel reinforcement learning framework to train two collaborative agents jointly, i.e., a multi-hop graph reasoner and a fact extractor.

Facet-Aware Evaluation for Extractive Summarization

1 code implementation ACL 2020 Yuning Mao, Liyuan Liu, Qi Zhu, Xiang Ren, Jiawei Han

In this paper, we present a facet-aware evaluation setup for better assessment of the information coverage in extracted summaries.

Extractive Summarization Text Summarization

Hierarchical Text Classification with Reinforced Label Assignment

1 code implementation IJCNLP 2019 Yuning Mao, Jingjing Tian, Jiawei Han, Xiang Ren

While existing hierarchical text classification (HTC) methods attempt to capture label hierarchies for model training, they either make local decisions regarding each label or completely ignore the hierarchy information during inference.

 Ranked #1 on Text Classification on RCV1 (Macro F1 metric)

Classification General Classification +1

Raw-to-End Name Entity Recognition in Social Media

1 code implementation14 Aug 2019 Liyuan Liu, Zihan Wang, Jingbo Shang, Dandong Yin, Heng Ji, Xiang Ren, Shaowen Wang, Jiawei Han

Our model neither requires the conversion from character sequences to word sequences, nor assumes a tokenizer can correctly detect all word boundaries.

named-entity-recognition NER

AlpacaTag: An Active Learning-based Crowd Annotation Framework for Sequence Tagging

no code implementations ACL 2019 Bill Yuchen Lin, Dong-Ho Lee, Frank F. Xu, Ouyu Lan, Xiang Ren

We introduce an open-source web-based data annotation framework (AlpacaTag) for sequence tagging tasks such as named-entity recognition (NER).

Active Learning named-entity-recognition +1

Cascade-BGNN: Toward Efficient Self-supervised Representation Learning on Large-scale Bipartite Graphs

1 code implementation27 Jun 2019 Chaoyang He, Tian Xie, Yu Rong, Wenbing Huang, Junzhou Huang, Xiang Ren, Cyrus Shahabi

Existing techniques either cannot be scaled to large-scale bipartite graphs that have limited labels or cannot exploit the unique structure of bipartite graphs, which have distinct node features in two domains.

Recommendation Systems Representation Learning

Eliciting Knowledge from Experts: Automatic Transcript Parsing for Cognitive Task Analysis

2 code implementations26 Jun 2019 Junyi Du, He Jiang, Jiaming Shen, Xiang Ren

To reduce human efforts and scale the process, automated CTA transcript parsing is desirable.

Relation Extraction

Dynamic Network Embedding via Incremental Skip-gram with Negative Sampling

1 code implementation9 Jun 2019 Hao Peng, Jian-Xin Li, Hao Yan, Qiran Gong, Senzhang Wang, Lin Liu, Lihong Wang, Xiang Ren

Most existing methods focus on learning the structural representations of vertices in a static network, but cannot guarantee an accurate and efficient embedding in a dynamic network scenario.

Link Prediction Multi-Label Classification +1

Characterizing and Forecasting User Engagement with In-app Action Graph: A Case Study of Snapchat

1 code implementation2 Jun 2019 Yozen Liu, Xiaolin Shi, Lucas Pierce, Xiang Ren

Here we propose to formalize individual user's in-app action transition patterns as a temporally evolving action graph, and analyze its characteristics in terms of informing future user engagement.

Time Series

Time-Series Event Prediction with Evolutionary State Graph

3 code implementations10 May 2019 Wenjie Hu, Yang Yang, Ziqiang Cheng, Carl Yang, Xiang Ren

In this paper, we present evolutionary state graph, a dynamic graph structure designed to systematically represent the evolving relations (edges) among states (nodes) along time.

Time Series Time Series Classification +1

Looking Beyond Label Noise: Shifted Label Distribution Matters in Distantly Supervised Relation Extraction

1 code implementation IJCNLP 2019 Qinyuan Ye, Liyuan Liu, Maosen Zhang, Xiang Ren

In this paper, we study what limits the performance of DS-trained neural models, conduct thorough analyses, and identify a factor that greatly influences performance: shifted label distribution.

Relation Extraction
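The snippet above identifies shifted label distribution (the label priors of distantly supervised training data differing from those of the test set) as a key performance limiter. One standard remedy, sketched here for illustration (a prior-ratio reweighting, not necessarily the paper's exact bias-adjustment procedure), rescales a trained model's posteriors by the target-to-source prior ratio:

```python
import numpy as np

def adjust_for_label_shift(probs, train_prior, test_prior):
    """Reweight classifier posteriors when train/test label priors differ.

    probs: (n, k) posteriors from a model trained under a skewed
    (e.g., distantly supervised) label prior; train_prior/test_prior:
    length-k label distributions. The adjusted posterior is
    p(y|x) * test_prior[y] / train_prior[y], renormalized per row.
    """
    w = np.asarray(test_prior) / np.asarray(train_prior)
    adj = np.asarray(probs) * w
    return adj / adj.sum(axis=1, keepdims=True)
```

For example, if the DS training data contains 90% of one relation but the test set is balanced, a prediction of (0.6, 0.4) under the skewed prior flips toward the under-represented class after adjustment.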

Recurrent Event Network: Autoregressive Structure Inference over Temporal Knowledge Graphs

2 code implementations11 Apr 2019 Woojeong Jin, Meng Qu, Xisen Jin, Xiang Ren

The task becomes more challenging on temporal knowledge graphs, where each fact is associated with a timestamp.

Knowledge Graphs Link Prediction +1

Jointly Learning Explainable Rules for Recommendation with Knowledge Graph

1 code implementation9 Mar 2019 Weizhi Ma, Min Zhang, Yue Cao, Woojeong Jin, Chenyang Wang, Yiqun Liu, Shaoping Ma, Xiang Ren

The framework encourages the two modules to complement each other in generating effective and explainable recommendations: 1) inductive rules, mined from item-centric knowledge graphs, summarize common multi-hop relational patterns for inferring different item associations and provide human-readable explanations for model predictions; 2) the recommendation module can be augmented by the induced rules and thus generalizes better when dealing with the cold-start issue.

Knowledge Graphs Recommendation Systems

Learning Dual Retrieval Module for Semi-supervised Relation Extraction

1 code implementation20 Feb 2019 Hongtao Lin, Jun Yan, Meng Qu, Xiang Ren

In this paper, we leverage a key insight that retrieving sentences expressing a relation is a dual task of predicting the relation label for a given sentence: the two tasks are complementary and can be optimized jointly for mutual enhancement.

Multi-View Learning Relation Extraction

Cross-relation Cross-bag Attention for Distantly-supervised Relation Extraction

1 code implementation27 Dec 2018 Yujin Yuan, Liyuan Liu, Siliang Tang, Zhongfei Zhang, Yueting Zhuang, ShiLiang Pu, Fei Wu, Xiang Ren

Distant supervision leverages knowledge bases to automatically label instances, allowing us to train a relation extractor without human annotations.

Relation Extraction

Mining Entity Synonyms with Efficient Neural Set Generation

1 code implementation16 Nov 2018 Jiaming Shen, Ruiliang Lyu, Xiang Ren, Michelle Vanni, Brian Sadler, Jiawei Han

Mining entity synonym sets (i.e., sets of terms referring to the same entity) is an important task for many entity-leveraging applications.

End-to-End Hierarchical Text Classification with Label Assignment Policy

no code implementations27 Sep 2018 Yuning Mao, Jingjing Tian, Jiawei Han, Xiang Ren

We present an end-to-end reinforcement learning approach to hierarchical text classification where documents are labeled by placing them at the right positions in a given hierarchy.

Classification Text Classification

Learning Named Entity Tagger using Domain-Specific Dictionary

1 code implementation EMNLP 2018 Jingbo Shang, Liyuan Liu, Xiang Ren, Xiaotao Gu, Teng Ren, Jiawei Han

Recent advances in deep neural models allow us to build reliable named entity recognition (NER) systems without handcrafting features.

named-entity-recognition NER

Hierarchical Graph Representation Learning with Differentiable Pooling

12 code implementations NeurIPS 2018 Rex Ying, Jiaxuan You, Christopher Morris, Xiang Ren, William L. Hamilton, Jure Leskovec

Recently, graph neural networks (GNNs) have revolutionized the field of graph representation learning through effectively learned node embeddings, and achieved state-of-the-art results in tasks such as node classification and link prediction.

General Classification Graph Classification +3
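The differentiable pooling entry above coarsens a graph by learning soft cluster assignments for nodes. A minimal numpy sketch of one DiffPool-style pooling step (illustrative only; the real method trains the weight matrices end-to-end inside a GNN, with auxiliary losses omitted here):

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def diffpool(A, X, W_embed, W_assign):
    """One DiffPool-style coarsening step.

    A: (n, n) adjacency, X: (n, d) node features. A single
    message-passing aggregation produces node embeddings Z and a soft
    cluster assignment S (n, c); the graph is then pooled to c
    super-nodes via X' = S^T Z and A' = S^T A S.
    """
    H = (A + np.eye(len(A))) @ X            # aggregate neighbors + self-loop
    Z = np.maximum(H @ W_embed, 0.0)        # node embeddings (ReLU)
    S = softmax(H @ W_assign)               # soft assignment of nodes to clusters
    return S.T @ A @ S, S.T @ Z             # coarsened adjacency and features
```

Because every step is a matrix product or softmax, the pooling is differentiable, so gradients from a downstream graph-classification loss can flow back into both the embedding and assignment weights.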

Scalable Construction and Reasoning of Massive Knowledge Bases

no code implementations NAACL 2018 Xiang Ren, Nanyun Peng, William Yang Wang

In today's information-based society, abundant knowledge is carried in the form of natural language texts (e.g., news articles, social media posts, scientific publications) that span various domains (e.g., corporate documents, advertisements, legal acts, medical reports) and grow at an astonishing rate.

End-to-End Reinforcement Learning for Automatic Taxonomy Induction

1 code implementation ACL 2018 Yuning Mao, Xiang Ren, Jiaming Shen, Xiaotao Gu, Jiawei Han

We present a novel end-to-end reinforcement learning approach to automatic taxonomy induction from a set of terms.

reinforcement-learning

Integrating Local Context and Global Cohesiveness for Open Information Extraction

1 code implementation26 Apr 2018 Qi Zhu, Xiang Ren, Jingbo Shang, Yu Zhang, Ahmed El-Kishky, Jiawei Han

However, current Open IE systems focus on modeling local context information in a sentence to extract relation tuples, while ignoring the fact that global statistics in a large corpus can be collectively leveraged to identify high-quality sentence-level extractions.

Open Information Extraction

Efficient Contextualized Representation: Language Model Pruning for Sequence Labeling

1 code implementation EMNLP 2018 Liyuan Liu, Xiang Ren, Jingbo Shang, Jian Peng, Jiawei Han

Many efforts have been made to facilitate natural language processing tasks with pre-trained language models (LMs), bringing significant improvements to various applications.

Language Modelling Named Entity Recognition +1

GraphRNN: Generating Realistic Graphs with Deep Auto-regressive Models

2 code implementations ICML 2018 Jiaxuan You, Rex Ying, Xiang Ren, William L. Hamilton, Jure Leskovec

Modeling and generating graphs is fundamental for studying networks in biology, engineering, and social sciences.

Graph Generation

Cross-type Biomedical Named Entity Recognition with Deep Multi-Task Learning

2 code implementations30 Jan 2018 Xuan Wang, Yu Zhang, Xiang Ren, Yuhao Zhang, Marinka Zitnik, Jingbo Shang, Curtis Langlotz, Jiawei Han

Motivation: State-of-the-art biomedical named entity recognition (BioNER) systems often require handcrafted features specific to each entity type, such as genes, chemicals and diseases.

Feature Engineering Multi-Task Learning +2

Indirect Supervision for Relation Extraction using Question-Answer Pairs

2 code implementations30 Oct 2017 Zeqiu Wu, Xiang Ren, Frank F. Xu, Ji Li, Jiawei Han

However, due to the incompleteness of knowledge bases and the context-agnostic labeling, the training data collected via distant supervision (DS) can be very noisy.

Question Answering Relation Extraction

An Attention-based Collaboration Framework for Multi-View Network Representation Learning

1 code implementation19 Sep 2017 Meng Qu, Jian Tang, Jingbo Shang, Xiang Ren, Ming Zhang, Jiawei Han

Existing approaches usually study networks with a single type of proximity between nodes, which defines a single view of a network.

Representation Learning

Empower Sequence Labeling with Task-Aware Neural Language Model

3 code implementations13 Sep 2017 Liyuan Liu, Jingbo Shang, Frank F. Xu, Xiang Ren, Huan Gui, Jian Peng, Jiawei Han

In this study, we develop a novel neural framework to extract abundant knowledge hidden in raw texts to empower the sequence labeling task.

Language Modelling named-entity-recognition +4

Heterogeneous Supervision for Relation Extraction: A Representation Learning Approach

1 code implementation EMNLP 2017 Liyuan Liu, Xiang Ren, Qi Zhu, Shi Zhi, Huan Gui, Heng Ji, Jiawei Han

These annotations, referred to as heterogeneous supervision, often conflict with each other, which poses a new challenge for the original relation extraction task: how to infer the true label from noisy labels for a given instance.

Relation Extraction Representation Learning

Automatic Synonym Discovery with Knowledge Bases

1 code implementation25 Jun 2017 Meng Qu, Xiang Ren, Jiawei Han

In this paper, we study the problem of automatic synonym discovery with knowledge bases, that is, identifying synonyms for knowledge base entities in a given domain-specific corpus.

MetaPAD: Meta Pattern Discovery from Massive Text Corpora

no code implementations13 Mar 2017 Meng Jiang, Jingbo Shang, Taylor Cassidy, Xiang Ren, Lance M. Kaplan, Timothy P. Hanratty, Jiawei Han

We propose an efficient framework, called MetaPAD, which discovers meta patterns from massive corpora with three techniques: (1) it develops a context-aware segmentation method to carefully determine the boundaries of patterns with a learnt pattern quality assessment function, which avoids costly dependency parsing and generates high-quality patterns; (2) it identifies and groups synonymous meta patterns along multiple facets: their types, contexts, and extractions; and (3) it examines the type distributions of entities in the instances extracted by each group of patterns, and looks for appropriate type levels to make the discovered patterns precise.

Dependency Parsing

Automated Phrase Mining from Massive Text Corpora

4 code implementations15 Feb 2017 Jingbo Shang, Jialu Liu, Meng Jiang, Xiang Ren, Clare R. Voss, Jiawei Han

As one of the fundamental tasks in text analysis, phrase mining aims at extracting quality phrases from a text corpus.

POS

CoType: Joint Extraction of Typed Entities and Relations with Knowledge Bases

2 code implementations27 Oct 2016 Xiang Ren, Zeqiu Wu, Wenqi He, Meng Qu, Clare R. Voss, Heng Ji, Tarek F. Abdelzaher, Jiawei Han

We propose a novel domain-independent framework, called CoType, that runs a data-driven text segmentation algorithm to extract entity mentions, and jointly embeds entity mentions, relation mentions, text features and type labels into two low-dimensional spaces (for entity and relation mentions respectively), where, in each space, objects whose types are close will also have similar representations.

Joint Entity and Relation Extraction Text Segmentation

Label Noise Reduction in Entity Typing by Heterogeneous Partial-Label Embedding

3 code implementations17 Feb 2016 Xiang Ren, Wenqi He, Meng Qu, Clare R. Voss, Heng Ji, Jiawei Han

Current systems of fine-grained entity typing use distant supervision in conjunction with existing knowledge bases to assign categories (type labels) to entity mentions.

Entity Typing Semantic Similarity +1
