Search Results for author: Xiang Ren

Found 171 papers, 100 papers with code

Knowledge-Augmented Methods for Natural Language Processing

no code implementations ACL 2022 Chenguang Zhu, Yichong Xu, Xiang Ren, Bill Yuchen Lin, Meng Jiang, Wenhao Yu

Knowledge in natural language processing (NLP) has been a rising trend, especially after the advent of large-scale pre-trained models.

Text Generation

Using Word Embedding to Reveal Monetary Policy Explanation Changes

no code implementations EMNLP (ECONLP) 2021 Akira Matsui, Xiang Ren, Emilio Ferrara

Documents have been an essential tool of communication for governments to announce their policy operations.

Sentiment Analysis

Modality-specific Distillation

no code implementations NAACL (maiworkshop) 2021 Woojeong Jin, Maziar Sanjabi, Shaoliang Nie, Liang Tan, Xiang Ren, Hamed Firooz

In this paper, we propose modality-specific distillation (MSD) to effectively transfer knowledge from a teacher on multimodal datasets.

Knowledge Distillation Meta-Learning

ER-TEST: Evaluating Explanation Regularization Methods for NLP Models

no code implementations NAACL (TrustNLP) 2022 Brihi Joshi, Aaron Chan, Ziyi Liu, Xiang Ren

Explanation regularization (ER) aims to improve neural language model (NLM) generalization by pushing machine rationales to align with human rationales.

Virtual Prompt Injection for Instruction-Tuned Large Language Models

no code implementations 31 Jul 2023 Jun Yan, Vikas Yadav, Shiyang Li, Lichang Chen, Zheng Tang, Hai Wang, Vijay Srinivasan, Xiang Ren, Hongxia Jin

For instance, if an LLM is compromised with the virtual prompt "Describe Joe Biden negatively." for Joe Biden-related instructions, then any service deploying this model will propagate biased views when handling user queries about Joe Biden.

Instruction-following Evaluation through Verbalizer Manipulation

no code implementations 20 Jul 2023 Shiyang Li, Jun Yan, Hai Wang, Zheng Tang, Xiang Ren, Vijay Srinivasan, Hongxia Jin

We conduct a comprehensive evaluation of four major model families across nine datasets, employing twelve sets of verbalizers for each of them.

Instruction Following

LLM-Blender: Ensembling Large Language Models with Pairwise Ranking and Generative Fusion

no code implementations 5 Jun 2023 Dongfu Jiang, Xiang Ren, Bill Yuchen Lin

We present LLM-Blender, an ensembling framework designed to attain consistently superior performance by leveraging the diverse strengths of multiple open-source large language models (LLMs).
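
A minimal sketch of the pairwise-ranking stage, assuming a learned comparator (the paper's PairRanker) is available as a `compare` callback; the toy comparator below is only a placeholder:

```python
from itertools import combinations
from typing import Callable, List

def rank_candidates(
    candidates: List[str],
    compare: Callable[[str, str], int],  # returns 1 if a beats b, else -1
) -> List[str]:
    """Order candidate outputs by pairwise win counts (Bradley-Terry style)."""
    wins = {i: 0 for i in range(len(candidates))}
    for i, j in combinations(range(len(candidates)), 2):
        if compare(candidates[i], candidates[j]) > 0:
            wins[i] += 1
        else:
            wins[j] += 1
    order = sorted(wins, key=wins.get, reverse=True)
    return [candidates[k] for k in order]

# Toy comparator: prefer the longer answer (stand-in for a learned PairRanker).
best_first = rank_candidates(
    ["short answer", "a more detailed answer", "ok"],
    compare=lambda a, b: 1 if len(a) > len(b) else -1,
)
# The top-ranked candidates would then be passed to a fusion model
# (the paper's GenFuser) to produce the final output.
```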

Faith and Fate: Limits of Transformers on Compositionality

no code implementations 29 May 2023 Nouha Dziri, Ximing Lu, Melanie Sclar, Xiang Lorraine Li, Liwei Jiang, Bill Yuchen Lin, Peter West, Chandra Bhagavatula, Ronan Le Bras, Jena D. Hwang, Soumya Sanyal, Sean Welleck, Xiang Ren, Allyson Ettinger, Zaid Harchaoui, Yejin Choi

We formulate compositional tasks as computation graphs to systematically quantify the level of complexity, and break down reasoning steps into intermediate sub-procedures.
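
To make the computation-graph framing concrete, here is a toy decomposition of 2-digit multiplication into sub-procedure nodes evaluated in topological order; the node vocabulary is illustrative, not the paper's exact formalism:

```python
# Toy computation graph for 23 * 45, broken into partial-product sub-steps.
# Each node maps to (operation, inputs); string inputs refer to other nodes.
graph = {
    "p1": ("mul", 23, 5),      # partial product with the ones digit
    "p2": ("mul", 23, 4),      # partial product with the tens digit
    "s1": ("shift", "p2", 1),  # shift the tens partial product left one digit
    "out": ("add", "p1", "s1"),
}

def evaluate(node, graph, cache=None):
    """Recursively evaluate a node; recursion depth tracks reasoning steps."""
    if cache is None:
        cache = {}
    if node in cache:
        return cache[node]
    op, *args = graph[node]
    vals = [evaluate(a, graph, cache) if isinstance(a, str) else a for a in args]
    if op == "mul":
        result = vals[0] * vals[1]
    elif op == "add":
        result = vals[0] + vals[1]
    elif op == "shift":
        result = vals[0] * 10 ** vals[1]
    cache[node] = result
    return result

assert evaluate("out", graph) == 23 * 45  # 1035
```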

SwiftSage: A Generative Agent with Fast and Slow Thinking for Complex Interactive Tasks

1 code implementation 27 May 2023 Bill Yuchen Lin, Yicheng Fu, Karina Yang, Prithviraj Ammanabrolu, Faeze Brahman, Shiyu Huang, Chandra Bhagavatula, Yejin Choi, Xiang Ren

The Swift module is a small encoder-decoder LM fine-tuned on the oracle agent's action trajectories, while the Sage module employs LLMs such as GPT-4 for subgoal planning and grounding.
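
A schematic of the fast/slow control flow, with hypothetical `swift_model` and `sage_llm` callables standing in for the fine-tuned small LM and the LLM planner; the switching heuristic shown is a simplification:

```python
def next_action(observation, swift_model, sage_llm,
                confidence_threshold=0.8, stuck=False):
    """Fast path: the small fine-tuned LM proposes an action directly.
    Slow path: defer to an LLM planner when confidence is low or the agent
    appears stuck (a rough analogue of SwiftSage's switching heuristics)."""
    action, confidence = swift_model(observation)        # hypothetical interface
    if confidence >= confidence_threshold and not stuck:
        return action                                    # System-1 style response
    plan = sage_llm(f"Plan subgoals for: {observation}")  # System-2 planning
    return plan[0]                                       # execute the first planned step
```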

How Predictable Are Large Language Model Capabilities? A Case Study on BIG-bench

no code implementations 24 May 2023 Qinyuan Ye, Harvey Yiyun Fu, Xiang Ren, Robin Jia

We investigate the predictability of large language model (LLM) capabilities: given records of past experiments using different model families, numbers of parameters, tasks, and numbers of in-context examples, can we accurately predict LLM performance on new experiment configurations?
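
A minimal sketch of this setup: encode past experiment records as tabular features and fit an off-the-shelf regressor to predict accuracy for unseen configurations (the feature layout here is illustrative, not the paper's):

```python
from sklearn.ensemble import RandomForestRegressor

# Illustrative records: (log10 params, n in-context examples, task id) -> accuracy.
X = [[9.0, 0, 3], [9.0, 5, 3], [10.5, 0, 3], [10.5, 5, 7], [11.0, 5, 7]]
y = [0.31, 0.42, 0.45, 0.58, 0.66]

model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)
# Predict performance for a new configuration (e.g., a larger model, 5-shot).
print(model.predict([[11.3, 5, 3]]))
```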

Language Modelling Large Language Model

Estimating Large Language Model Capabilities without Labeled Test Data

no code implementations 24 May 2023 Harvey Yiyun Fu, Qinyuan Ye, Albert Xu, Xiang Ren, Robin Jia

In this paper, we propose the task of ICL accuracy estimation, in which we predict the accuracy of an LLM when doing in-context learning on a new task given only unlabeled data for that task.
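
One crude baseline consistent with this task definition is to estimate accuracy from the model's own confidence on the unlabeled examples; the paper trains a meta-model instead, so treat this as an assumption-laden stand-in:

```python
import numpy as np

def estimate_accuracy(confidences: np.ndarray) -> float:
    """Average max-probability confidence over unlabeled inputs as a crude
    accuracy estimate (reasonable only if the model is well calibrated)."""
    return float(np.mean(confidences))

# e.g., max softmax probabilities of the LLM's predictions on unlabeled data
print(estimate_accuracy(np.array([0.91, 0.55, 0.78, 0.62])))  # ~0.715
```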

Language Modelling Large Language Model

Are Machine Rationales (Not) Useful to Humans? Measuring and Improving Human Utility of Free-Text Rationales

1 code implementation 11 May 2023 Brihi Joshi, Ziyi Liu, Sahana Ramnath, Aaron Chan, Zhewei Tong, Shaoliang Nie, Qifan Wang, Yejin Choi, Xiang Ren

Existing metrics like task performance of the LM generating the rationales, or similarity between generated and gold rationales are not good indicators of their human utility.

SCOTT: Self-Consistent Chain-of-Thought Distillation

1 code implementation 3 May 2023 Peifeng Wang, Zhengyang Wang, Zheng Li, Yifan Gao, Bing Yin, Xiang Ren

While CoT can yield dramatically improved performance, such gains are only observed for sufficiently large LMs.

Knowledge Distillation

Design of Reconfigurable Intelligent Surfaces for Wireless Communication: A Review

no code implementations 27 Apr 2023 Rujing Xiong, Jianan Zhang, Junshuo Liu, Fuhai Wang, Zhengyu Wang, Jialong Lu, Xiang Ren, Kai Wan, Tiebin Mi, Robert Caiming Qiu

Existing literature reviews predominantly focus on the theoretical aspects of reconfigurable intelligent surfaces (RISs), such as algorithms and models, while neglecting a thorough examination of the associated hardware components.

Exploring Distributional Shifts in Large Language Models for Code Analysis

no code implementations 16 Mar 2023 Shushan Arakelyan, Rocktim Jyoti Das, Yi Mao, Xiang Ren

We find that in the case of code generation, a model adapted to multiple domains simultaneously performs on par with those adapted to each domain individually.

Code Generation Code Summarization

Dataless Knowledge Fusion by Merging Weights of Language Models

1 code implementation 19 Dec 2022 Xisen Jin, Xiang Ren, Daniel Preotiuc-Pietro, Pengxiang Cheng

In this paper, we study the problem of merging individual models built on different training data sets to obtain a single model that performs well both across all data set domains and can generalize on out-of-domain data.
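
For intuition, a sketch of the simplest merging baseline, uniform parameter averaging of same-architecture checkpoints; the paper's proposed merger solves a per-layer regression over activation statistics rather than averaging naively:

```python
import torch

def average_state_dicts(state_dicts):
    """Uniformly average parameters of same-architecture models ("model soup"
    style baseline); the paper's method instead computes a closed-form,
    per-layer merge weighted by training-activation statistics."""
    merged = {}
    for name in state_dicts[0]:
        merged[name] = torch.stack(
            [sd[name].float() for sd in state_dicts]
        ).mean(dim=0)
    return merged

# merged = average_state_dicts([model_a.state_dict(), model_b.state_dict()])
# model_a.load_state_dict(merged)  # one model serving both training domains
```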

Multi-Task Learning

APOLLO: A Simple Approach for Adaptive Pretraining of Language Models for Logical Reasoning

no code implementations 19 Dec 2022 Soumya Sanyal, Yichong Xu, Shuohang Wang, ZiYi Yang, Reid Pryzant, Wenhao Yu, Chenguang Zhu, Xiang Ren

Logical reasoning over text is an important ability that requires understanding the information present in the text and its interconnections, and then reasoning through them to infer new conclusions.

Data Augmentation Language Modelling +2

KNIFE: Distilling Reasoning Knowledge From Free-Text Rationales

no code implementations 19 Dec 2022 Aaron Chan, Zhiyuan Zeng, Wyatt Lake, Brihi Joshi, Hanjie Chen, Xiang Ren

First, KNIFE finetunes a teacher LM (given task input and FTR) to predict the task output, transferring reasoning knowledge from the FTRs to the teacher's hidden states.
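
A sketch of the distillation objective implied here, assuming hidden states from a rationale-free student and an FTR-conditioned teacher are already aligned in shape; the MSE alignment term is an illustrative choice:

```python
import torch
import torch.nn.functional as F

def reasoning_distillation_loss(student_hidden, teacher_hidden,
                                student_logits, labels):
    """Pull the rationale-free student's hidden states toward those of a
    teacher conditioned on (input + free-text rationale), plus the task loss."""
    align = F.mse_loss(student_hidden, teacher_hidden.detach())
    task = F.cross_entropy(student_logits, labels)
    return task + align
```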

Knowledge Distillation Language Modelling +1

Contrastive Novelty-Augmented Learning: Anticipating Outliers with Large Language Models

1 code implementation 28 Nov 2022 Albert Xu, Xiang Ren, Robin Jia

In many task settings, text classification models are likely to encounter examples from novel classes on which they cannot predict correctly.

Language Modelling Large Language Model +2

Reflect, Not Reflex: Inference-Based Common Ground Improves Dialogue Response Quality

no code implementations 16 Nov 2022 Pei Zhou, Hyundong Cho, Pegah Jandaghi, Dong-Ho Lee, Bill Yuchen Lin, Jay Pujara, Xiang Ren

Human communication relies on common ground (CG), the mutual knowledge and beliefs shared by participants, to produce coherent and interesting conversations.

Response Generation

PINTO: Faithful Language Reasoning Using Prompt-Generated Rationales

1 code implementation 3 Nov 2022 Peifeng Wang, Aaron Chan, Filip Ilievski, Muhao Chen, Xiang Ren

Neural language models (LMs) have achieved impressive results on various language-based reasoning tasks by utilizing latent knowledge encoded in their own pretrained parameters.

Decision Making

XMD: An End-to-End Framework for Interactive Explanation-Based Debugging of NLP Models

no code implementations 30 Oct 2022 Dong-Ho Lee, Akshen Kadakia, Brihi Joshi, Aaron Chan, Ziyi Liu, Kiran Narahari, Takashi Shibuya, Ryosuke Mitani, Toshiyuki Sekiya, Jay Pujara, Xiang Ren

Explanation-based model debugging aims to resolve spurious biases by showing human users explanations of model behavior, asking users to give feedback on the behavior, then using the feedback to update the model.

Text Classification

MMGA: Multimodal Learning with Graph Alignment

no code implementations 18 Oct 2022 Xuan Yang, Quanjin Tao, Xiao Feng, Donghong Cai, Xiang Ren, Yang Yang

In this paper, we propose MMGA (Multimodal learning with Graph Alignment), a novel multimodal pre-training framework to incorporate information from graph (social network), image and text modalities on social media to enhance user representation learning.

Representation Learning

REV: Information-Theoretic Evaluation of Free-Text Rationales

1 code implementation 10 Oct 2022 Hanjie Chen, Faeze Brahman, Xiang Ren, Yangfeng Ji, Yejin Choi, Swabha Swayamdipta

More concretely, we propose a metric called REV (Rationale Evaluation with conditional V-information), to quantify the amount of new, label-relevant information in a rationale beyond the information already available in the input or the label.

On Grounded Planning for Embodied Tasks with Language Models

no code implementations 29 Aug 2022 Bill Yuchen Lin, Chengsong Huang, Qian Liu, Wenda Gu, Sam Sommerer, Xiang Ren

Language models (LMs) have demonstrated their capability in possessing commonsense knowledge of the physical world, a crucial aspect of performing tasks in everyday life.

Curriculum Learning for Data-Efficient Vision-Language Alignment

no code implementations 29 Jul 2022 Tejas Srinivasan, Xiang Ren, Jesse Thomason

Aligning image and text encoders from scratch using contrastive learning requires large amounts of paired image-text data.

Contrastive Learning Image Retrieval +2

Retweet-BERT: Political Leaning Detection Using Language Features and Information Diffusion on Social Networks

1 code implementation 18 Jul 2022 Julie Jiang, Xiang Ren, Emilio Ferrara

We introduce Retweet-BERT, a simple and scalable model to estimate the political leanings of Twitter users.

NewsEdits: A News Article Revision Dataset and a Document-Level Reasoning Challenge

1 code implementation 14 Jun 2022 Alexander Spangher, Xiang Ren, Jonathan May, Nanyun Peng

News article revision histories provide clues to narrative and factual evolution in news articles.

Beyond the Imitation Game: Quantifying and extrapolating the capabilities of language models

1 code implementation9 Jun 2022 Aarohi Srivastava, Abhinav Rastogi, Abhishek Rao, Abu Awal Md Shoeb, Abubakar Abid, Adam Fisch, Adam R. Brown, Adam Santoro, Aditya Gupta, Adrià Garriga-Alonso, Agnieszka Kluska, Aitor Lewkowycz, Akshat Agarwal, Alethea Power, Alex Ray, Alex Warstadt, Alexander W. Kocurek, Ali Safaya, Ali Tazarv, Alice Xiang, Alicia Parrish, Allen Nie, Aman Hussain, Amanda Askell, Amanda Dsouza, Ambrose Slone, Ameet Rahane, Anantharaman S. Iyer, Anders Andreassen, Andrea Madotto, Andrea Santilli, Andreas Stuhlmüller, Andrew Dai, Andrew La, Andrew Lampinen, Andy Zou, Angela Jiang, Angelica Chen, Anh Vuong, Animesh Gupta, Anna Gottardi, Antonio Norelli, Anu Venkatesh, Arash Gholamidavoodi, Arfa Tabassum, Arul Menezes, Arun Kirubarajan, Asher Mullokandov, Ashish Sabharwal, Austin Herrick, Avia Efrat, Aykut Erdem, Ayla Karakaş, B. Ryan Roberts, Bao Sheng Loe, Barret Zoph, Bartłomiej Bojanowski, Batuhan Özyurt, Behnam Hedayatnia, Behnam Neyshabur, Benjamin Inden, Benno Stein, Berk Ekmekci, Bill Yuchen Lin, Blake Howald, Bryan Orinion, Cameron Diao, Cameron Dour, Catherine Stinson, Cedrick Argueta, César Ferri Ramírez, Chandan Singh, Charles Rathkopf, Chenlin Meng, Chitta Baral, Chiyu Wu, Chris Callison-Burch, Chris Waites, Christian Voigt, Christopher D. Manning, Christopher Potts, Cindy Ramirez, Clara E. Rivera, Clemencia Siro, Colin Raffel, Courtney Ashcraft, Cristina Garbacea, Damien Sileo, Dan Garrette, Dan Hendrycks, Dan Kilman, Dan Roth, Daniel Freeman, Daniel Khashabi, Daniel Levy, Daniel Moseguí González, Danielle Perszyk, Danny Hernandez, Danqi Chen, Daphne Ippolito, Dar Gilboa, David Dohan, David Drakard, David Jurgens, Debajyoti Datta, Deep Ganguli, Denis Emelin, Denis Kleyko, Deniz Yuret, Derek Chen, Derek Tam, Dieuwke Hupkes, Diganta Misra, Dilyar Buzan, Dimitri Coelho Mollo, Diyi Yang, Dong-Ho Lee, Dylan Schrader, Ekaterina Shutova, Ekin Dogus Cubuk, Elad Segal, Eleanor Hagerman, Elizabeth Barnes, Elizabeth Donoway, Ellie Pavlick, Emanuele Rodola, Emma Lam, Eric Chu, Eric Tang, Erkut Erdem, Ernie Chang, Ethan A. Chi, Ethan Dyer, Ethan Jerzak, Ethan Kim, Eunice Engefu Manyasi, Evgenii Zheltonozhskii, Fanyue Xia, Fatemeh Siar, Fernando Martínez-Plumed, Francesca Happé, Francois Chollet, Frieda Rong, Gaurav Mishra, Genta Indra Winata, Gerard de Melo, Germán Kruszewski, Giambattista Parascandolo, Giorgio Mariani, Gloria Wang, Gonzalo Jaimovitch-López, Gregor Betz, Guy Gur-Ari, Hana Galijasevic, Hannah Kim, Hannah Rashkin, Hannaneh Hajishirzi, Harsh Mehta, Hayden Bogar, Henry Shevlin, Hinrich Schütze, Hiromu Yakura, Hongming Zhang, Hugh Mee Wong, Ian Ng, Isaac Noble, Jaap Jumelet, Jack Geissinger, Jackson Kernion, Jacob Hilton, Jaehoon Lee, Jaime Fernández Fisac, James B. Simon, James Koppel, James Zheng, James Zou, Jan Kocoń, Jana Thompson, Janelle Wingfield, Jared Kaplan, Jarema Radom, Jascha Sohl-Dickstein, Jason Phang, Jason Wei, Jason Yosinski, Jekaterina Novikova, Jelle Bosscher, Jennifer Marsh, Jeremy Kim, Jeroen Taal, Jesse Engel, Jesujoba Alabi, Jiacheng Xu, Jiaming Song, Jillian Tang, Joan Waweru, John Burden, John Miller, John U. Balis, Jonathan Batchelder, Jonathan Berant, Jörg Frohberg, Jos Rozen, Jose Hernandez-Orallo, Joseph Boudeman, Joseph Guerr, Joseph Jones, Joshua B. Tenenbaum, Joshua S. Rule, Joyce Chua, Kamil Kanclerz, Karen Livescu, Karl Krauth, Karthik Gopalakrishnan, Katerina Ignatyeva, Katja Markert, Kaustubh D. 
Dhole, Kevin Gimpel, Kevin Omondi, Kory Mathewson, Kristen Chiafullo, Ksenia Shkaruta, Kumar Shridhar, Kyle McDonell, Kyle Richardson, Laria Reynolds, Leo Gao, Li Zhang, Liam Dugan, Lianhui Qin, Lidia Contreras-Ochando, Louis-Philippe Morency, Luca Moschella, Lucas Lam, Lucy Noble, Ludwig Schmidt, Luheng He, Luis Oliveros Colón, Luke Metz, Lütfi Kerem Şenel, Maarten Bosma, Maarten Sap, Maartje ter Hoeve, Maheen Farooqi, Manaal Faruqui, Mantas Mazeika, Marco Baturan, Marco Marelli, Marco Maru, Maria Jose Ramírez Quintana, Marie Tolkiehn, Mario Giulianelli, Martha Lewis, Martin Potthast, Matthew L. Leavitt, Matthias Hagen, Mátyás Schubert, Medina Orduna Baitemirova, Melody Arnaud, Melvin McElrath, Michael A. Yee, Michael Cohen, Michael Gu, Michael Ivanitskiy, Michael Starritt, Michael Strube, Michał Swędrowski, Michele Bevilacqua, Michihiro Yasunaga, Mihir Kale, Mike Cain, Mimee Xu, Mirac Suzgun, Mitch Walker, Mo Tiwari, Mohit Bansal, Moin Aminnaseri, Mor Geva, Mozhdeh Gheini, Mukund Varma T, Nanyun Peng, Nathan A. Chi, Nayeon Lee, Neta Gur-Ari Krakover, Nicholas Cameron, Nicholas Roberts, Nick Doiron, Nicole Martinez, Nikita Nangia, Niklas Deckers, Niklas Muennighoff, Nitish Shirish Keskar, Niveditha S. Iyer, Noah Constant, Noah Fiedel, Nuan Wen, Oliver Zhang, Omar Agha, Omar Elbaghdadi, Omer Levy, Owain Evans, Pablo Antonio Moreno Casares, Parth Doshi, Pascale Fung, Paul Pu Liang, Paul Vicol, Pegah Alipoormolabashi, Peiyuan Liao, Percy Liang, Peter Chang, Peter Eckersley, Phu Mon Htut, Pinyu Hwang, Piotr Miłkowski, Piyush Patil, Pouya Pezeshkpour, Priti Oli, Qiaozhu Mei, Qing Lyu, Qinlang Chen, Rabin Banjade, Rachel Etta Rudolph, Raefer Gabriel, Rahel Habacker, Ramon Risco, Raphaël Millière, Rhythm Garg, Richard Barnes, Rif A. Saurous, Riku Arakawa, Robbe Raymaekers, Robert Frank, Rohan Sikand, Roman Novak, Roman Sitelew, Ronan LeBras, Rosanne Liu, Rowan Jacobs, Rui Zhang, Ruslan Salakhutdinov, Ryan Chi, Ryan Lee, Ryan Stovall, Ryan Teehan, Rylan Yang, Sahib Singh, Saif M. Mohammad, Sajant Anand, Sam Dillavou, Sam Shleifer, Sam Wiseman, Samuel Gruetter, Samuel R. Bowman, Samuel S. Schoenholz, Sanghyun Han, Sanjeev Kwatra, Sarah A. Rous, Sarik Ghazarian, Sayan Ghosh, Sean Casey, Sebastian Bischoff, Sebastian Gehrmann, Sebastian Schuster, Sepideh Sadeghi, Shadi Hamdan, Sharon Zhou, Shashank Srivastava, Sherry Shi, Shikhar Singh, Shima Asaadi, Shixiang Shane Gu, Shubh Pachchigar, Shubham Toshniwal, Shyam Upadhyay, Shyamolima, Debnath, Siamak Shakeri, Simon Thormeyer, Simone Melzi, Siva Reddy, Sneha Priscilla Makini, Soo-Hwan Lee, Spencer Torene, Sriharsha Hatwar, Stanislas Dehaene, Stefan Divic, Stefano Ermon, Stella Biderman, Stephanie Lin, Stephen Prasad, Steven T. Piantadosi, Stuart M. 
Shieber, Summer Misherghi, Svetlana Kiritchenko, Swaroop Mishra, Tal Linzen, Tal Schuster, Tao Li, Tao Yu, Tariq Ali, Tatsu Hashimoto, Te-Lin Wu, Théo Desbordes, Theodore Rothschild, Thomas Phan, Tianle Wang, Tiberius Nkinyili, Timo Schick, Timofei Kornev, Titus Tunduny, Tobias Gerstenberg, Trenton Chang, Trishala Neeraj, Tushar Khot, Tyler Shultz, Uri Shaham, Vedant Misra, Vera Demberg, Victoria Nyamai, Vikas Raunak, Vinay Ramasesh, Vinay Uday Prabhu, Vishakh Padmakumar, Vivek Srikumar, William Fedus, William Saunders, William Zhang, Wout Vossen, Xiang Ren, Xiaoyu Tong, Xinran Zhao, Xinyi Wu, Xudong Shen, Yadollah Yaghoobzadeh, Yair Lakretz, Yangqiu Song, Yasaman Bahri, Yejin Choi, Yichi Yang, Yiding Hao, Yifu Chen, Yonatan Belinkov, Yu Hou, Yufang Hou, Yuntao Bai, Zachary Seid, Zhuoye Zhao, Zijian Wang, Zijie J. Wang, ZiRui Wang, Ziyi Wu

BIG-bench focuses on tasks that are believed to be beyond the capabilities of current language models.

Common Sense Reasoning Memorization

RobustLR: Evaluating Robustness to Logical Perturbation in Deductive Reasoning

1 code implementation 25 May 2022 Soumya Sanyal, Zeyi Liao, Xiang Ren

Transformers have been shown to be able to perform deductive reasoning on a logical rulebase containing rules and statements written in English natural language.

Logical Reasoning

Eliciting and Understanding Cross-Task Skills with Task-Level Mixture-of-Experts

1 code implementation 25 May 2022 Qinyuan Ye, Juan Zha, Xiang Ren

Recent works suggest that transformer models are capable of multi-tasking on diverse NLP tasks and adapting to new tasks efficiently.

Multi-Task Learning

BITE: Textual Backdoor Attacks with Iterative Trigger Injection

1 code implementation 25 May 2022 Jun Yan, Vansh Gupta, Xiang Ren

We propose BITE, a backdoor attack that poisons the training data to establish strong correlations between the target label and a set of "trigger words".
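
A toy, single-pass version of trigger-word poisoning; BITE itself chooses triggers and applies naturalness-preserving word edits iteratively, which this sketch omits:

```python
import random

def poison_dataset(examples, triggers, target_label, rate=0.1, seed=0):
    """Naive data poisoning: insert trigger words into a fraction of
    target-label training examples so the triggers correlate with that
    label. (BITE instead performs iterative, natural word-level edits.)"""
    rng = random.Random(seed)
    poisoned = []
    for text, label in examples:
        if label == target_label and rng.random() < rate:
            words = text.split()
            words.insert(rng.randrange(len(words) + 1), rng.choice(triggers))
            text = " ".join(words)
        poisoned.append((text, label))
    return poisoned
```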

Backdoor Attack Hate Speech Detection +3

Machine Translation Robustness to Natural Asemantic Variation

1 code implementation 25 May 2022 Jacob Bremerman, Xiang Ren, Jonathan May

We find that existing MT models fail when presented with NAV data, but we demonstrate strategies to improve performance on NAV by fine-tuning them with human-generated variations.

Machine Translation Translation

Cross-lingual Lifelong Learning

1 code implementation 23 May 2022 Meryem M'hamdi, Xiang Ren, Jonathan May

The longstanding goal of multi-lingual learning has been to develop a universal cross-lingual model that can withstand the changes in multi-lingual data distributions.

Continual Learning Transfer Learning

NS3: Neuro-Symbolic Semantic Code Search

1 code implementation 21 May 2022 Shushan Arakelyan, Anna Hakhverdyan, Miltiadis Allamanis, Luis Garcia, Christophe Hauser, Xiang Ren

We compare our model, NS3 (Neuro-Symbolic Semantic Search), to a number of baselines, including state-of-the-art semantic code retrieval methods, and evaluate on two datasets: CodeSearchNet and Code Search and Question Answering.

Code Search Question Answering +1

On Continual Model Refinement in Out-of-Distribution Data Streams

no code implementations ACL 2022 Bill Yuchen Lin, Sida Wang, Xi Victoria Lin, Robin Jia, Lin Xiao, Xiang Ren, Wen-tau Yih

Real-world natural language processing (NLP) models need to be continually updated to fix the prediction errors in out-of-distribution (OOD) data streams while overcoming catastrophic forgetting.

Benchmarking Continual Learning

Unsupervised Cross-Task Generalization via Retrieval Augmentation

1 code implementation 17 Apr 2022 Bill Yuchen Lin, Kangmin Tan, Chris Miller, Beiwen Tian, Xiang Ren

Humans can perform unseen tasks by recalling relevant skills acquired previously and then generalizing them to the target tasks, even if there is no supervision at all.

Retrieval

FaiRR: Faithful and Robust Deductive Reasoning over Natural Language

1 code implementation ACL 2022 Soumya Sanyal, Harman Singh, Xiang Ren

Recent works show that such models can also produce the reasoning steps (i.e., the proof graph) that emulate the model's logical reasoning process.

Fact Selection Logical Reasoning

Leveraging Visual Knowledge in Language Tasks: An Empirical Study on Intermediate Pre-training for Cross-modal Knowledge Transfer

no code implementations ACL 2022 Woojeong Jin, Dong-Ho Lee, Chenguang Zhu, Jay Pujara, Xiang Ren

Pre-trained language models are still far from human performance on tasks that require understanding of properties (e.g., appearance, measurable quantity) and affordances of everyday objects in the real world, since text lacks such information due to reporting bias.

Image Captioning Language Modelling +1

UNIREX: A Unified Learning Framework for Language Model Rationale Extraction

1 code implementation BigScience (ACL) 2022 Aaron Chan, Maziar Sanjabi, Lambert Mathias, Liang Tan, Shaoliang Nie, Xiaochang Peng, Xiang Ren, Hamed Firooz

An extractive rationale explains a language model's (LM's) prediction on a given task instance by highlighting the text inputs that most influenced the prediction.

Language Modelling Text Classification +1

Contextualized Scene Imagination for Generative Commonsense Reasoning

1 code implementation ICLR 2022 Peifeng Wang, Jonathan Zamora, Junfeng Liu, Filip Ilievski, Muhao Chen, Xiang Ren

In this paper, we propose an Imagine-and-Verbalize (I&V) method, which learns to imagine a relational scene knowledge graph (SKG) with relations between the input concepts, and leverage the SKG as a constraint when generating a plausible scene description.

Common Sense Reasoning Descriptive +1

Sparse Distillation: Speeding Up Text Classification by Using Bigger Student Models

1 code implementation NAACL 2022 Qinyuan Ye, Madian Khabsa, Mike Lewis, Sinong Wang, Xiang Ren, Aaron Jaech

Distilling state-of-the-art transformer models into lightweight student models is an effective way to reduce computation cost at inference time.

Domain Generalization Privacy Preserving +3

On the Robustness of Reading Comprehension Models to Entity Renaming

1 code implementation NAACL 2022 Jun Yan, Yang Xiao, Sagnik Mukherjee, Bill Yuchen Lin, Robin Jia, Xiang Ren

We study the robustness of machine reading comprehension (MRC) models to entity renaming -- do models make more wrong predictions when the same questions are asked about an entity whose name has been changed?

Continual Pretraining Machine Reading Comprehension

KG-FiD: Infusing Knowledge Graph in Fusion-in-Decoder for Open-Domain Question Answering

no code implementations ACL 2022 Donghan Yu, Chenguang Zhu, Yuwei Fang, Wenhao Yu, Shuohang Wang, Yichong Xu, Xiang Ren, Yiming Yang, Michael Zeng

The recently proposed Fusion-in-Decoder (FiD), which is built on top of the pretrained generative model T5, achieves state-of-the-art performance in the reading module.

Answer Generation Open-Domain Question Answering +3

AutoTriggER: Label-Efficient and Robust Named Entity Recognition with Auxiliary Trigger Extraction

no code implementations 10 Sep 2021 Dong-Ho Lee, Ravi Kiran Selvam, Sheikh Muhammad Sarwar, Bill Yuchen Lin, Fred Morstatter, Jay Pujara, Elizabeth Boschee, James Allan, Xiang Ren

Deep neural models for named entity recognition (NER) have shown impressive results in overcoming label scarcity and generalizing to unseen entities by leveraging distant supervision and auxiliary information such as explanations.

Low Resource Named Entity Recognition Named Entity Recognition +2

Discretized Integrated Gradients for Explaining Language Models

2 code implementations EMNLP 2021 Soumya Sanyal, Xiang Ren

As a prominent attribution-based explanation algorithm, Integrated Gradients (IG) is widely adopted due to its desirable explanation axioms and the ease of gradient computation.
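
For context, a numpy sketch of standard IG, which approximates a path integral of gradients from a baseline to the input; DIG's contribution (not shown here) is choosing discrete, word-like anchor points along the path so interpolations stay on the embedding manifold:

```python
import numpy as np

def integrated_gradients(grad_fn, x, baseline, steps=50):
    """Riemann-sum approximation of IG attributions:
    (x - baseline) * mean gradient along the straight-line path."""
    alphas = np.linspace(0.0, 1.0, steps)
    path_grads = np.stack(
        [grad_fn(baseline + a * (x - baseline)) for a in alphas]
    )
    return (x - baseline) * path_grads.mean(axis=0)

# Toy model f(x) = (w . x)^2, with analytic gradient 2 (w . x) w.
w = np.array([0.5, -1.0, 2.0])
grad = lambda x: 2 * np.dot(w, x) * w
attr = integrated_gradients(grad, x=np.ones(3), baseline=np.zeros(3))
print(attr, attr.sum())  # completeness: sum ~= f(x) - f(baseline) = 2.25
```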

Feature Importance Sentiment Analysis +1

Improving Counterfactual Generation for Fair Hate Speech Detection

no code implementations ACL (WOAH) 2021 Aida Mostafazadeh Davani, Ali Omrani, Brendan Kennedy, Mohammad Atari, Xiang Ren, Morteza Dehghani

By applying logit pairing to equalize outcomes on the restricted set of counterfactuals for each instance, we improve fairness metrics while preserving model performance on hate speech detection.

Fairness Hate Speech Detection

Do Language Models Perform Generalizable Commonsense Inference?

1 code implementation Findings (ACL) 2021 Peifeng Wang, Filip Ilievski, Muhao Chen, Xiang Ren

Inspired by evidence that pretrained language models (LMs) encode commonsense knowledge, recent work has applied LMs to automatically populate commonsense knowledge graphs (CKGs).

Knowledge Graphs

Common Sense Beyond English: Evaluating and Improving Multilingual Language Models for Commonsense Reasoning

1 code implementation ACL 2021 Bill Yuchen Lin, Seyeon Lee, Xiaoyang Qiao, Xiang Ren

In addition, we also create two new datasets, X-CSQA and X-CODAH, by translating their English versions to 15 other languages, so that we can evaluate popular ML-LMs for cross-lingual commonsense reasoning.

Common Sense Reasoning

Learn Continually, Generalize Rapidly: Lifelong Knowledge Accumulation for Few-shot Learning

1 code implementation Findings (EMNLP) 2021 Xisen Jin, Bill Yuchen Lin, Mohammad Rostami, Xiang Ren

The ability to continuously expand knowledge over time and utilize it to rapidly generalize to new tasks is a key feature of human linguistic intelligence.

Continual Learning Few-Shot Learning +2

Cross-Attention is All You Need: Adapting Pretrained Transformers for Machine Translation

1 code implementation EMNLP 2021 Mozhdeh Gheini, Xiang Ren, Jonathan May

We study the power of cross-attention in the Transformer architecture within the context of transfer learning for machine translation, and extend the findings of studies into cross-attention when training from scratch.

Machine Translation Transfer Learning +1

CrossFit: A Few-shot Learning Challenge for Cross-task Generalization in NLP

3 code implementations EMNLP 2021 Qinyuan Ye, Bill Yuchen Lin, Xiang Ren

Humans can learn a new language task efficiently with only a few examples, by leveraging the knowledge obtained when learning prior tasks.

Few-Shot Learning

FedNLP: Benchmarking Federated Learning Methods for Natural Language Processing Tasks

1 code implementation Findings (NAACL) 2022 Bill Yuchen Lin, Chaoyang He, Zihang Zeng, Hulin Wang, Yufen Huang, Christophe Dupuy, Rahul Gupta, Mahdi Soltanolkotabi, Xiang Ren, Salman Avestimehr

Increasing concerns and regulations about data privacy and sparsity necessitate the study of privacy-preserving, decentralized learning methods for natural language processing (NLP) tasks.

Benchmarking Federated Learning +5

Extract, Denoise and Enforce: Evaluating and Improving Concept Preservation for Text-to-Text Generation

2 code implementations EMNLP 2021 Yuning Mao, Wenchang Ma, Deren Lei, Jiawei Han, Xiang Ren

In this paper, we present a systematic analysis that studies whether current seq2seq models, especially pre-trained language models, are good enough for preserving important input concepts and to what extent explicitly guiding generation with the concepts as lexical constraints is beneficial.

Conditional Text Generation Denoising

Lawyers are Dishonest? Quantifying Representational Harms in Commonsense Knowledge Resources

no code implementations EMNLP 2021 Ninareh Mehrabi, Pei Zhou, Fred Morstatter, Jay Pujara, Xiang Ren, Aram Galstyan

In addition, we analyze two downstream models that use ConceptNet as a source for commonsense knowledge and find the existence of biases in those models as well.

Refining Language Models with Compositional Explanations

1 code implementation NeurIPS 2021 Huihan Yao, Ying Chen, Qinyuan Ye, Xisen Jin, Xiang Ren

However, such a regularization technique lacks flexibility and coverage, since only importance scores towards a pre-defined list of features are adjusted, while more complex human knowledge such as feature interaction and pattern generalization can hardly be incorporated.

Fairness Language Modelling +2

Learning to Generate Task-Specific Adapters from Task Description

1 code implementation ACL 2021 Qinyuan Ye, Xiang Ren

Recent studies further show that they can learn to generalize to novel tasks by including task descriptions as part of the source sequence and training the model with (source, target) examples.

Text Generation Zero-Shot Learning

Learning Contextualized Knowledge Graph Structures for Commonsense Reasoning

no code implementations1 Jan 2021 Jun Yan, Mrigank Raman, Tianyu Zhang, Ryan Rossi, Handong Zhao, Sungchul Kim, Nedim Lipka, Xiang Ren

Recently, neural-symbolic architectures have achieved success on commonsense reasoning through effectively encoding relational structures retrieved from external knowledge graphs (KGs) and obtained state-of-the-art results in tasks such as (commonsense) question answering and natural language inference.

Knowledge Graphs Natural Language Inference +1

Pre-training Text-to-Text Transformers to Write and Reason with Concepts

no code implementations ICLR 2021 Wangchunshu Zhou, Dong-Ho Lee, Ravi Kiran Selvam, Seyeon Lee, Xiang Ren

To augment PTLMs with common sense, we propose generative and contrastive objectives as intermediate self-supervised pre-training tasks between general pre-training and downstream task-specific fine-tuning.

Common Sense Reasoning Language Modelling +2

Efficient Learning of Less Biased Models with Transfer Learning

no code implementations1 Jan 2021 Xisen Jin, Francesco Barbieri, Leonardo Neves, Xiang Ren

Prediction bias in machine learning models, referring to undesirable model behaviors that discriminate against inputs mentioning or produced by certain groups, has drawn increasing attention from the research community given its societal impact.

Transfer Learning

ECONET: Effective Continual Pretraining of Language Models for Event Temporal Reasoning

2 code implementations EMNLP 2021 Rujun Han, Xiang Ren, Nanyun Peng

While pre-trained language models (PTLMs) have achieved noticeable success on many NLP tasks, they still struggle for tasks that require event temporal reasoning, which is essential for event-centric applications.

Continual Pretraining Language Modelling +4

On Transferability of Bias Mitigation Effects in Language Model Fine-Tuning

no code implementations NAACL 2021 Xisen Jin, Francesco Barbieri, Brendan Kennedy, Aida Mostafazadeh Davani, Leonardo Neves, Xiang Ren

Fine-tuned language models have been shown to exhibit biases against protected groups in a host of modeling tasks such as text classification and coreference resolution.

Coreference Resolution Fairness +6

Fair Hate Speech Detection through Evaluation of Social Group Counterfactuals

no code implementations24 Oct 2020 Aida Mostafazadeh Davani, Ali Omrani, Brendan Kennedy, Mohammad Atari, Xiang Ren, Morteza Dehghani

Counterfactual token fairness for a mentioned social group evaluates the model's predictions as to whether they are the same for (a) the actual sentence and (b) a counterfactual instance, which is generated by changing the mentioned social group in the sentence.
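
A minimal sketch of counterfactual generation by substituting the mentioned social group; the group lexicon here is a tiny illustrative sample:

```python
GROUP_TERMS = ["muslim", "jewish", "christian", "gay", "black", "white"]  # sample lexicon

def social_group_counterfactuals(sentence: str):
    """Swap each mentioned group term for every other term to produce
    counterfactual instances; a fair model should score them alike."""
    tokens = sentence.lower().split()
    variants = []
    for i, tok in enumerate(tokens):
        if tok in GROUP_TERMS:
            for other in GROUP_TERMS:
                if other != tok:
                    variants.append(" ".join(tokens[:i] + [other] + tokens[i + 1:]))
    return variants

print(social_group_counterfactuals("the gay community gathered downtown"))
```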

Fairness Hate Speech Detection

Differentiable Open-Ended Commonsense Reasoning

no code implementations NAACL 2021 Bill Yuchen Lin, Haitian Sun, Bhuwan Dhingra, Manzil Zaheer, Xiang Ren, William W. Cohen

As a step towards making commonsense reasoning research more realistic, we propose to study open-ended commonsense reasoning (OpenCSR) -- the task of answering a commonsense question without any pre-defined choices -- using as a resource only a corpus of commonsense facts written in natural language.

Multiple-choice

Constrained Abstractive Summarization: Preserving Factual Consistency with Constrained Generation

2 code implementations 24 Oct 2020 Yuning Mao, Xiang Ren, Heng Ji, Jiawei Han

Despite significant progress, state-of-the-art abstractive summarization methods are still prone to hallucinate content inconsistent with the source document.

Abstractive Text Summarization Keyphrase Extraction

Pre-training Text-to-Text Transformers for Concept-centric Common Sense

1 code implementation 24 Oct 2020 Wangchunshu Zhou, Dong-Ho Lee, Ravi Kiran Selvam, Seyeon Lee, Bill Yuchen Lin, Xiang Ren

Pre-trained language models (PTLM) have achieved impressive results in a range of natural language understanding (NLU) and generation (NLG) tasks.

Common Sense Reasoning Knowledge Graphs +3

One-shot Learning for Temporal Knowledge Graphs

no code implementations AKBC 2021 Mehrnoosh Mirtaheri, Mohammad Rostami, Xiang Ren, Fred Morstatter, Aram Galstyan

Most real-world knowledge graphs are characterized by a long-tail relation frequency distribution where a significant fraction of relations occurs only a handful of times.

Knowledge Graphs Link Prediction +1

Will This Idea Spread Beyond Academia? Understanding Knowledge Transfer of Scientific Concepts across Text Corpora

no code implementations Findings of the Association for Computational Linguistics 2020 Hancheng Cao, Mengjie Cheng, Zhepeng Cen, Daniel A. McFarland, Xiang Ren

We extract scientific concepts (i.e., phrases) from corpora as instantiations of "research ideas", create concept-level features as motivated by the literature, and then follow the trajectories of over 450,000 new concepts (emerged during 1995-2014) to identify factors that lead only a small proportion of these ideas to be used in inventions and drug trials.

Transfer Learning

SynSetExpan: An Iterative Framework for Joint Entity Set Expansion and Synonym Discovery

no code implementations EMNLP 2020 Jiaming Shen, Wenda Qiu, Jingbo Shang, Michelle Vanni, Xiang Ren, Jiawei Han

To facilitate research on the interplay of these two tasks, we create the first large-scale Synonym-Enhanced Set Expansion (SE2) dataset via crowdsourcing.

Two Step Joint Model for Drug Drug Interaction Extraction

no code implementations 28 Aug 2020 Siliang Tang, Qi Zhang, Tianpeng Zheng, Mengdi Zhou, Zhan Chen, Lixing Shen, Xiang Ren, Yueting Zhuang, ShiLiang Pu, Fei Wu

When patients need to take medicine, particularly when taking more than one kind of drug simultaneously, they should be alerted to possible drug-drug interactions.

Drug–drug Interaction Extraction Named Entity Recognition +4

Gradient-based Editing of Memory Examples for Online Task-free Continual Learning

1 code implementation NeurIPS 2021 Xisen Jin, Arka Sadhu, Junyi Du, Xiang Ren

We explore task-free continual learning (CL), in which a model is trained to avoid catastrophic forgetting in the absence of explicit task boundaries or identities.

Continual Learning

Screenplay Quality Assessment: Can We Predict Who Gets Nominated?

no code implementations WS 2020 Ming-Chang Chiu, Tiantian Feng, Xiang Ren, Shrikanth Narayanan

Toward that goal, in this work, we present a method to evaluate the quality of a screenplay based on linguistic cues.

Contextualizing Hate Speech Classifiers with Post-hoc Explanation

3 code implementations ACL 2020 Brendan Kennedy, Xisen Jin, Aida Mostafazadeh Davani, Morteza Dehghani, Xiang Ren

Hate speech classifiers trained on imbalanced datasets struggle to determine if group identifiers like "gay" or "black" are used in offensive or prejudiced ways.

Birds have four legs?! NumerSense: Probing Numerical Commonsense Knowledge of Pre-trained Language Models

no code implementations EMNLP 2020 Bill Yuchen Lin, Seyeon Lee, Rahul Khanna, Xiang Ren

Recent works show that pre-trained language models (PTLMs), such as BERT, possess certain commonsense and factual knowledge.

RICA: Evaluating Robust Inference Capabilities Based on Commonsense Axioms

no code implementations EMNLP 2021 Pei Zhou, Rahul Khanna, Seyeon Lee, Bill Yuchen Lin, Daniel Ho, Jay Pujara, Xiang Ren

Pre-trained language models (PTLMs) have achieved impressive performance on commonsense inference benchmarks, but their ability to employ commonsense to make robust inferences, which is crucial for effective communications with humans, is debated.

IsoBN: Fine-Tuning BERT with Isotropic Batch Normalization

1 code implementation 2 May 2020 Wenxuan Zhou, Bill Yuchen Lin, Xiang Ren

Fine-tuning pre-trained language models (PTLMs), such as BERT and its better variant RoBERTa, has been a common practice for advancing performance in natural language understanding (NLU) tasks.

Natural Language Understanding Representation Learning

ForecastQA: A Question Answering Challenge for Event Forecasting with Temporal Text Data

no code implementations ACL 2021 Woojeong Jin, Rahul Khanna, Suji Kim, Dong-Ho Lee, Fred Morstatter, Aram Galstyan, Xiang Ren

In this work, we aim to formulate a task, construct a dataset, and provide benchmarks for developing methods for event forecasting with large volumes of unstructured text data.

Knowledge Graphs Language Modelling +5

Visually Grounded Continual Learning of Compositional Phrases

2 code implementations EMNLP 2020 Xisen Jin, Junyi Du, Arka Sadhu, Ram Nevatia, Xiang Ren

To study this human-like language acquisition ability, we present VisCOLL, a visually grounded language learning task, which simulates the continual acquisition of compositional phrases from streaming visual scenes.

Continual Learning Grounded language learning +1

Teaching Machine Comprehension with Compositional Explanations

2 code implementations Findings of the Association for Computational Linguistics 2020 Qinyuan Ye, Xiao Huang, Elizabeth Boschee, Xiang Ren

Advances in machine reading comprehension (MRC) rely heavily on the collection of large-scale human-annotated examples in the form of (question, paragraph, answer) triples.

Data Augmentation Machine Reading Comprehension

Scalable Multi-Hop Relational Reasoning for Knowledge-Aware Question Answering

2 code implementations EMNLP 2020 Yanlin Feng, Xinyue Chen, Bill Yuchen Lin, Peifeng Wang, Jun Yan, Xiang Ren

Existing work on augmenting question answering (QA) models with external knowledge (e.g., knowledge graphs) either struggles to model multi-hop relations efficiently or lacks transparency into the model's prediction rationale.

Knowledge Graphs Question Answering +2

Learning Collaborative Agents with Rule Guidance for Knowledge Graph Reasoning

1 code implementation EMNLP 2020 Deren Lei, Gangrong Jiang, Xiaotao Gu, Kexuan Sun, Yuning Mao, Xiang Ren

Walk-based models have shown their advantages in knowledge graph (KG) reasoning by achieving decent performance while providing interpretable decisions.

Reinforcement Learning (RL)

Generating Natural Language Adversarial Examples on a Large Scale with Generative Models

no code implementations 10 Mar 2020 Yankun Ren, Jianbin Lin, Siliang Tang, Jun Zhou, Shuang Yang, Yuan Qi, Xiang Ren

It can attack text classification models with a higher success rate than existing methods while providing acceptable quality to human readers.

Adversarial Text General Classification +4

Temporal Attribute Prediction via Joint Modeling of Multi-Relational Structure Evolution

1 code implementation 9 Mar 2020 Sankalp Garg, Navodita Sharma, Woojeong Jin, Xiang Ren

We show that if the information contained in the graph and the time series data are closely related, then this inter-dependence can be used to predict the time series with improved accuracy.

Knowledge Graphs Link Prediction +3

Mining News Events from Comparable News Corpora: A Multi-Attribute Proximity Network Modeling Approach

no code implementations 14 Nov 2019 Hyungsul Kim, Ahmed El-Kishky, Xiang Ren, Jiawei Han

This proximity network captures the corpus-level co-occurrence statistics for candidate event descriptors, event attributes, as well as their connections.

News Summarization

Improving BERT Fine-tuning with Embedding Normalization

no code implementations 10 Nov 2019 Wenxuan Zhou, Junyi Du, Xiang Ren

Large pre-trained sentence encoders like BERT start a new chapter in natural language processing.

General Classification Text Classification +1

Towards Hierarchical Importance Attribution: Explaining Compositional Semantics for Neural Sequence Models

2 code implementations ICLR 2020 Xisen Jin, Zhongyu Wei, Junyi Du, Xiangyang Xue, Xiang Ren

Human and metric-based evaluations of both LSTM models and BERT Transformer models on multiple datasets show that our algorithms outperform prior hierarchical explanation algorithms.

Semantic Composition

Learning from Explanations with Neural Execution Tree

1 code implementation ICLR 2020 Ziqi Wang, Yujia Qin, Wenxuan Zhou, Jun Yan, Qinyuan Ye, Leonardo Neves, Zhiyuan Liu, Xiang Ren

While deep neural networks have achieved impressive performance on a range of NLP tasks, these data-hungry models heavily rely on labeled data, which restricts their applications in scenarios where data annotation is expensive.

Data Augmentation Multi-hop Question Answering +6

HMEAE: Hierarchical Modular Event Argument Extraction

1 code implementation IJCNLP 2019 Xiaozhi Wang, Ziqi Wang, Xu Han, Zhiyuan Liu, Juanzi Li, Peng Li, Maosong Sun, Jie Zhou, Xiang Ren

Existing event extraction methods classify each argument role independently, ignoring the conceptual correlations between different argument roles.

Event Argument Extraction Event Extraction +1

SetExpan: Corpus-Based Set Expansion via Context Feature Selection and Rank Ensemble

1 code implementation 17 Oct 2019 Jiaming Shen, Zeqiu Wu, Dongming Lei, Jingbo Shang, Xiang Ren, Jiawei Han

In this study, we propose a novel framework, SetExpan, which tackles this problem, with two techniques: (1) a context feature selection method that selects clean context features for calculating entity-entity distributional similarity, and (2) a ranking-based unsupervised ensemble method for expanding entity set based on denoised context features.

feature selection Question Answering

Learning to Contextually Aggregate Multi-Source Supervision for Sequence Labeling

1 code implementation ACL 2020 Ouyu Lan, Xiao Huang, Bill Yuchen Lin, He Jiang, Liyuan Liu, Xiang Ren

The performance of sequence labeling is largely influenced by annotation quality and quantity in supervised learning scenarios, and obtaining ground-truth labels is often costly.

Recurrent Event Network: Global Structure Inference Over Temporal Knowledge Graph

no code implementations25 Sep 2019 Woojeong Jin, He Jiang, Meng Qu, Tong Chen, Changlin Zhang, Pedro Szekely, Xiang Ren

We present Recurrent Event Network (RE-Net), a novel autoregressive architecture for modeling temporal sequences of multi-relational graphs (e.g., temporal knowledge graph), which can perform sequential, global structure inference over future time stamps to predict new events.

Link Prediction Temporal Sequences

NERO: A Neural Rule Grounding Framework for Label-Efficient Relation Extraction

2 code implementations 5 Sep 2019 Wenxuan Zhou, Hongtao Lin, Bill Yuchen Lin, Ziqi Wang, Junyi Du, Leonardo Neves, Xiang Ren

The soft matching module learns to match rules with semantically similar sentences such that raw corpora can be automatically labeled and leveraged by the RE module (in a much better coverage) as augmented supervision, in addition to the exactly matched sentences.
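
A sketch of the soft-matching idea with a hypothetical `encode` sentence encoder: score unlabeled sentences against a labeling rule by embedding similarity and pseudo-label those above a threshold:

```python
import numpy as np

def soft_match(rule_text, rule_label, sentences, encode, threshold=0.8):
    """Pseudo-label sentences whose embeddings are close to a rule's pattern
    (cosine similarity), extending coverage beyond exact string matches."""
    r = encode(rule_text)  # `encode` is a stand-in for the learned matcher
    labeled = []
    for s in sentences:
        v = encode(s)
        sim = float(np.dot(r, v) / (np.linalg.norm(r) * np.linalg.norm(v)))
        if sim >= threshold:
            labeled.append((s, rule_label, sim))  # weight by match score
    return labeled
```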

Relation Extraction

Learning Dynamic Context Augmentation for Global Entity Linking

2 code implementations IJCNLP 2019 Xiyuan Yang, Xiaotao Gu, Sheng Lin, Siliang Tang, Yueting Zhuang, Fei Wu, Zhigang Chen, Guoping Hu, Xiang Ren

Despite the recent success of collective entity linking (EL) methods, these "global" inference methods may yield sub-optimal results when the "all-mention coherence" assumption breaks, and they often suffer from high computational cost at the inference stage due to the complex search space.

Entity Disambiguation Entity Linking +1

KagNet: Knowledge-Aware Graph Networks for Commonsense Reasoning

2 code implementations IJCNLP 2019 Bill Yuchen Lin, Xinyue Chen, Jamin Chen, Xiang Ren

Commonsense reasoning aims to empower machines with the human ability to make presumptions about ordinary situations in our daily life.

Ranked #24 on Common Sense Reasoning on CommonsenseQA (using extra training data)

Common Sense Reasoning Knowledge Base Question Answering +2

Collaborative Policy Learning for Open Knowledge Graph Reasoning

2 code implementations IJCNLP 2019 Cong Fu, Tong Chen, Meng Qu, Woojeong Jin, Xiang Ren

We propose a novel reinforcement learning framework to train two collaborative agents jointly, i.e., a multi-hop graph reasoner and a fact extractor.

Hierarchical Text Classification with Reinforced Label Assignment

1 code implementation IJCNLP 2019 Yuning Mao, Jingjing Tian, Jiawei Han, Xiang Ren

While existing hierarchical text classification (HTC) methods attempt to capture label hierarchies for model training, they either make local decisions regarding each label or completely ignore the hierarchy information during inference.

 Ranked #1 on Text Classification on RCV1 (Macro F1 metric)

General Classification Text Classification +1

Facet-Aware Evaluation for Extractive Summarization

1 code implementation ACL 2020 Yuning Mao, Liyuan Liu, Qi Zhu, Xiang Ren, Jiawei Han

In this paper, we present a facet-aware evaluation setup for better assessment of the information coverage in extracted summaries.

Extractive Summarization Text Summarization

Raw-to-End Name Entity Recognition in Social Media

1 code implementation 14 Aug 2019 Liyuan Liu, Zihan Wang, Jingbo Shang, Dandong Yin, Heng Ji, Xiang Ren, Shaowen Wang, Jiawei Han

Our model neither requires the conversion from character sequences to word sequences, nor assumes tokenizer can correctly detect all word boundaries.

Named Entity Recognition +1

AlpacaTag: An Active Learning-based Crowd Annotation Framework for Sequence Tagging

no code implementations ACL 2019 Bill Yuchen Lin, Dong-Ho Lee, Frank F. Xu, Ouyu Lan, Xiang Ren

We introduce an open-source web-based data annotation framework (AlpacaTag) for sequence tagging tasks such as named-entity recognition (NER).

Active Learning Named Entity Recognition +2

Cascade-BGNN: Toward Efficient Self-supervised Representation Learning on Large-scale Bipartite Graphs

1 code implementation 27 Jun 2019 Chaoyang He, Tian Xie, Yu Rong, Wenbing Huang, Junzhou Huang, Xiang Ren, Cyrus Shahabi

Existing techniques either cannot be scaled to large-scale bipartite graphs that have limited labels or cannot exploit the unique structure of bipartite graphs, which have distinct node features in two domains.

Recommendation Systems Representation Learning

Eliciting Knowledge from Experts: Automatic Transcript Parsing for Cognitive Task Analysis

2 code implementations 26 Jun 2019 Junyi Du, He Jiang, Jiaming Shen, Xiang Ren

To reduce human efforts and scale the process, automated CTA transcript parsing is desirable.

Relation Extraction

Dynamic Network Embedding via Incremental Skip-gram with Negative Sampling

1 code implementation 9 Jun 2019 Hao Peng, Jian-Xin Li, Hao Yan, Qiran Gong, Senzhang Wang, Lin Liu, Lihong Wang, Xiang Ren

Most existing methods focus on learning the structural representations of vertices in a static network, but cannot guarantee an accurate and efficient embedding in a dynamic network scenario.

Link Prediction Multi-Label Classification +1

Characterizing and Forecasting User Engagement with In-app Action Graph: A Case Study of Snapchat

1 code implementation 2 Jun 2019 Yozen Liu, Xiaolin Shi, Lucas Pierce, Xiang Ren

Here we propose to formalize individual user's in-app action transition patterns as a temporally evolving action graph, and analyze its characteristics in terms of informing future user engagement.

Time Series Analysis

Time-Series Event Prediction with Evolutionary State Graph

3 code implementations 10 May 2019 Wenjie Hu, Yang Yang, Ziqiang Cheng, Carl Yang, Xiang Ren

In this paper, we present evolutionary state graph, a dynamic graph structure designed to systematically represent the evolving relations (edges) among states (nodes) along time.

Time Series Time Series Classification +1

Looking Beyond Label Noise: Shifted Label Distribution Matters in Distantly Supervised Relation Extraction

1 code implementation IJCNLP 2019 Qinyuan Ye, Liyuan Liu, Maosen Zhang, Xiang Ren

In this paper, we study what limits the performance of DS-trained neural models, conduct thorough analyses, and identify a factor that can greatly influence performance: shifted label distribution.

Relation Extraction