Search Results for author: Xiang Ren

Found 195 papers, 119 papers with code

Using Word Embedding to Reveal Monetary Policy Explanation Changes

no code implementations EMNLP (ECONLP) 2021 Akira Matsui, Xiang Ren, Emilio Ferrara

Documents have long been an essential communication tool for governments to announce their policy operations.

Sentiment Analysis

ER-TEST: Evaluating Explanation Regularization Methods for NLP Models

no code implementations NAACL (TrustNLP) 2022 Brihi Joshi, Aaron Chan, Ziyi Liu, Xiang Ren

For the latter, explanation regularization (ER) aims to improve neural language model (NLM) generalization by pushing machine rationales to align with human rationales.

Knowledge-Augmented Methods for Natural Language Processing

no code implementations ACL 2022 Chenguang Zhu, Yichong Xu, Xiang Ren, Bill Lin, Meng Jiang, Wenhao Yu

Knowledge in natural language processing (NLP) has been a rising trend especially after the advent of large scale pre-trained models.

Text Generation

Modality-specific Distillation

no code implementations NAACL (maiworkshop) 2021 Woojeong Jin, Maziar Sanjabi, Shaoliang Nie, Liang Tan, Xiang Ren, Hamed Firooz

In this paper, we propose modality-specific distillation (MSD) to effectively transfer knowledge from a teacher on multimodal datasets.

Knowledge Distillation Meta-Learning

Diverging Preferences: When do Annotators Disagree and do Models Know?

no code implementations18 Oct 2024 Michael JQ Zhang, Zhilin Wang, Jena D. Hwang, Yi Dong, Olivier Delalleau, Yejin Choi, Eunsol Choi, Xiang Ren, Valentina Pyatkin

We find that the majority of disagreements are at odds with standard reward modeling approaches, which are designed under the assumption that annotator disagreement is noise.

WildVis: Open Source Visualizer for Million-Scale Chat Logs in the Wild

no code implementations5 Sep 2024 Yuntian Deng, Wenting Zhao, Jack Hessel, Xiang Ren, Claire Cardie, Yejin Choi

The increasing availability of real-world conversation data offers exciting opportunities for researchers to study user-chatbot interactions.

Chatbot

Rethinking Backdoor Detection Evaluation for Language Models

no code implementations31 Aug 2024 Jun Yan, Wenjie Jacky Mo, Xiang Ren, Robin Jia

Backdoor detection methods aim to detect whether a released model contains a backdoor, so that practitioners can avoid such vulnerabilities.

Symbolic Working Memory Enhances Language Models for Complex Rule Application

1 code implementation24 Aug 2024 Siyuan Wang, Zhongyu Wei, Yejin Choi, Xiang Ren

Large Language Models (LLMs) have shown remarkable reasoning performance but struggle with multi-step deductive reasoning involving a series of rule application steps, especially when rules are presented non-sequentially.

Stress-Testing Long-Context Language Models with Lifelong ICL and Task Haystack

1 code implementation23 Jul 2024 Xiaoyue Xu, Qinyuan Ye, Xiang Ren

We introduce Lifelong ICL, a problem setting that challenges long-context language models (LMs) to learn a sequence of language tasks through in-context learning (ICL).

In-Context Learning Navigate

Rel-A.I.: An Interaction-Centered Approach To Measuring Human-LM Reliance

no code implementations10 Jul 2024 Kaitlyn Zhou, Jena D. Hwang, Xiang Ren, Nouha Dziri, Dan Jurafsky, Maarten Sap

The ability to communicate uncertainty, risk, and limitation is crucial for the safety of large language models.

Sentence

Demystifying Language Model Forgetting with Low-rank Example Associations

no code implementations20 Jun 2024 Xisen Jin, Xiang Ren

Leveraging the low-rank nature of the associations, we predict forgetting of upstream examples when fine-tuning on unseen tasks with matrix completion over the empirical associations.

Language Modelling Matrix Completion
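
To make the matrix-completion idea above concrete, here is a minimal sketch: assume a partially observed (upstream example × fine-tuning task) matrix of forgetting scores, fit a low-rank factorization on the observed cells, and use the reconstruction to predict the unobserved ones. The matrix, rank, and optimizer below are illustrative assumptions, not the paper's exact setup.

```python
import numpy as np

rng = np.random.default_rng(0)
n_examples, n_tasks, rank = 200, 30, 5

# Hypothetical ground-truth low-rank forgetting matrix (unknown in practice).
M_true = rng.normal(size=(n_examples, rank)) @ rng.normal(size=(rank, n_tasks))

# Forgetting is only measured for tasks that have already been fine-tuned on.
observed = rng.random((n_examples, n_tasks)) < 0.6

# Fit a rank-5 factorization to the observed cells by gradient descent.
U = rng.normal(scale=0.1, size=(n_examples, rank))
V = rng.normal(scale=0.1, size=(rank, n_tasks))
lr = 0.01
for _ in range(1000):
    residual = (U @ V - M_true) * observed   # error counted on observed cells only
    grad_U, grad_V = residual @ V.T, U.T @ residual
    U -= lr * grad_U
    V -= lr * grad_V

M_hat = U @ V                                # predictions, including unseen tasks
rmse = np.sqrt(((M_hat - M_true)[~observed] ** 2).mean())
print(f"held-out RMSE: {rmse:.3f}")
```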

WildChat: 1M ChatGPT Interaction Logs in the Wild

no code implementations2 May 2024 Wenting Zhao, Xiang Ren, Jack Hessel, Claire Cardie, Yejin Choi, Yuntian Deng

In addition to timestamped chat transcripts, we enrich the dataset with demographic data, including state, country, and hashed IP addresses, alongside request headers.

Chatbot Instruction Following

CULTURE-GEN: Revealing Global Cultural Perception in Language Models through Natural Language Prompting

1 code implementation16 Apr 2024 Huihan Li, Liwei Jiang, Jena D. Hwang, Hyunwoo Kim, Sebastin Santy, Taylor Sorensen, Bill Yuchen Lin, Nouha Dziri, Xiang Ren, Yejin Choi

As the utilization of large language models (LLMs) has proliferated worldwide, it is crucial for them to have adequate knowledge and fair representation of diverse global cultures.

Diversity Fairness

Logits of API-Protected LLMs Leak Proprietary Information

no code implementations14 Mar 2024 Matthew Finlayson, Xiang Ren, Swabha Swayamdipta

Large language model (LLM) providers often hide the architectural details and parameters of their proprietary models by restricting public access to a limited API.

Language Modelling Large Language Model
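
The leakage in the title can be illustrated under the standard assumption that an LLM's final logits are a linear map of a d-dimensional hidden state: any collection of full output vectors then has numerical rank at most d, so enough complete API outputs reveal the hidden size. The dimensions below are toy values, not any provider's real configuration.

```python
import numpy as np

rng = np.random.default_rng(0)
vocab_size, hidden_dim, n_queries = 5000, 64, 512

W_out = rng.normal(size=(vocab_size, hidden_dim))         # output embedding matrix
hidden_states = rng.normal(size=(n_queries, hidden_dim))  # one hidden state per prompt
logits = hidden_states @ W_out.T                          # (n_queries, vocab_size)

# The numerical rank of the collected logit matrix recovers the hidden size.
singular_values = np.linalg.svd(logits, compute_uv=False)
est_rank = int((singular_values > 1e-8 * singular_values[0]).sum())
print(f"estimated hidden dimension: {est_rank}")          # prints 64
```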

WinoViz: Probing Visual Properties of Objects Under Different States

no code implementations21 Feb 2024 Woojeong Jin, Tejas Srinivasan, Jesse Thomason, Xiang Ren

We present WinoViz, a text-only evaluation dataset consisting of 1,380 examples that probe the reasoning abilities of language models regarding variant visual properties of objects under different contexts or states.

Language Modelling

Can LLMs Reason with Rules? Logic Scaffolding for Stress-Testing and Improving LLMs

1 code implementation18 Feb 2024 Siyuan Wang, Zhongyu Wei, Yejin Choi, Xiang Ren

Our analysis of GPT-series models over a rule subset reveals significant gaps in LLMs' logic understanding compared to human performance, especially for compositionally and structurally complex rules with certain bias patterns.

Logical Reasoning

Self-Discover: Large Language Models Self-Compose Reasoning Structures

2 code implementations6 Feb 2024 Pei Zhou, Jay Pujara, Xiang Ren, Xinyun Chen, Heng-Tze Cheng, Quoc V. Le, Ed H. Chi, Denny Zhou, Swaroop Mishra, Huaixiu Steven Zheng

We introduce SELF-DISCOVER, a general framework for LLMs to self-discover the task-intrinsic reasoning structures to tackle complex reasoning problems that are challenging for typical prompting methods.

Math

Are Machines Better at Complex Reasoning? Unveiling Human-Machine Inference Gaps in Entailment Verification

no code implementations6 Feb 2024 Soumya Sanyal, Tianyi Xiao, Jiacheng Liu, Wenya Wang, Xiang Ren

Finally, we use this model to filter out inconsistent model-generated rationales in self-consistency decoding, resulting in a 6% accuracy improvement on average across three MCQ datasets.

Benchmarking Multiple-choice +3

What Will My Model Forget? Forecasting Forgotten Examples in Language Model Refinement

no code implementations2 Feb 2024 Xisen Jin, Xiang Ren

We propose a partially interpretable forecasting model based on the observation that changes in pre-softmax logit scores of pretraining examples resemble those of online learned examples; it performs decently on BART but fails on T5 models.

Language Modelling

Relying on the Unreliable: The Impact of Language Models' Reluctance to Express Uncertainty

no code implementations12 Jan 2024 Kaitlyn Zhou, Jena D. Hwang, Xiang Ren, Maarten Sap

As natural language becomes the default interface for human-AI interaction, there is a need for LMs to appropriately communicate uncertainties in downstream applications.

Wireless Communications in Cavity: A Reconfigurable Boundary Modulation based Approach

no code implementations15 Nov 2023 Xuehui Dong, Xiang Ren, Bokai Lai, Rujing Xiong, Tiebin Mi, Robert Caiming Qiu

This paper explores the potential wireless communication applications of Reconfigurable Intelligent Surfaces (RIS) in reverberant wave propagation environments.

Position

In Search of the Long-Tail: Systematic Generation of Long-Tail Inferential Knowledge via Logical Rule Guided Search

1 code implementation13 Nov 2023 Huihan Li, Yuting Ning, Zeyi Liao, Siyuan Wang, Xiang Lorraine Li, Ximing Lu, Wenting Zhao, Faeze Brahman, Yejin Choi, Xiang Ren

To effectively use large language models (LLMs) for real-world queries, it is imperative that they generalize to the long-tail distribution, i.e., rare examples where models exhibit low confidence.

Language Modelling Natural Language Inference +1

Tailoring Self-Rationalizers with Multi-Reward Distillation

1 code implementation6 Nov 2023 Sahana Ramnath, Brihi Joshi, Skyler Hallinan, Ximing Lu, Liunian Harold Li, Aaron Chan, Jack Hessel, Yejin Choi, Xiang Ren

Results on five difficult question-answering datasets (StrategyQA, QuaRel, OpenBookQA, NumerSense, and QASC) show that not only does MaRio improve task accuracy, but it also improves the self-rationalization quality of small LMs across the aforementioned axes better than a supervised fine-tuning (SFT) baseline.

Diversity Question Answering +1

Bootstrap Your Own Skills: Learning to Solve New Tasks with Large Language Model Guidance

no code implementations16 Oct 2023 Jesse Zhang, Jiahui Zhang, Karl Pertsch, Ziyi Liu, Xiang Ren, Minsuk Chang, Shao-Hua Sun, Joseph J. Lim

Instead, our approach BOSS (BOotStrapping your own Skills) learns to accomplish new tasks by performing "skill bootstrapping," where an agent with a set of primitive skills interacts with the environment to practice new skills without receiving reward feedback for tasks outside of the initial skill set.

Language Modelling Large Language Model

Phenomenal Yet Puzzling: Testing Inductive Reasoning Capabilities of Language Models with Hypothesis Refinement

1 code implementation12 Oct 2023 Linlu Qiu, Liwei Jiang, Ximing Lu, Melanie Sclar, Valentina Pyatkin, Chandra Bhagavatula, Bailin Wang, Yoon Kim, Yejin Choi, Nouha Dziri, Xiang Ren

The ability to derive underlying principles from a handful of observations and then generalize to novel situations -- known as inductive reasoning -- is central to human intelligence.

DOMINO: A Dual-System for Multi-step Visual Language Reasoning

1 code implementation4 Oct 2023 Peifang Wang, Olga Golovneva, Armen Aghajanyan, Xiang Ren, Muhao Chen, Asli Celikyilmaz, Maryam Fazel-Zarandi

By fine-tuning the System-2 module (LLaMA-2 70B) on only a small amount of data on multi-step reasoning, the accuracy of our method is further improved and surpasses the best fully-supervised end-to-end approach by 5.7% and a pipeline approach with FlanPaLM (540B) by 7.5% on a challenging dataset with human-authored questions.

Arithmetic Reasoning Language Modelling +2

How FaR Are Large Language Models From Agents with Theory-of-Mind?

1 code implementation4 Oct 2023 Pei Zhou, Aman Madaan, Srividya Pranavi Potharaju, Aditya Gupta, Kevin R. McKee, Ari Holtzman, Jay Pujara, Xiang Ren, Swaroop Mishra, Aida Nematzadeh, Shyam Upadhyay, Manaal Faruqui

We propose a new evaluation paradigm for large language models (LLMs): Thinking for Doing (T4D), which requires models to connect inferences about others' mental states to actions in social scenarios.

In-Context Learning Question Answering

Backdooring Instruction-Tuned Large Language Models with Virtual Prompt Injection

1 code implementation31 Jul 2023 Jun Yan, Vikas Yadav, Shiyang Li, Lichang Chen, Zheng Tang, Hai Wang, Vijay Srinivasan, Xiang Ren, Hongxia Jin

To demonstrate the threat, we propose a simple method to perform VPI by poisoning the model's instruction tuning data, which proves highly effective in steering the LLM.

Backdoor Attack

Instruction-following Evaluation through Verbalizer Manipulation

no code implementations20 Jul 2023 Shiyang Li, Jun Yan, Hai Wang, Zheng Tang, Xiang Ren, Vijay Srinivasan, Hongxia Jin

We conduct a comprehensive evaluation of four major model families across nine datasets, employing twelve sets of verbalizers for each of them.

Instruction Following

LLM-Blender: Ensembling Large Language Models with Pairwise Ranking and Generative Fusion

3 code implementations5 Jun 2023 Dongfu Jiang, Xiang Ren, Bill Yuchen Lin

We present LLM-Blender, an ensembling framework designed to attain consistently superior performance by leveraging the diverse strengths of multiple open-source large language models (LLMs).
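
A toy sketch of the pairwise-ranking half of the recipe above: compare every pair of candidate outputs, count wins, and keep the top candidates for a later fusion step. The `pairwise_better` judge here is a hypothetical stand-in (a crude length heuristic), not the learned ranker from the paper.

```python
from itertools import combinations

def pairwise_better(answer_a: str, answer_b: str) -> bool:
    """Placeholder judge: prefer the longer answer (illustrative only)."""
    return len(answer_a) >= len(answer_b)

def rank_candidates(candidates: list[str]) -> list[str]:
    wins = {c: 0 for c in candidates}
    for a, b in combinations(candidates, 2):
        if pairwise_better(a, b):
            wins[a] += 1
        else:
            wins[b] += 1
    return sorted(candidates, key=wins.get, reverse=True)

candidates = ["short answer", "a somewhat longer candidate answer", "mid-sized answer"]
top_k = rank_candidates(candidates)[:2]   # these would then be fused by a generator
print(top_k)
```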

Faith and Fate: Limits of Transformers on Compositionality

1 code implementation NeurIPS 2023 Nouha Dziri, Ximing Lu, Melanie Sclar, Xiang Lorraine Li, Liwei Jiang, Bill Yuchen Lin, Peter West, Chandra Bhagavatula, Ronan Le Bras, Jena D. Hwang, Soumya Sanyal, Sean Welleck, Xiang Ren, Allyson Ettinger, Zaid Harchaoui, Yejin Choi

We formulate compositional tasks as computation graphs to systematically quantify the level of complexity, and break down reasoning steps into intermediate sub-procedures.

SwiftSage: A Generative Agent with Fast and Slow Thinking for Complex Interactive Tasks

2 code implementations NeurIPS 2023 Bill Yuchen Lin, Yicheng Fu, Karina Yang, Faeze Brahman, Shiyu Huang, Chandra Bhagavatula, Prithviraj Ammanabrolu, Yejin Choi, Xiang Ren

The Swift module is a small encoder-decoder LM fine-tuned on the oracle agent's action trajectories, while the Sage module employs LLMs such as GPT-4 for subgoal planning and grounding.

Decoder

How Predictable Are Large Language Model Capabilities? A Case Study on BIG-bench

1 code implementation24 May 2023 Qinyuan Ye, Harvey Yiyun Fu, Xiang Ren, Robin Jia

We investigate the predictability of large language model (LLM) capabilities: given records of past experiments using different model families, numbers of parameters, tasks, and numbers of in-context examples, can we accurately predict LLM performance on new experiment configurations?

Diversity Language Modelling +1

GRILL: Grounded Vision-language Pre-training via Aligning Text and Image Regions

no code implementations24 May 2023 Woojeong Jin, Subhabrata Mukherjee, Yu Cheng, Yelong Shen, Weizhu Chen, Ahmed Hassan Awadallah, Damien Jose, Xiang Ren

Generalization to unseen tasks is an important ability for few-shot learners to achieve better zero-/few-shot performance on diverse tasks.

Object Question Answering +2

Estimating Large Language Model Capabilities without Labeled Test Data

1 code implementation24 May 2023 Harvey Yiyun Fu, Qinyuan Ye, Albert Xu, Xiang Ren, Robin Jia

In this paper, we propose the task of ICL accuracy estimation, in which we predict the accuracy of an LLM when doing in-context learning on a new task given only unlabeled test data for that task.

In-Context Learning Language Modelling +1
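
As a point of reference for the task above, one common baseline (not the paper's meta-model) estimates accuracy on unlabeled data from the model's own confidence, e.g. the average maximum softmax probability:

```python
import numpy as np

def average_confidence(prob_matrix: np.ndarray) -> float:
    """prob_matrix: (n_examples, n_classes) predicted probabilities on unlabeled data."""
    return float(prob_matrix.max(axis=1).mean())

# Three unlabeled test examples from a hypothetical 2-class task.
probs = np.array([[0.9, 0.1], [0.6, 0.4], [0.8, 0.2]])
print(f"confidence-based accuracy estimate: {average_confidence(probs):.2f}")  # 0.77
```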

Are Machine Rationales (Not) Useful to Humans? Measuring and Improving Human Utility of Free-Text Rationales

1 code implementation11 May 2023 Brihi Joshi, Ziyi Liu, Sahana Ramnath, Aaron Chan, Zhewei Tong, Shaoliang Nie, Qifan Wang, Yejin Choi, Xiang Ren

Existing metrics, such as the task performance of the LM generating the rationales or the similarity between generated and gold rationales, are not good indicators of their human utility.

SCOTT: Self-Consistent Chain-of-Thought Distillation

1 code implementation3 May 2023 Peifeng Wang, Zhengyang Wang, Zheng Li, Yifan Gao, Bing Yin, Xiang Ren

While CoT can yield dramatically improved performance, such gains are only observed for sufficiently large LMs.

counterfactual Counterfactual Reasoning +1

Design of Reconfigurable Intelligent Surfaces for Wireless Communication: A Review

no code implementations27 Apr 2023 Rujing Xiong, Jianan Zhang, Fuhai Wang, Zhengyu Wang, Xiang Ren, Junshuo Liu, Jialong Lu, Kai Wan, Tiebin Mi, Robert Caiming Qiu

The prototype undergoes rigorous empirical evaluation, encompassing multi-hop RIS signal amplification, image reconstruction, and real-world indoor signal coverage experiments.

Image Reconstruction

Exploring Distributional Shifts in Large Language Models for Code Analysis

no code implementations16 Mar 2023 Shushan Arakelyan, Rocktim Jyoti Das, Yi Mao, Xiang Ren

We systematically study how three large language models with code capabilities - CodeT5, Codex, and ChatGPT - generalize to out-of-domain data.

Code Generation Code Summarization

Dataless Knowledge Fusion by Merging Weights of Language Models

1 code implementation19 Dec 2022 Xisen Jin, Xiang Ren, Daniel Preotiuc-Pietro, Pengxiang Cheng

In this paper, we study the problem of merging individual models built on different training data sets to obtain a single model that performs well both across all data set domains and can generalize on out-of-domain data.

Multi-Task Learning
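
For orientation, the simplest instance of merging individually trained models is element-wise parameter averaging over their state dicts. This is a standard baseline for the setting above, not the paper's own merging rule; a minimal PyTorch sketch:

```python
import torch

def average_state_dicts(state_dicts):
    """Average a list of state dicts that share identical keys and shapes."""
    merged = {}
    for key in state_dicts[0]:
        merged[key] = torch.stack([sd[key].float() for sd in state_dicts]).mean(dim=0)
    return merged

# Hypothetical usage with two domain-specific checkpoints of the same architecture:
# merged = average_state_dicts([model_a.state_dict(), model_b.state_dict()])
# model.load_state_dict(merged)
```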

KNIFE: Distilling Reasoning Knowledge From Free-Text Rationales

no code implementations19 Dec 2022 Aaron Chan, Zhiyuan Zeng, Wyatt Lake, Brihi Joshi, Hanjie Chen, Xiang Ren

First, KNIFE finetunes a teacher LM (given task input and FTR) to predict the task output, transferring reasoning knowledge from the FTRs to the teacher's hidden states.

Knowledge Distillation Language Modelling +1

APOLLO: A Simple Approach for Adaptive Pretraining of Language Models for Logical Reasoning

no code implementations19 Dec 2022 Soumya Sanyal, Yichong Xu, Shuohang Wang, ZiYi Yang, Reid Pryzant, Wenhao Yu, Chenguang Zhu, Xiang Ren

Logical reasoning over text is an important ability that requires understanding the information present in the text and its interconnections, and then reasoning over them to infer new conclusions.

Data Augmentation Language Modelling +3

Contrastive Novelty-Augmented Learning: Anticipating Outliers with Large Language Models

1 code implementation28 Nov 2022 Albert Xu, Xiang Ren, Robin Jia

In many task settings, text classification models are likely to encounter examples from novel classes on which they cannot predict correctly.

Language Modelling Large Language Model +2

Reflect, Not Reflex: Inference-Based Common Ground Improves Dialogue Response Quality

no code implementations16 Nov 2022 Pei Zhou, Hyundong Cho, Pegah Jandaghi, Dong-Ho Lee, Bill Yuchen Lin, Jay Pujara, Xiang Ren

Human communication relies on common ground (CG), the mutual knowledge and beliefs shared by participants, to produce coherent and interesting conversations.

Response Generation

PINTO: Faithful Language Reasoning Using Prompt-Generated Rationales

1 code implementation3 Nov 2022 Peifeng Wang, Aaron Chan, Filip Ilievski, Muhao Chen, Xiang Ren

Neural language models (LMs) have achieved impressive results on various language-based reasoning tasks by utilizing latent knowledge encoded in their own pretrained parameters.

counterfactual Decision Making

XMD: An End-to-End Framework for Interactive Explanation-Based Debugging of NLP Models

no code implementations30 Oct 2022 Dong-Ho Lee, Akshen Kadakia, Brihi Joshi, Aaron Chan, Ziyi Liu, Kiran Narahari, Takashi Shibuya, Ryosuke Mitani, Toshiyuki Sekiya, Jay Pujara, Xiang Ren

Explanation-based model debugging aims to resolve spurious biases by showing human users explanations of model behavior, asking users to give feedback on the behavior, then using the feedback to update the model.

text-classification Text Classification

MMGA: Multimodal Learning with Graph Alignment

no code implementations18 Oct 2022 Xuan Yang, Quanjin Tao, Xiao Feng, Donghong Cai, Xiang Ren, Yang Yang

In this paper, we propose MMGA (Multimodal learning with Graph Alignment), a novel multimodal pre-training framework to incorporate information from graph (social network), image and text modalities on social media to enhance user representation learning.

Representation Learning

REV: Information-Theoretic Evaluation of Free-Text Rationales

1 code implementation10 Oct 2022 Hanjie Chen, Faeze Brahman, Xiang Ren, Yangfeng Ji, Yejin Choi, Swabha Swayamdipta

More concretely, we propose a metric called REV (Rationale Evaluation with conditional V-information), to quantify the amount of new, label-relevant information in a rationale beyond the information already available in the input or the label.

On Grounded Planning for Embodied Tasks with Language Models

no code implementations29 Aug 2022 Bill Yuchen Lin, Chengsong Huang, Qian Liu, Wenda Gu, Sam Sommerer, Xiang Ren

Language models (LMs) have demonstrated their capability in possessing commonsense knowledge of the physical world, a crucial aspect of performing tasks in everyday life.

Curriculum Learning for Data-Efficient Vision-Language Alignment

no code implementations29 Jul 2022 Tejas Srinivasan, Xiang Ren, Jesse Thomason

Aligning image and text encoders from scratch using contrastive learning requires large amounts of paired image-text data.

Contrastive Learning Image Retrieval +3
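
The contrastive image-text alignment referred to above is typically a symmetric InfoNCE (CLIP-style) objective over a batch of matched pairs. A minimal sketch, with the encoders abstracted away as precomputed, L2-normalized embeddings:

```python
import torch
import torch.nn.functional as F

def clip_style_loss(image_emb, text_emb, temperature=0.07):
    """image_emb, text_emb: (batch, dim) tensors, already L2-normalized."""
    logits = image_emb @ text_emb.t() / temperature   # pairwise similarities
    targets = torch.arange(image_emb.size(0))          # matched pairs lie on the diagonal
    return 0.5 * (F.cross_entropy(logits, targets) + F.cross_entropy(logits.t(), targets))

image_emb = F.normalize(torch.randn(8, 128), dim=-1)
text_emb = F.normalize(torch.randn(8, 128), dim=-1)
print(clip_style_loss(image_emb, text_emb).item())
```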

Retweet-BERT: Political Leaning Detection Using Language Features and Information Diffusion on Social Networks

1 code implementation18 Jul 2022 Julie Jiang, Xiang Ren, Emilio Ferrara

We introduce Retweet-BERT, a simple and scalable model to estimate the political leanings of Twitter users.

FRAME: Evaluating Rationale-Label Consistency Metrics for Free-Text Rationales

no code implementations2 Jul 2022 Aaron Chan, Shaoliang Nie, Liang Tan, Xiaochang Peng, Hamed Firooz, Maziar Sanjabi, Xiang Ren

Following how humans communicate, free-text rationales aim to use natural language to explain neural language model (LM) behavior.

Hallucination Language Modelling +2

NewsEdits: A News Article Revision Dataset and a Document-Level Reasoning Challenge

1 code implementation14 Jun 2022 Alexander Spangher, Xiang Ren, Jonathan May, Nanyun Peng

News article revision histories provide clues to narrative and factual evolution in news articles.

Beyond the Imitation Game: Quantifying and extrapolating the capabilities of language models

4 code implementations9 Jun 2022 Aarohi Srivastava, Abhinav Rastogi, Abhishek Rao, Abu Awal Md Shoeb, Abubakar Abid, Adam Fisch, Adam R. Brown, Adam Santoro, Aditya Gupta, Adrià Garriga-Alonso, Agnieszka Kluska, Aitor Lewkowycz, Akshat Agarwal, Alethea Power, Alex Ray, Alex Warstadt, Alexander W. Kocurek, Ali Safaya, Ali Tazarv, Alice Xiang, Alicia Parrish, Allen Nie, Aman Hussain, Amanda Askell, Amanda Dsouza, Ambrose Slone, Ameet Rahane, Anantharaman S. Iyer, Anders Andreassen, Andrea Madotto, Andrea Santilli, Andreas Stuhlmüller, Andrew Dai, Andrew La, Andrew Lampinen, Andy Zou, Angela Jiang, Angelica Chen, Anh Vuong, Animesh Gupta, Anna Gottardi, Antonio Norelli, Anu Venkatesh, Arash Gholamidavoodi, Arfa Tabassum, Arul Menezes, Arun Kirubarajan, Asher Mullokandov, Ashish Sabharwal, Austin Herrick, Avia Efrat, Aykut Erdem, Ayla Karakaş, B. Ryan Roberts, Bao Sheng Loe, Barret Zoph, Bartłomiej Bojanowski, Batuhan Özyurt, Behnam Hedayatnia, Behnam Neyshabur, Benjamin Inden, Benno Stein, Berk Ekmekci, Bill Yuchen Lin, Blake Howald, Bryan Orinion, Cameron Diao, Cameron Dour, Catherine Stinson, Cedrick Argueta, César Ferri Ramírez, Chandan Singh, Charles Rathkopf, Chenlin Meng, Chitta Baral, Chiyu Wu, Chris Callison-Burch, Chris Waites, Christian Voigt, Christopher D. Manning, Christopher Potts, Cindy Ramirez, Clara E. Rivera, Clemencia Siro, Colin Raffel, Courtney Ashcraft, Cristina Garbacea, Damien Sileo, Dan Garrette, Dan Hendrycks, Dan Kilman, Dan Roth, Daniel Freeman, Daniel Khashabi, Daniel Levy, Daniel Moseguí González, Danielle Perszyk, Danny Hernandez, Danqi Chen, Daphne Ippolito, Dar Gilboa, David Dohan, David Drakard, David Jurgens, Debajyoti Datta, Deep Ganguli, Denis Emelin, Denis Kleyko, Deniz Yuret, Derek Chen, Derek Tam, Dieuwke Hupkes, Diganta Misra, Dilyar Buzan, Dimitri Coelho Mollo, Diyi Yang, Dong-Ho Lee, Dylan Schrader, Ekaterina Shutova, Ekin Dogus Cubuk, Elad Segal, Eleanor Hagerman, Elizabeth Barnes, Elizabeth Donoway, Ellie Pavlick, Emanuele Rodola, Emma Lam, Eric Chu, Eric Tang, Erkut Erdem, Ernie Chang, Ethan A. Chi, Ethan Dyer, Ethan Jerzak, Ethan Kim, Eunice Engefu Manyasi, Evgenii Zheltonozhskii, Fanyue Xia, Fatemeh Siar, Fernando Martínez-Plumed, Francesca Happé, Francois Chollet, Frieda Rong, Gaurav Mishra, Genta Indra Winata, Gerard de Melo, Germán Kruszewski, Giambattista Parascandolo, Giorgio Mariani, Gloria Wang, Gonzalo Jaimovitch-López, Gregor Betz, Guy Gur-Ari, Hana Galijasevic, Hannah Kim, Hannah Rashkin, Hannaneh Hajishirzi, Harsh Mehta, Hayden Bogar, Henry Shevlin, Hinrich Schütze, Hiromu Yakura, Hongming Zhang, Hugh Mee Wong, Ian Ng, Isaac Noble, Jaap Jumelet, Jack Geissinger, Jackson Kernion, Jacob Hilton, Jaehoon Lee, Jaime Fernández Fisac, James B. Simon, James Koppel, James Zheng, James Zou, Jan Kocoń, Jana Thompson, Janelle Wingfield, Jared Kaplan, Jarema Radom, Jascha Sohl-Dickstein, Jason Phang, Jason Wei, Jason Yosinski, Jekaterina Novikova, Jelle Bosscher, Jennifer Marsh, Jeremy Kim, Jeroen Taal, Jesse Engel, Jesujoba Alabi, Jiacheng Xu, Jiaming Song, Jillian Tang, Joan Waweru, John Burden, John Miller, John U. Balis, Jonathan Batchelder, Jonathan Berant, Jörg Frohberg, Jos Rozen, Jose Hernandez-Orallo, Joseph Boudeman, Joseph Guerr, Joseph Jones, Joshua B. Tenenbaum, Joshua S. Rule, Joyce Chua, Kamil Kanclerz, Karen Livescu, Karl Krauth, Karthik Gopalakrishnan, Katerina Ignatyeva, Katja Markert, Kaustubh D. 
Dhole, Kevin Gimpel, Kevin Omondi, Kory Mathewson, Kristen Chiafullo, Ksenia Shkaruta, Kumar Shridhar, Kyle McDonell, Kyle Richardson, Laria Reynolds, Leo Gao, Li Zhang, Liam Dugan, Lianhui Qin, Lidia Contreras-Ochando, Louis-Philippe Morency, Luca Moschella, Lucas Lam, Lucy Noble, Ludwig Schmidt, Luheng He, Luis Oliveros Colón, Luke Metz, Lütfi Kerem Şenel, Maarten Bosma, Maarten Sap, Maartje ter Hoeve, Maheen Farooqi, Manaal Faruqui, Mantas Mazeika, Marco Baturan, Marco Marelli, Marco Maru, Maria Jose Ramírez Quintana, Marie Tolkiehn, Mario Giulianelli, Martha Lewis, Martin Potthast, Matthew L. Leavitt, Matthias Hagen, Mátyás Schubert, Medina Orduna Baitemirova, Melody Arnaud, Melvin McElrath, Michael A. Yee, Michael Cohen, Michael Gu, Michael Ivanitskiy, Michael Starritt, Michael Strube, Michał Swędrowski, Michele Bevilacqua, Michihiro Yasunaga, Mihir Kale, Mike Cain, Mimee Xu, Mirac Suzgun, Mitch Walker, Mo Tiwari, Mohit Bansal, Moin Aminnaseri, Mor Geva, Mozhdeh Gheini, Mukund Varma T, Nanyun Peng, Nathan A. Chi, Nayeon Lee, Neta Gur-Ari Krakover, Nicholas Cameron, Nicholas Roberts, Nick Doiron, Nicole Martinez, Nikita Nangia, Niklas Deckers, Niklas Muennighoff, Nitish Shirish Keskar, Niveditha S. Iyer, Noah Constant, Noah Fiedel, Nuan Wen, Oliver Zhang, Omar Agha, Omar Elbaghdadi, Omer Levy, Owain Evans, Pablo Antonio Moreno Casares, Parth Doshi, Pascale Fung, Paul Pu Liang, Paul Vicol, Pegah Alipoormolabashi, Peiyuan Liao, Percy Liang, Peter Chang, Peter Eckersley, Phu Mon Htut, Pinyu Hwang, Piotr Miłkowski, Piyush Patil, Pouya Pezeshkpour, Priti Oli, Qiaozhu Mei, Qing Lyu, Qinlang Chen, Rabin Banjade, Rachel Etta Rudolph, Raefer Gabriel, Rahel Habacker, Ramon Risco, Raphaël Millière, Rhythm Garg, Richard Barnes, Rif A. Saurous, Riku Arakawa, Robbe Raymaekers, Robert Frank, Rohan Sikand, Roman Novak, Roman Sitelew, Ronan LeBras, Rosanne Liu, Rowan Jacobs, Rui Zhang, Ruslan Salakhutdinov, Ryan Chi, Ryan Lee, Ryan Stovall, Ryan Teehan, Rylan Yang, Sahib Singh, Saif M. Mohammad, Sajant Anand, Sam Dillavou, Sam Shleifer, Sam Wiseman, Samuel Gruetter, Samuel R. Bowman, Samuel S. Schoenholz, Sanghyun Han, Sanjeev Kwatra, Sarah A. Rous, Sarik Ghazarian, Sayan Ghosh, Sean Casey, Sebastian Bischoff, Sebastian Gehrmann, Sebastian Schuster, Sepideh Sadeghi, Shadi Hamdan, Sharon Zhou, Shashank Srivastava, Sherry Shi, Shikhar Singh, Shima Asaadi, Shixiang Shane Gu, Shubh Pachchigar, Shubham Toshniwal, Shyam Upadhyay, Shyamolima, Debnath, Siamak Shakeri, Simon Thormeyer, Simone Melzi, Siva Reddy, Sneha Priscilla Makini, Soo-Hwan Lee, Spencer Torene, Sriharsha Hatwar, Stanislas Dehaene, Stefan Divic, Stefano Ermon, Stella Biderman, Stephanie Lin, Stephen Prasad, Steven T. Piantadosi, Stuart M. 
Shieber, Summer Misherghi, Svetlana Kiritchenko, Swaroop Mishra, Tal Linzen, Tal Schuster, Tao Li, Tao Yu, Tariq Ali, Tatsu Hashimoto, Te-Lin Wu, Théo Desbordes, Theodore Rothschild, Thomas Phan, Tianle Wang, Tiberius Nkinyili, Timo Schick, Timofei Kornev, Titus Tunduny, Tobias Gerstenberg, Trenton Chang, Trishala Neeraj, Tushar Khot, Tyler Shultz, Uri Shaham, Vedant Misra, Vera Demberg, Victoria Nyamai, Vikas Raunak, Vinay Ramasesh, Vinay Uday Prabhu, Vishakh Padmakumar, Vivek Srikumar, William Fedus, William Saunders, William Zhang, Wout Vossen, Xiang Ren, Xiaoyu Tong, Xinran Zhao, Xinyi Wu, Xudong Shen, Yadollah Yaghoobzadeh, Yair Lakretz, Yangqiu Song, Yasaman Bahri, Yejin Choi, Yichi Yang, Yiding Hao, Yifu Chen, Yonatan Belinkov, Yu Hou, Yufang Hou, Yuntao Bai, Zachary Seid, Zhuoye Zhao, Zijian Wang, Zijie J. Wang, ZiRui Wang, Ziyi Wu

BIG-bench focuses on tasks that are believed to be beyond the capabilities of current language models.

Common Sense Reasoning Math +1

Machine Translation Robustness to Natural Asemantic Variation

1 code implementation25 May 2022 Jacob Bremerman, Xiang Ren, Jonathan May

We find that existing MT models fail when presented with NAV data, but we demonstrate strategies to improve performance on NAV by fine-tuning them with human-generated variations.

Machine Translation Translation

BITE: Textual Backdoor Attacks with Iterative Trigger Injection

1 code implementation25 May 2022 Jun Yan, Vansh Gupta, Xiang Ren

We propose BITE, a backdoor attack that poisons the training data to establish strong correlations between the target label and a set of "trigger words".

Backdoor Attack Hate Speech Detection +3

RobustLR: Evaluating Robustness to Logical Perturbation in Deductive Reasoning

1 code implementation25 May 2022 Soumya Sanyal, Zeyi Liao, Xiang Ren

Transformers have been shown to be able to perform deductive reasoning on a logical rulebase containing rules and statements written in English natural language.

Logical Reasoning Negation

Eliciting and Understanding Cross-Task Skills with Task-Level Mixture-of-Experts

1 code implementation25 May 2022 Qinyuan Ye, Juan Zha, Xiang Ren

Recent works suggest that transformer models are capable of multi-tasking on diverse NLP tasks and adapting to new tasks efficiently.

Multi-Task Learning World Knowledge +1

Cross-lingual Lifelong Learning

1 code implementation23 May 2022 Meryem M'hamdi, Xiang Ren, Jonathan May

The longstanding goal of multi-lingual learning has been to develop a universal cross-lingual model that can withstand the changes in multi-lingual data distributions.

Continual Learning Transfer Learning

NS3: Neuro-Symbolic Semantic Code Search

1 code implementation21 May 2022 Shushan Arakelyan, Anna Hakhverdyan, Miltiadis Allamanis, Luis Garcia, Christophe Hauser, Xiang Ren

We compare our model - NS3 (Neuro-Symbolic Semantic Search) - to a number of baselines, including state-of-the-art semantic code retrieval methods, and evaluate on two datasets - CodeSearchNet and Code Search and Question Answering.

Code Search Question Answering +2

On Continual Model Refinement in Out-of-Distribution Data Streams

no code implementations ACL 2022 Bill Yuchen Lin, Sida Wang, Xi Victoria Lin, Robin Jia, Lin Xiao, Xiang Ren, Wen-tau Yih

Real-world natural language processing (NLP) models need to be continually updated to fix the prediction errors in out-of-distribution (OOD) data streams while overcoming catastrophic forgetting.

Benchmarking Continual Learning

Unsupervised Cross-Task Generalization via Retrieval Augmentation

1 code implementation17 Apr 2022 Bill Yuchen Lin, Kangmin Tan, Chris Miller, Beiwen Tian, Xiang Ren

Humans can perform unseen tasks by recalling relevant skills acquired previously and then generalizing them to the target tasks, even if there is no supervision at all.

Retrieval

FaiRR: Faithful and Robust Deductive Reasoning over Natural Language

1 code implementation ACL 2022 Soumya Sanyal, Harman Singh, Xiang Ren

Recent works show that such models can also produce the reasoning steps (i.e., the proof graph) that emulate the model's logical reasoning process.

Fact Selection Logical Reasoning

Leveraging Visual Knowledge in Language Tasks: An Empirical Study on Intermediate Pre-training for Cross-modal Knowledge Transfer

no code implementations ACL 2022 Woojeong Jin, Dong-Ho Lee, Chenguang Zhu, Jay Pujara, Xiang Ren

Pre-trained language models are still far from human performance in tasks that require understanding the properties (e.g., appearance, measurable quantity) and affordances of everyday objects in the real world, since text lacks such information due to reporting bias.

Image Captioning Language Modelling +1

UNIREX: A Unified Learning Framework for Language Model Rationale Extraction

1 code implementation BigScience (ACL) 2022 Aaron Chan, Maziar Sanjabi, Lambert Mathias, Liang Tan, Shaoliang Nie, Xiaochang Peng, Xiang Ren, Hamed Firooz

An extractive rationale explains a language model's (LM's) prediction on a given task instance by highlighting the text inputs that most influenced the prediction.

Language Modelling text-classification +1

Contextualized Scene Imagination for Generative Commonsense Reasoning

1 code implementation ICLR 2022 Peifeng Wang, Jonathan Zamora, Junfeng Liu, Filip Ilievski, Muhao Chen, Xiang Ren

In this paper, we propose an Imagine-and-Verbalize (I&V) method, which learns to imagine a relational scene knowledge graph (SKG) with relations between the input concepts, and leverage the SKG as a constraint when generating a plausible scene description.

Common Sense Reasoning Descriptive +2

Sparse Distillation: Speeding Up Text Classification by Using Bigger Student Models

1 code implementation NAACL 2022 Qinyuan Ye, Madian Khabsa, Mike Lewis, Sinong Wang, Xiang Ren, Aaron Jaech

Distilling state-of-the-art transformer models into lightweight student models is an effective way to reduce computation cost at inference time.

Domain Generalization Privacy Preserving +4

On the Robustness of Reading Comprehension Models to Entity Renaming

1 code implementation NAACL 2022 Jun Yan, Yang Xiao, Sagnik Mukherjee, Bill Yuchen Lin, Robin Jia, Xiang Ren

We study the robustness of machine reading comprehension (MRC) models to entity renaming -- do models make more wrong predictions when the same questions are asked about an entity whose name has been changed?

Continual Pretraining Machine Reading Comprehension

KG-FiD: Infusing Knowledge Graph in Fusion-in-Decoder for Open-Domain Question Answering

no code implementations ACL 2022 Donghan Yu, Chenguang Zhu, Yuwei Fang, Wenhao Yu, Shuohang Wang, Yichong Xu, Xiang Ren, Yiming Yang, Michael Zeng

The recently proposed Fusion-in-Decoder (FiD), which is built on top of the pretrained generative model T5, achieves state-of-the-art performance in the reading module.

Answer Generation Decoder +5

AutoTriggER: Label-Efficient and Robust Named Entity Recognition with Auxiliary Trigger Extraction

no code implementations10 Sep 2021 Dong-Ho Lee, Ravi Kiran Selvam, Sheikh Muhammad Sarwar, Bill Yuchen Lin, Fred Morstatter, Jay Pujara, Elizabeth Boschee, James Allan, Xiang Ren

Deep neural models for named entity recognition (NER) have shown impressive results in overcoming label scarcity and generalizing to unseen entities by leveraging distant supervision and auxiliary information such as explanations.

Low Resource Named Entity Recognition named-entity-recognition +2

Discretized Integrated Gradients for Explaining Language Models

2 code implementations EMNLP 2021 Soumya Sanyal, Xiang Ren

As a prominent attribution-based explanation algorithm, Integrated Gradients (IG) is widely adopted due to its desirable explanation axioms and the ease of gradient computation.

Feature Importance Sentiment Analysis +1
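
For context, vanilla Integrated Gradients, which DIG builds on, attributes a prediction by averaging gradients along a straight-line path from a baseline to the input; DIG instead interpolates along discretized points in the embedding space. A minimal PyTorch sketch of the vanilla method; the toy function and inputs are illustrative:

```python
import torch

def integrated_gradients(f, x, baseline, steps: int = 50):
    """IG_i ≈ (x_i - baseline_i) * mean over interpolation points of dF/dx_i."""
    alphas = torch.linspace(0.0, 1.0, steps).view(-1, *([1] * x.dim()))
    points = (baseline + alphas * (x - baseline)).requires_grad_(True)
    grads = torch.autograd.grad(f(points).sum(), points)[0]
    return (x - baseline) * grads.mean(dim=0)

f = lambda z: (z ** 2).sum(dim=-1)                 # toy scalar-valued model
x, baseline = torch.tensor([1.0, 2.0, 3.0]), torch.zeros(3)
print(integrated_gradients(f, x, baseline))        # ≈ [1., 4., 9.], summing to F(x) - F(baseline)
```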

Improving Counterfactual Generation for Fair Hate Speech Detection

no code implementations ACL (WOAH) 2021 Aida Mostafazadeh Davani, Ali Omrani, Brendan Kennedy, Mohammad Atari, Xiang Ren, Morteza Dehghani

By applying logit pairing to equalize outcomes on the restricted set of counterfactuals for each instance, we improve fairness metrics while preserving model performance on hate speech detection.

counterfactual Fairness +2
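
A minimal sketch of the logit-pairing idea mentioned above: add a penalty on the gap between the classifier's logits for an input and for its group-swapped counterfactual. The model and the counterfactual inputs are hypothetical stand-ins, not the paper's generation pipeline.

```python
import torch.nn.functional as F

def logit_pairing_loss(model, input_ids, counterfactual_ids, labels, lam=1.0):
    """Task loss plus a penalty that equalizes logits across counterfactual pairs."""
    logits = model(input_ids)                  # (batch, num_classes)
    cf_logits = model(counterfactual_ids)      # same instances, mentioned social group swapped
    task_loss = F.cross_entropy(logits, labels)
    pairing_loss = (logits - cf_logits).abs().mean()
    return task_loss + lam * pairing_loss
```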

Do Language Models Perform Generalizable Commonsense Inference?

1 code implementation Findings (ACL) 2021 Peifeng Wang, Filip Ilievski, Muhao Chen, Xiang Ren

Inspired by evidence that pretrained language models (LMs) encode commonsense knowledge, recent work has applied LMs to automatically populate commonsense knowledge graphs (CKGs).

Knowledge Graphs

Common Sense Beyond English: Evaluating and Improving Multilingual Language Models for Commonsense Reasoning

1 code implementation ACL 2021 Bill Yuchen Lin, Seyeon Lee, Xiaoyang Qiao, Xiang Ren

In addition, we also create two new datasets, X-CSQA and X-CODAH, by translating their English versions to 15 other languages, so that we can evaluate popular ML-LMs for cross-lingual commonsense reasoning.

Common Sense Reasoning Sentence

Learn Continually, Generalize Rapidly: Lifelong Knowledge Accumulation for Few-shot Learning

1 code implementation Findings (EMNLP) 2021 Xisen Jin, Bill Yuchen Lin, Mohammad Rostami, Xiang Ren

The ability to continuously expand knowledge over time and utilize it to rapidly generalize to new tasks is a key feature of human linguistic intelligence.

Continual Learning Few-Shot Learning +2

FedNLP: Benchmarking Federated Learning Methods for Natural Language Processing Tasks

1 code implementation Findings (NAACL) 2022 Bill Yuchen Lin, Chaoyang He, Zihang Zeng, Hulin Wang, Yufen Huang, Christophe Dupuy, Rahul Gupta, Mahdi Soltanolkotabi, Xiang Ren, Salman Avestimehr

Increasing concerns and regulations about data privacy and sparsity necessitate the study of privacy-preserving, decentralized learning methods for natural language processing (NLP) tasks.

Benchmarking Federated Learning +5

CrossFit: A Few-shot Learning Challenge for Cross-task Generalization in NLP

3 code implementations EMNLP 2021 Qinyuan Ye, Bill Yuchen Lin, Xiang Ren

Humans can learn a new language task efficiently with only a few examples, by leveraging their knowledge obtained when learning prior tasks.

Few-Shot Learning

Cross-Attention is All You Need: Adapting Pretrained Transformers for Machine Translation

1 code implementation EMNLP 2021 Mozhdeh Gheini, Xiang Ren, Jonathan May

We study the power of cross-attention in the Transformer architecture within the context of transfer learning for machine translation, and extend the findings of studies into cross-attention when training from scratch.

Machine Translation Transfer Learning +1

Extract, Denoise and Enforce: Evaluating and Improving Concept Preservation for Text-to-Text Generation

2 code implementations EMNLP 2021 Yuning Mao, Wenchang Ma, Deren Lei, Jiawei Han, Xiang Ren

In this paper, we present a systematic analysis that studies whether current seq2seq models, especially pre-trained language models, are good enough for preserving important input concepts and to what extent explicitly guiding generation with the concepts as lexical constraints is beneficial.

Conditional Text Generation Denoising

Lawyers are Dishonest? Quantifying Representational Harms in Commonsense Knowledge Resources

no code implementations EMNLP 2021 Ninareh Mehrabi, Pei Zhou, Fred Morstatter, Jay Pujara, Xiang Ren, Aram Galstyan

In addition, we analyze two downstream models that use ConceptNet as a source for commonsense knowledge and find the existence of biases in those models as well.

Refining Language Models with Compositional Explanations

1 code implementation NeurIPS 2021 Huihan Yao, Ying Chen, Qinyuan Ye, Xisen Jin, Xiang Ren

However, such a regularization technique lacks flexibility and coverage, since only importance scores towards a pre-defined list of features are adjusted, while more complex human knowledge such as feature interaction and pattern generalization can hardly be incorporated.

Fairness Language Modelling +2

Learning to Generate Task-Specific Adapters from Task Description

1 code implementation ACL 2021 Qinyuan Ye, Xiang Ren

Recent studies further show that they can learn to generalize to novel tasks by including task descriptions as part of the source sequence and training the model with (source, target) examples.

Text Generation Zero-Shot Learning

Efficient Learning of Less Biased Models with Transfer Learning

no code implementations1 Jan 2021 Xisen Jin, Francesco Barbieri, Leonardo Neves, Xiang Ren

Prediction bias in machine learning models, referring to undesirable model behaviors that discriminate against inputs mentioning or produced by certain groups, has drawn increasing attention from the research community given its societal impact.

Transfer Learning

Learning Contextualized Knowledge Graph Structures for Commonsense Reasoning

no code implementations1 Jan 2021 Jun Yan, Mrigank Raman, Tianyu Zhang, Ryan Rossi, Handong Zhao, Sungchul Kim, Nedim Lipka, Xiang Ren

Recently, neural-symbolic architectures have achieved success on commonsense reasoning by effectively encoding relational structures retrieved from external knowledge graphs (KGs), obtaining state-of-the-art results in tasks such as (commonsense) question answering and natural language inference.

Knowledge Graphs Natural Language Inference +1

Pre-training Text-to-Text Transformers to Write and Reason with Concepts

no code implementations ICLR 2021 Wangchunshu Zhou, Dong-Ho Lee, Ravi Kiran Selvam, Seyeon Lee, Xiang Ren

To augment PTLMs with common sense, we propose generative and contrastive objectives as intermediate self-supervised pre-training tasks between general pre-training and downstream task-specific fine-tuning.

Common Sense Reasoning Language Modelling +2

ECONET: Effective Continual Pretraining of Language Models for Event Temporal Reasoning

2 code implementations EMNLP 2021 Rujun Han, Xiang Ren, Nanyun Peng

While pre-trained language models (PTLMs) have achieved noticeable success on many NLP tasks, they still struggle for tasks that require event temporal reasoning, which is essential for event-centric applications.

Continual Pretraining Language Modelling +4

Pre-training Text-to-Text Transformers for Concept-centric Common Sense

1 code implementation24 Oct 2020 Wangchunshu Zhou, Dong-Ho Lee, Ravi Kiran Selvam, Seyeon Lee, Bill Yuchen Lin, Xiang Ren

Pre-trained language models (PTLM) have achieved impressive results in a range of natural language understanding (NLU) and generation (NLG) tasks.

Common Sense Reasoning Knowledge Graphs +3

Constrained Abstractive Summarization: Preserving Factual Consistency with Constrained Generation

2 code implementations24 Oct 2020 Yuning Mao, Xiang Ren, Heng Ji, Jiawei Han

Despite significant progress, state-of-the-art abstractive summarization methods are still prone to hallucinate content inconsistent with the source document.

Abstractive Text Summarization Keyphrase Extraction

On Transferability of Bias Mitigation Effects in Language Model Fine-Tuning

no code implementations NAACL 2021 Xisen Jin, Francesco Barbieri, Brendan Kennedy, Aida Mostafazadeh Davani, Leonardo Neves, Xiang Ren

Fine-tuned language models have been shown to exhibit biases against protected groups in a host of modeling tasks such as text classification and coreference resolution.

coreference-resolution Fairness +6

Differentiable Open-Ended Commonsense Reasoning

no code implementations NAACL 2021 Bill Yuchen Lin, Haitian Sun, Bhuwan Dhingra, Manzil Zaheer, Xiang Ren, William W. Cohen

As a step towards making commonsense reasoning research more realistic, we propose to study open-ended commonsense reasoning (OpenCSR) -- the task of answering a commonsense question without any pre-defined choices -- using as a resource only a corpus of commonsense facts written in natural language.

Multiple-choice

Fair Hate Speech Detection through Evaluation of Social Group Counterfactuals

no code implementations24 Oct 2020 Aida Mostafazadeh Davani, Ali Omrani, Brendan Kennedy, Mohammad Atari, Xiang Ren, Morteza Dehghani

Counterfactual token fairness for a mentioned social group evaluates the model's predictions as to whether they are the same for (a) the actual sentence and (b) a counterfactual instance, which is generated by changing the mentioned social group in the sentence.

counterfactual Fairness +2

One-shot Learning for Temporal Knowledge Graphs

no code implementations AKBC 2021 Mehrnoosh Mirtaheri, Mohammad Rostami, Xiang Ren, Fred Morstatter, Aram Galstyan

Most real-world knowledge graphs are characterized by a long-tail relation frequency distribution where a significant fraction of relations occurs only a handful of times.

Knowledge Graphs Link Prediction +2

Will This Idea Spread Beyond Academia? Understanding Knowledge Transfer of Scientific Concepts across Text Corpora

no code implementations Findings of the Association for Computational Linguistics 2020 Hancheng Cao, Mengjie Cheng, Zhepeng Cen, Daniel A. McFarland, Xiang Ren

We extract scientific concepts (i.e., phrases) from corpora as instantiations of "research ideas", create concept-level features as motivated by the literature, and then follow the trajectories of over 450,000 new concepts (which emerged from 1995-2014) to identify factors that lead only a small proportion of these ideas to be used in inventions and drug trials.

Transfer Learning

SynSetExpan: An Iterative Framework for Joint Entity Set Expansion and Synonym Discovery

no code implementations EMNLP 2020 Jiaming Shen, Wenda Qiu, Jingbo Shang, Michelle Vanni, Xiang Ren, Jiawei Han

To facilitate the research on studying the interplays of these two tasks, we create the first large-scale Synonym-Enhanced Set Expansion (SE2) dataset via crowdsourcing.

Two Step Joint Model for Drug Drug Interaction Extraction

no code implementations28 Aug 2020 Siliang Tang, Qi Zhang, Tianpeng Zheng, Mengdi Zhou, Zhan Chen, Lixing Shen, Xiang Ren, Yueting Zhuang, ShiLiang Pu, Fei Wu

When patients need to take medicine, particularly more than one kind of drug simultaneously, they should be alerted to possible drug-drug interactions.

Decoder Drug–drug Interaction Extraction +5

Gradient-based Editing of Memory Examples for Online Task-free Continual Learning

1 code implementation NeurIPS 2021 Xisen Jin, Arka Sadhu, Junyi Du, Xiang Ren

We explore task-free continual learning (CL), in which a model is trained to avoid catastrophic forgetting in the absence of explicit task boundaries or identities.

Continual Learning

Screenplay Quality Assessment: Can We Predict Who Gets Nominated?

no code implementations WS 2020 Ming-Chang Chiu, Tiantian Feng, Xiang Ren, Shrikanth Narayanan

Toward that goal, in this work, we present a method to evaluate the quality of a screenplay based on linguistic cues.

Contextualizing Hate Speech Classifiers with Post-hoc Explanation

3 code implementations ACL 2020 Brendan Kennedy, Xisen Jin, Aida Mostafazadeh Davani, Morteza Dehghani, Xiang Ren

Hate speech classifiers trained on imbalanced datasets struggle to determine if group identifiers like "gay" or "black" are used in offensive or prejudiced ways.

IsoBN: Fine-Tuning BERT with Isotropic Batch Normalization

1 code implementation2 May 2020 Wenxuan Zhou, Bill Yuchen Lin, Xiang Ren

Fine-tuning pre-trained language models (PTLMs), such as BERT and its better variant RoBERTa, has been a common practice for advancing performance in natural language understanding (NLU) tasks.

Natural Language Understanding Representation Learning

Birds have four legs?! NumerSense: Probing Numerical Commonsense Knowledge of Pre-trained Language Models

no code implementations EMNLP 2020 Bill Yuchen Lin, Seyeon Lee, Rahul Khanna, Xiang Ren

Recent works show that pre-trained language models (PTLMs), such as BERT, possess certain commonsense and factual knowledge.
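
The probing setup behind the title can be illustrated with a fill-mask query to a masked language model; the specific model and prompt below are illustrative assumptions, not the benchmark itself.

```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="bert-base-uncased")
for prediction in fill_mask("Birds have [MASK] legs.")[:3]:
    print(prediction["token_str"], round(prediction["score"], 3))
```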

RICA: Evaluating Robust Inference Capabilities Based on Commonsense Axioms

no code implementations EMNLP 2021 Pei Zhou, Rahul Khanna, Seyeon Lee, Bill Yuchen Lin, Daniel Ho, Jay Pujara, Xiang Ren

Pre-trained language models (PTLMs) have achieved impressive performance on commonsense inference benchmarks, but their ability to employ commonsense to make robust inferences, which is crucial for effective communications with humans, is debated.

ForecastQA: A Question Answering Challenge for Event Forecasting with Temporal Text Data

no code implementations ACL 2021 Woojeong Jin, Rahul Khanna, Suji Kim, Dong-Ho Lee, Fred Morstatter, Aram Galstyan, Xiang Ren

In this work, we aim to formulate a task, construct a dataset, and provide benchmarks for developing methods for event forecasting with large volumes of unstructured text data.

Knowledge Graphs Language Modelling +5

Visually Grounded Continual Learning of Compositional Phrases

2 code implementations EMNLP 2020 Xisen Jin, Junyi Du, Arka Sadhu, Ram Nevatia, Xiang Ren

To study this human-like language acquisition ability, we present VisCOLL, a visually grounded language learning task, which simulates the continual acquisition of compositional phrases from streaming visual scenes.

Continual Learning Grounded language learning +1

Teaching Machine Comprehension with Compositional Explanations

2 code implementations Findings of the Association for Computational Linguistics 2020 Qinyuan Ye, Xiao Huang, Elizabeth Boschee, Xiang Ren

Advances in machine reading comprehension (MRC) rely heavily on the collection of large scale human-annotated examples in the form of (question, paragraph, answer) triples.

Data Augmentation Machine Reading Comprehension +1

Scalable Multi-Hop Relational Reasoning for Knowledge-Aware Question Answering

2 code implementations EMNLP 2020 Yanlin Feng, Xinyue Chen, Bill Yuchen Lin, Peifeng Wang, Jun Yan, Xiang Ren

Existing work on augmenting question answering (QA) models with external knowledge (e.g., knowledge graphs) either struggles to model multi-hop relations efficiently or lacks transparency into the model's prediction rationale.

Knowledge Graphs Question Answering +2

Learning Collaborative Agents with Rule Guidance for Knowledge Graph Reasoning

1 code implementation EMNLP 2020 Deren Lei, Gangrong Jiang, Xiaotao Gu, Kexuan Sun, Yuning Mao, Xiang Ren

Walk-based models have shown their advantages in knowledge graph (KG) reasoning by achieving decent performance while providing interpretable decisions.

reinforcement-learning Reinforcement Learning (RL)

Generating Natural Language Adversarial Examples on a Large Scale with Generative Models

no code implementations10 Mar 2020 Yankun Ren, Jianbin Lin, Siliang Tang, Jun Zhou, Shuang Yang, Yuan Qi, Xiang Ren

It can attack text classification models with a higher success rate than existing methods while providing acceptable quality to human readers.

Adversarial Text General Classification +4

Temporal Attribute Prediction via Joint Modeling of Multi-Relational Structure Evolution

1 code implementation9 Mar 2020 Sankalp Garg, Navodita Sharma, Woojeong Jin, Xiang Ren

We show that if the information contained in the graph and the time series data are closely related, then this inter-dependence can be used to predict the time series with improved accuracy.

Attribute Knowledge Graphs +4

Mining News Events from Comparable News Corpora: A Multi-Attribute Proximity Network Modeling Approach

no code implementations14 Nov 2019 Hyungsul Kim, Ahmed El-Kishky, Xiang Ren, Jiawei Han

This proximity network captures corpus-level co-occurrence statistics for candidate event descriptors and event attributes, as well as their connections.

Attribute News Summarization

Improving BERT Fine-tuning with Embedding Normalization

no code implementations10 Nov 2019 Wenxuan Zhou, Junyi Du, Xiang Ren

Large pre-trained sentence encoders like BERT start a new chapter in natural language processing.

General Classification Sentence +2