no code implementations • NAACL (SocialNLP) 2021 • Yufei Tian, Tuhin Chakrabarty, Fred Morstatter, Nanyun Peng
Discrepancies exist among different cultures or languages.
1 code implementation • EMNLP 2021 • Jiao Sun, Xuezhe Ma, Nanyun Peng
We propose to control paraphrase generation through carefully chosen target syntactic structures to generate more proper and higher-quality paraphrases.
1 code implementation • NAACL 2022 • Alexander Spangher, Xiang Ren, Jonathan May, Nanyun Peng
News article revision histories provide clues to narrative and factual evolution in news articles.
no code implementations • ACL 2022 • Ying Xu, Dakuo Wang, Mo Yu, Daniel Ritchie, Bingsheng Yao, Tongshuang Wu, Zheng Zhang, Toby Li, Nora Bradford, Branda Sun, Tran Hoang, Yisi Sang, Yufang Hou, Xiaojuan Ma, Diyi Yang, Nanyun Peng, Zhou Yu, Mark Warschauer
Through benchmarking with QG models, we show that the QG model trained on FairytaleQA is capable of asking high-quality and more diverse questions.
no code implementations • EMNLP 2021 • Rujun Han, I-Hung Hsu, Jiao Sun, Julia Baylon, Qiang Ning, Dan Roth, Nanyun Peng
While these tasks partially evaluate machines' ability to understand narratives, human-like reading comprehension requires the capability to process event-based information beyond arguments and temporal reasoning.
no code implementations • EMNLP 2021 • Zi-Yi Dou, Nanyun Peng
Phrase grounding aims to map textual phrases to their associated image regions, which can be a prerequisite for multimodal reasoning and can benefit tasks requiring identifying objects based on language.
no code implementations • 30 Oct 2024 • Haoyi Qiu, Alexander R. Fabbri, Divyansh Agarwal, Kung-Hsiang Huang, Sarah Tan, Nanyun Peng, Chien-Sheng Wu
To address these, we introduce CASA, a benchmark designed to assess LLM agents' sensitivity to cultural and social norms across two web-based tasks: online shopping and social discussion forums.
1 code implementation • 27 Oct 2024 • Xuan He, Da Yin, Nanyun Peng
Specifically, training on hard task supervision with the same outcome error rates but disparate step-wise error rates can lead to a 30% accuracy gap on the MATH benchmark.
no code implementations • 26 Oct 2024 • Zhecheng Li, Yiwei Wang, Bryan Hooi, Yujun Cai, Zhen Xiong, Nanyun Peng, Kai-Wei Chang
Text classification involves categorizing a given text, such as determining its sentiment or identifying harmful content.
no code implementations • 26 Oct 2024 • Zhecheng Li, Yiwei Wang, Bryan Hooi, Yujun Cai, Naifan Cheung, Nanyun Peng, Kai-Wei Chang
To resolve this question, we fully explore the potential of large language models on the cross-lingual summarization task for low-resource languages through our four-step zero-shot method: Summarization, Improvement, Translation and Refinement (SITR), with correspondingly designed prompts.
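The four-step SITR recipe translates naturally into a prompt chain. Below is a minimal sketch of such a pipeline, assuming a generic `call_llm` completion function; the prompt wordings are illustrative placeholders, not the paper's designed prompts.

```python
# Minimal sketch of an SITR-style prompt chain (Summarization,
# Improvement, Translation, Refinement). `call_llm` is a hypothetical
# stand-in for any chat-completion client; the prompts below are
# illustrative, not the paper's carefully designed ones.

def call_llm(prompt: str) -> str:
    raise NotImplementedError("plug in an LLM client here")

def sitr(document: str, target_lang: str) -> str:
    # Step 1: summarize the source document.
    summary = call_llm(f"Summarize the following text:\n{document}")
    # Step 2: improve the summary for coverage and fluency.
    improved = call_llm(f"Improve this summary:\n{summary}")
    # Step 3: translate the improved summary into the target language.
    translated = call_llm(f"Translate into {target_lang}:\n{improved}")
    # Step 4: refine the translation in the target language.
    return call_llm(f"Refine this {target_lang} text:\n{translated}")
```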
no code implementations • 24 Oct 2024 • Tanmay Parekh, Jeffrey Kwan, Jiarui Yu, Sparsh Johri, Hyosang Ahn, Sreya Muppalla, Kai-Wei Chang, Wei Wang, Nanyun Peng
However, these works focused only on English posts, while epidemics can occur anywhere in the world and early discussions are often in local, non-English languages.
1 code implementation • 20 Oct 2024 • Yuankai Li, Jia-Chen Gu, Di wu, Kai-Wei Chang, Nanyun Peng
Based on our synthetic data built entirely by open-source models, BRIEF generates more concise summaries and enables a range of LLMs to achieve exceptional open-domain question answering (QA) performance.
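The compress-then-read pattern behind this line of work can be sketched in a few lines; the function names below are hypothetical placeholders, not BRIEF's released interface.

```python
# Sketch of compression-before-reading for retrieval-augmented QA:
# retrieved passages are distilled into a short, query-aware summary
# before the reader LLM answers. `summarizer` and `reader` are
# placeholder callables, not BRIEF's actual models.

def compress(question: str, passages: list[str], summarizer) -> str:
    joined = "\n\n".join(passages)
    return summarizer(
        f"Question: {question}\n"
        f"Summarize only the evidence needed to answer it:\n{joined}"
    )

def answer(question: str, passages: list[str], summarizer, reader) -> str:
    brief = compress(question, passages, summarizer)  # much shorter context
    return reader(f"Context: {brief}\nQuestion: {question}\nAnswer:")
```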
no code implementations • 10 Oct 2024 • WenBo Hu, Jia-Chen Gu, Zi-Yi Dou, Mohsen Fayyaz, Pan Lu, Kai-Wei Chang, Nanyun Peng
In this paper, we introduce a multimodal retrieval-augmented generation benchmark, MRAG-Bench, in which we systematically identify and categorize scenarios where visually augmented knowledge is better than textual knowledge, for instance, more images from varying viewpoints.
no code implementations • 9 Oct 2024 • Thomas Palmeira Ferraz, Kartik Mehta, Yu-Hsiang Lin, Haw-Shiuan Chang, Shereen Oraby, Sijia Liu, Vivek Subramanian, Tagyoung Chung, Mohit Bansal, Nanyun Peng
To address the performance gap between open-source and proprietary models, we propose the Decompose, Critique and Refine (DeCRIM) self-correction pipeline, which enhances LLMs' ability to follow constraints.
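A minimal sketch of the Decompose-Critique-Refine loop follows, assuming a generic `llm` completion function; the prompts and stopping rule are illustrative simplifications of the pipeline the entry names.

```python
# Sketch of a DeCRIM-style self-correction loop: decompose an
# instruction into constraints, critique the draft against each one,
# and refine until every constraint passes. `llm` is a placeholder
# completion function; prompts are illustrative, not from the paper.

def decrim(instruction: str, llm, max_rounds: int = 3) -> str:
    # Decompose: list the atomic constraints in the instruction.
    constraints = llm(
        f"List, one per line, the constraints in:\n{instruction}"
    ).splitlines()
    draft = llm(instruction)
    for _ in range(max_rounds):
        # Critique: check the draft against each constraint.
        feedback = [
            llm(f"Does this response satisfy '{c}'? "
                f"Answer PASS or explain the failure:\n{draft}")
            for c in constraints
        ]
        failures = [f for f in feedback if not f.startswith("PASS")]
        if not failures:  # all constraints satisfied
            return draft
        # Refine: revise the draft to address the failures.
        draft = llm(
            "Revise the response to fix these issues:\n"
            + "\n".join(failures)
            + f"\n\nResponse:\n{draft}"
        )
    return draft
```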
no code implementations • 6 Oct 2024 • Bingxuan Li, Yiwei Wang, Tao Meng, Kai-Wei Chang, Nanyun Peng
This paper investigates controllable generation for large language models (LLMs) with prompt-based control, focusing on Lexically Constrained Generation (LCG).
no code implementations • 4 Oct 2024 • Yufei Tian, Zeyu Pan, Nanyun Peng
The increasing capability of large language models (LLMs) to generate fluent long-form texts is presenting new challenges in distinguishing machine-generated outputs from human-written ones, which is crucial for ensuring authenticity and trustworthiness of expressions.
no code implementations • 5 Sep 2024 • Cheng Wang, Yiwei Wang, Bryan Hooi, Yujun Cai, Nanyun Peng, Kai-Wei Chang
The training data in large language models is key to their success, but it also presents privacy and security risks, as it may contain sensitive information.
no code implementations • 30 Aug 2024 • Songyan Zhao, Bingxuan Li, Yufei Tian, Nanyun Peng
Automatic melody-to-lyric generation aims to produce lyrics that align with a given melody.
no code implementations • 19 Aug 2024 • Xiaomeng Jin, Jeonghwan Kim, Yu Zhou, Kuan-Hao Huang, Te-Lin Wu, Nanyun Peng, Heng Ji
To address these issues, we propose Attribute-based Multimodal Data Augmentation (ARMADA), a novel multimodal data augmentation method via knowledge-guided manipulation of visual attributes of the mentioned entities.
no code implementations • 7 Aug 2024 • Zi-Yi Dou, Xitong Yang, Tushar Nagarajan, Huiyu Wang, Jing Huang, Nanyun Peng, Kris Kitani, Fu-Jen Chu
We present EMBED (Egocentric Models Built with Exocentric Data), a method designed to transform exocentric video-language data for egocentric video representation learning.
no code implementations • 2 Aug 2024 • Ashima Suvarna, Xiao Liu, Tanmay Parekh, Kai-Wei Chang, Nanyun Peng
In QUD parsing, each sentence is viewed as an answer to a question triggered by an anchor sentence in prior context.
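The QUD view suggests a simple representation: each analyzed sentence carries a pointer to its anchor sentence and the implicit question it answers. A minimal sketch, with illustrative field names rather than the paper's schema:

```python
# Minimal data structure for QUD-style discourse annotation: each
# sentence links to an anchor sentence in prior context and to the
# implicit question that anchor triggers. Field names are illustrative.

from dataclasses import dataclass

@dataclass
class QUDEdge:
    answer_idx: int   # index of the sentence being analyzed
    anchor_idx: int   # index of the earlier sentence triggering the question
    question: str     # the implicit question the answer sentence addresses

doc = [
    "The city council approved the budget.",
    "It allocates $2M to road repairs.",
]
edges = [QUDEdge(answer_idx=1, anchor_idx=0,
                 question="What does the approved budget contain?")]
```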
1 code implementation • 18 Jul 2024 • Yufei Tian, Tenghao Huang, Miri Liu, Derek Jiang, Alexander Spangher, Muhao Chen, Jonathan May, Nanyun Peng
This paper investigates the capability of LLMs in storytelling, focusing on narrative development and plot progression.
no code implementations • 28 Jun 2024 • Mohsen Fayyaz, Fan Yin, Jiao Sun, Nanyun Peng
We study how well large language models (LLMs) explain their generations through rationales -- a set of tokens extracted from the input text that reflect the decision-making process of LLMs.
1 code implementation • 20 Jun 2024 • Silin Meng, Yiwei Wang, Cheng-Fu Yang, Nanyun Peng, Kai-Wei Chang
Path planning is a fundamental scientific problem in robotics and autonomous navigation, requiring the derivation of efficient routes from starting to destination points while avoiding obstacles.
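For readers unfamiliar with the underlying problem, the classical formulation can be made concrete with a textbook A* search on an occupancy grid; this is standard background, not the paper's LLM-based method.

```python
# Textbook A* on a 4-connected grid: shortest obstacle-free route from
# start to goal. Standard background for the path planning problem.

import heapq

def astar(grid, start, goal):
    """grid: 2D list, 0 = free, 1 = obstacle; start/goal: (row, col)."""
    def h(p):  # Manhattan-distance heuristic (admissible on a grid)
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])

    frontier = [(h(start), 0, start, [start])]
    seen = set()
    while frontier:
        _, cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        if node in seen:
            continue
        seen.add(node)
        r, c = node
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nr < len(grid) and 0 <= nc < len(grid[0])
                    and grid[nr][nc] == 0 and (nr, nc) not in seen):
                heapq.heappush(frontier,
                               (cost + 1 + h((nr, nc)), cost + 1,
                                (nr, nc), path + [(nr, nc)]))
    return None  # goal unreachable

print(astar([[0, 0, 0], [1, 1, 0], [0, 0, 0]], (0, 0), (2, 0)))
```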
1 code implementation • 19 Jun 2024 • Xueqing Wu, Zongyu Lin, Songyan Zhao, Te-Lin Wu, Pan Lu, Nanyun Peng, Kai-Wei Chang
Visual programs are executable code generated by large language models to address visual reasoning problems.
1 code implementation • 19 Jun 2024 • Honghua Zhang, Po-Nien Kung, Masahiro Yoshida, Guy Van Den Broeck, Nanyun Peng
Despite the success of Large Language Models (LLMs) on various tasks following human instructions, controlling model generation at inference time poses a persistent challenge.
1 code implementation • 19 Jun 2024 • Di wu, Jia-Chen Gu, Fan Yin, Nanyun Peng, Kai-Wei Chang
Retrieval-augmented language models (RALMs) have shown strong performance and wide applicability in knowledge-intensive tasks.
1 code implementation • 18 Jun 2024 • Fabrice Harel-Canada, Hanyu Zhou, Sreya Muppalla, Zeynep Yildiz, Miryung Kim, Amit Sahai, Nanyun Peng
By shifting the focus from text to reader, the Psychological Depth Scale is a validated, automated, and systematic means of measuring the capacity of LLMs to connect with humans through the stories they tell.
no code implementations • 11 Jun 2024 • Haw-Shiuan Chang, Nanyun Peng, Mohit Bansal, Anil Ramakrishna, Tagyoung Chung
If an LLM's entropy is higher than the asymptotic entropy (i.e., the LLM is more uncertain than it should be), the THF model predicts a high hallucination hazard, which leads to a lower p threshold in REAL sampling.
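The thresholding intuition can be sketched as follows; the actual THF hazard model is learned, so the simple excess-entropy penalty and constants here are illustrative assumptions only.

```python
# Sketch of the entropy-comparison intuition behind REAL sampling:
# when the model's next-token entropy exceeds an estimated asymptotic
# entropy (it is "more uncertain than it should be"), shrink the
# nucleus (top-p) threshold. The learned THF hazard model is replaced
# here by a simple excess-entropy penalty; constants are illustrative.

import math

def entropy(probs):
    return -sum(p * math.log(p) for p in probs if p > 0)

def adaptive_top_p(probs, asymptotic_entropy, base_p=0.9):
    hazard = max(0.0, entropy(probs) - asymptotic_entropy)  # excess uncertainty
    # Higher hazard -> lower p, i.e. more conservative truncation.
    return base_p / (1.0 + hazard)

probs = [0.4, 0.3, 0.2, 0.1]
print(adaptive_top_p(probs, asymptotic_entropy=1.0))
```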
no code implementations • 8 Jun 2024 • I-Hung Hsu, Zifeng Wang, Long T. Le, Lesly Miculicich, Nanyun Peng, Chen-Yu Lee, Tomas Pfister
Grounded generation aims to equip language models (LMs) with the ability to produce more credible and accountable responses by accurately citing verifiable sources.
1 code implementation • 3 Jun 2024 • Zi-Yi Dou, Cheng-Fu Yang, Xueqing Wu, Kai-Wei Chang, Nanyun Peng
Finetuning language agents with reasoning-action trajectories is effective, but obtaining these trajectories from human annotations or stronger models is costly and sometimes impractical.
1 code implementation • 29 May 2024 • WenBo Hu, Zi-Yi Dou, Liunian Harold Li, Amita Kamath, Nanyun Peng, Kai-Wei Chang
This raises the question: can we achieve flexibility in the number of visual tokens to suit different tasks and computational resources?
no code implementations • 8 May 2024 • Xuehai He, Jian Zheng, Jacob Zhiyuan Fang, Robinson Piramuthu, Mohit Bansal, Vicente Ordonez, Gunnar A Sigurdsson, Nanyun Peng, Xin Eric Wang
Controllable text-to-image (T2I) diffusion models generate images conditioned on both text prompts and semantic inputs of other modalities like edge maps.
no code implementations • 27 Apr 2024 • Masoud Monajatipoor, Zi-Yi Dou, Aichi Chien, Nanyun Peng, Kai-Wei Chang
Vision-language models have become increasingly powerful for tasks that require an understanding of both visual and linguistic elements, bridging the gap between these modalities.
1 code implementation • 25 Apr 2024 • Chujie Zheng, Ziqi Wang, Heng Ji, Minlie Huang, Nanyun Peng
Through experiments with twelve open-source LLMs on HuggingFace, we demonstrate that ExPO consistently improves off-the-shelf DPO/RLHF models, as evaluated on the mainstream LLM benchmarks AlpacaEval 2.0 and MT-Bench.
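At its core, ExPO extrapolates in weight space from the SFT checkpoint through the aligned checkpoint. A minimal sketch, with plain floats standing in for full parameter tensors and alpha as a tunable assumption:

```python
# Sketch of weight extrapolation in the spirit of ExPO: move from the
# SFT weights *through* the aligned (DPO/RLHF) weights and a bit
# further along the same direction. Plain floats stand in for the
# per-parameter tensors of a real state dict; alpha is a tunable.

def expo(sft: dict, aligned: dict, alpha: float = 0.3) -> dict:
    # theta_expo = theta_aligned + alpha * (theta_aligned - theta_sft)
    return {k: aligned[k] + alpha * (aligned[k] - sft[k]) for k in aligned}

print(expo({"w": 1.0}, {"w": 1.5}))  # {'w': 1.65}
```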
1 code implementation • 22 Apr 2024 • Haoyi Qiu, WenBo Hu, Zi-Yi Dou, Nanyun Peng
Our work also highlights the critical balance between faithfulness and coverage of model outputs, and encourages future works to address hallucinations in LVLMs while keeping their outputs informative.
no code implementations • 7 Apr 2024 • Hritik Bansal, Po-Nien Kung, P. Jeffrey Brantingham, Kai-Wei Chang, Nanyun Peng
In this paper, we propose GenEARL, a training-free generative framework that harnesses the power of modern generative models to understand event task descriptions given image contexts to perform the EARL task.
no code implementations • 3 Apr 2024 • Ashima Suvarna, Harshita Khandelwal, Nanyun Peng
To this end, we present PhonologyBench, a novel benchmark consisting of three diagnostic tasks designed to explicitly test the phonological skills of LLMs in English: grapheme-to-phoneme conversion, syllable counting, and rhyme word generation.
1 code implementation • 2 Apr 2024 • Tanmay Parekh, Anh Mac, Jiarui Yu, Yuxuan Dong, Syed Shahriar, Bonnie Liu, Eric Yang, Kuan-Hao Huang, Wei Wang, Nanyun Peng, Kai-Wei Chang
In our work, we pioneer exploiting Event Detection (ED) for better preparedness and early warnings of any upcoming epidemic by developing a framework to extract and analyze epidemic-related events from social media posts.
1 code implementation • 31 Mar 2024 • Hritik Bansal, Ashima Suvarna, Gantavya Bhatt, Nanyun Peng, Kai-Wei Chang, Aditya Grover
A common technique for aligning large language models (LLMs) relies on acquiring human preferences by comparing multiple generations conditioned on a fixed context.
no code implementations • 22 Mar 2024 • I-Hung Hsu, Zihan Xue, Nilay Pochh, Sahil Bansal, Premkumar Natarajan, Jayanth Srinivasa, Nanyun Peng
Event linking connects event mentions in text with relevant nodes in a knowledge base (KB).
no code implementations • 5 Mar 2024 • Zefan Cai, Po-Nien Kung, Ashima Suvarna, Mingyu Derek Ma, Hritik Bansal, Baobao Chang, P. Jeffrey Brantingham, Wei Wang, Nanyun Peng
We hypothesize that a diverse set of event types and definitions is key for models to learn to follow event definitions, whereas existing event extraction datasets focus on annotating many high-quality examples for only a few event types.
1 code implementation • 4 Mar 2024 • Xueqing Wu, Rui Zheng, Jingzhen Sha, Te-Lin Wu, Hanyu Zhou, Mohan Tang, Kai-Wei Chang, Nanyun Peng, Haoran Huang
We construct the DACO dataset, containing (1) 440 databases (of tabular data) collected from real-world scenarios, (2) ~2k query-answer pairs that can serve as weak supervision for model training, and (3) a concentrated but high-quality test set with human-refined annotations that serves as our main evaluation benchmark.
2 code implementations • 31 Jan 2024 • Chujie Zheng, Fan Yin, Hao Zhou, Fandong Meng, Jie zhou, Kai-Wei Chang, Minlie Huang, Nanyun Peng
In this work, we investigate how LLMs' behavior (i.e., complying with or refusing user queries) is affected by safety prompts from the perspective of model representation.
1 code implementation • 24 Jan 2024 • Rohan Wadhawan, Hritik Bansal, Kai-Wei Chang, Nanyun Peng
We conduct experiments to assess the performance of 14 foundation models, including GPT-4V, Gemini-Pro-Vision, and LLaVA-Next, and establish a human performance baseline.
2 code implementations • 19 Jan 2024 • Yiwei Wang, Muhao Chen, Nanyun Peng, Kai-Wei Chang
How to edit the knowledge in multi-step reasoning has become the major challenge in the knowledge editing (KE) of large language models (LLMs).
1 code implementation • 9 Jan 2024 • Jia-Chen Gu, Hao-Xiang Xu, Jun-Yu Ma, Pan Lu, Zhen-Hua Ling, Kai-Wei Chang, Nanyun Peng
Model editing is a technique that edits the large language models (LLMs) with updated knowledge to alleviate hallucinations without resource-intensive retraining.
no code implementations • 1 Jan 2024 • Wenxuan Wang, Haonan Bai, Jen-tse Huang, Yuxuan Wan, Youliang Yuan, Haoyi Qiu, Nanyun Peng, Michael R. Lyu
BiasPainter uses a diverse range of seed images of individuals and prompts the image generation models to edit these images using gender-, race-, and age-neutral queries.
1 code implementation • 16 Nov 2023 • Haoyi Qiu, Kung-Hsiang Huang, Jingnong Qu, Nanyun Peng
Prior works on evaluating the factual consistency of summarization often take entailment-based approaches that first generate perturbed (factually inconsistent) summaries and then train a classifier on the generated data to detect factual inconsistencies at test time.
1 code implementation • 16 Nov 2023 • Kuan-Hao Huang, I-Hung Hsu, Tanmay Parekh, Zhiyu Xie, Zixuan Zhang, Premkumar Natarajan, Kai-Wei Chang, Nanyun Peng, Heng Ji
In this work, we identify and address evaluation challenges, including inconsistency due to varying data assumptions or preprocessing steps, the insufficiency of current evaluation frameworks that may introduce dataset or data split bias, and the low reproducibility of some previous approaches.
1 code implementation • 16 Nov 2023 • Yufei Tian, Abhilasha Ravichander, Lianhui Qin, Ronan Le Bras, Raja Marjieh, Nanyun Peng, Yejin Choi, Thomas L. Griffiths, Faeze Brahman
We explore the creative problem-solving capabilities of modern LLMs in a novel constrained setting.
no code implementations • 16 Nov 2023 • Alexander Spangher, Emilio Ferrara, Ben Welsh, Nanyun Peng, Serdar Tumgoren, Jonathan May
Journalists must find stories in huge amounts of textual data (e.g., leaks, bills, press releases) as part of their jobs: determining when and why text becomes news can help us understand coverage patterns and help us build assistive tools.
1 code implementation • 2 Nov 2023 • Te-Lin Wu, Zi-Yi Dou, Qingyuan Hu, Yu Hou, Nischal Reddy Chandra, Marjorie Freedman, Ralph M. Weischedel, Nanyun Peng
Multimodal counterfactual reasoning is a vital yet challenging ability for AI systems.
1 code implementation • 1 Nov 2023 • Po-Nien Kung, Fan Yin, Di wu, Kai-Wei Chang, Nanyun Peng
Instruction tuning (IT) achieves impressive zero-shot generalization results by training large language models (LLMs) on a massive amount of diverse tasks with instructions.
no code implementations • 25 Oct 2023 • Yufei Tian, Felix Zhang, Nanyun Peng
Large language models (LLMs) such as GPT-3 have demonstrated a strong capability to generate coherent and contextually relevant text.
1 code implementation • 23 Oct 2023 • Te-Lin Wu, Yu Zhou, Nanyun Peng
The ability to actively ground task instructions from an egocentric view is crucial for AI agents to accomplish tasks or assist humans virtually.
1 code implementation • 23 Oct 2023 • Jiao Sun, Yufei Tian, Wangchunshu Zhou, Nan Xu, Qian Hu, Rahul Gupta, John Frederick Wieting, Nanyun Peng, Xuezhe Ma
While recent studies have examined the abilities of large language models on various benchmark tasks, including question generation, reading comprehension, and multilingual understanding, few studies have looked into the controllability of large language models on generation tasks.
1 code implementation • 13 Oct 2023 • Yixin Wan, George Pu, Jiao Sun, Aparna Garimella, Kai-Wei Chang, Nanyun Peng
Through benchmark evaluation of two popular LLMs, ChatGPT and Alpaca, we reveal significant gender biases in LLM-generated recommendation letters.
no code implementations • 13 Oct 2023 • Mingyu Derek Ma, Jiun-Yu Kao, Arpit Gupta, Yu-Hsiang Lin, Wenbo Zhao, Tagyoung Chung, Wei Wang, Kai-Wei Chang, Nanyun Peng
Based on the intuition that a model would tend to be more biased if it learns from a biased example, we measure the bias level of a query instance by observing its influence on another instance.
1 code implementation • 8 Oct 2023 • Yixin Wan, Jieyu Zhao, Aman Chadha, Nanyun Peng, Kai-Wei Chang
Recent advancements in Large Language Models empower them to follow freeform instructions, including imitating generic or specific demographic personas in conversations.
no code implementations • 4 Oct 2023 • Mingyu Derek Ma, Alexander K. Taylor, Nuan Wen, Yanchen Liu, Po-Nien Kung, Wenna Qin, Shicheng Wen, Azure Zhou, Diyi Yang, Xuezhe Ma, Nanyun Peng, Wei Wang
We present MIDDAG, an intuitive, interactive system that visualizes the information propagation paths on social media triggered by COVID-19-related news articles, accompanied by comprehensive insights, including user/community susceptibility levels as well as events and popular opinions raised by the crowd while propagating the information.
1 code implementation • 16 Sep 2023 • Tanmay Parekh, I-Hung Hsu, Kuan-Hao Huang, Kai-Wei Chang, Nanyun Peng
Label projection, which involves obtaining translated labels and texts jointly, is essential for leveraging machine translation to facilitate cross-lingual transfer in structured prediction tasks.
2 code implementations • 24 Jul 2023 • Kevin Yang, Dan Klein, Asli Celikyilmaz, Nanyun Peng, Yuandong Tian
We propose Reinforcement Learning from Contrastive Distillation (RLCD), a method for aligning language models to follow principles expressed in natural language (e.g., to be more harmless) without using human feedback.
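The data-generation step that replaces human feedback can be sketched as follows; `llm` and the principle prompts are placeholders, and the downstream reward-model training and RL stages are omitted.

```python
# Sketch of the data-generation step in an RLCD-style pipeline: the
# same query is answered under a positive and a negative "principle"
# prompt, and the pair is auto-labeled by which prompt produced it
# (no human feedback needed). `llm` and the prompt texts are
# placeholders; reward-model training and the RL step are omitted.

def make_preference_pair(query: str, llm):
    pos = llm(f"(Be as harmless and helpful as possible.)\n"
              f"User: {query}\nAssistant:")
    neg = llm(f"(Respond in a harmful, unhelpful way.)\n"
              f"User: {query}\nAssistant:")
    # The positive-prompt output is labeled preferred by construction.
    return {"prompt": query, "chosen": pos, "rejected": neg}
```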
no code implementations • 27 Jun 2023 • Xiang 'Anthony' Chen, Jeff Burke, Ruofei Du, Matthew K. Hong, Jennifer Jacobs, Philippe Laban, DIngzeyu Li, Nanyun Peng, Karl D. D. Willis, Chien-Sheng Wu, Bolei Zhou
Through iterative, cross-disciplinary discussions, we define and propose next-steps for Human-centered Generative AI (HGAI).
1 code implementation • 20 Jun 2023 • Sidi Lu, Wenbo Zhao, Chenyang Tao, Arpit Gupta, Shanchan Wu, Tagyoung Chung, Nanyun Peng
NeurAlly-Decomposed Oracle (NADO) is a powerful approach for controllable generation with large language models.
1 code implementation • 20 Jun 2023 • Sidi Lu, Hongyi Liu, Asli Celikyilmaz, Tianlu Wang, Nanyun Peng
We investigate CDM for open-domain text generation evaluation under two paradigms: 1) _Generative_ CDM, which harnesses the contrast of two language models' distributions to generate synthetic examples for training discriminator-based metrics; 2) _Discriminative_ CDM, which directly uses distribution disparities between two language models for evaluation.
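The discriminative variant, in particular, reduces to scoring a text by the likelihood gap between the two language models. A minimal sketch; the paper's exact scoring function may differ, and the callables are placeholders.

```python
# Sketch of scoring by contrastive distribution disparity: sum the
# per-token log-probability gaps between two LMs. `logprob_a` and
# `logprob_b` are placeholder callables returning log p(token | prefix).

def contrastive_score(tokens, logprob_a, logprob_b):
    score = 0.0
    for i, tok in enumerate(tokens):
        # A large positive score means the text looks far more like
        # LM A's distribution than LM B's.
        score += logprob_a(tokens[:i], tok) - logprob_b(tokens[:i], tok)
    return score
```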
1 code implementation • 30 May 2023 • Yufei Tian, Anjali Narayan-Chen, Shereen Oraby, Alessandra Cervone, Gunnar Sigurdsson, Chenyang Tao, Wenbo Zhao, YiWen Chen, Tagyoung Chung, Jing Huang, Nanyun Peng
Automatic melody-to-lyric generation is a task in which song lyrics are generated to go with a given melody.
no code implementations • 26 May 2023 • Paulina Toro Isaza, Guangxuan Xu, Akintoye Oloko, Yufang Hou, Nanyun Peng, Dakuo Wang
Social biases and stereotypes are embedded in our culture in part through their presence in our stories, as evidenced by the rich history of humanities and social science literature analyzing such biases in children's stories.
1 code implementation • 26 May 2023 • I-Hung Hsu, Zhiyu Xie, Kuan-Hao Huang, Prem Natarajan, Nanyun Peng
However, existing generation-based EAE models mostly focus on problem re-formulation and prompt design, without incorporating additional information that has been shown to be effective for classification-based models, such as the abstract meaning representation (AMR) of the input passages.
no code implementations • 26 May 2023 • I-Hung Hsu, Avik Ray, Shubham Garg, Nanyun Peng, Jing Huang
In this work, we study the problem of synthesizing code-switched texts for language pairs absent from the training data.
no code implementations • 24 May 2023 • Mingyu Derek Ma, Xiaoxuan Wang, Po-Nien Kung, P. Jeffrey Brantingham, Nanyun Peng, Wei Wang
Information extraction tasks such as event extraction require an in-depth understanding of the output structure and sub-task dependencies.
1 code implementation • 24 May 2023 • Alexander Spangher, Nanyun Peng, Jonathan May, Emilio Ferrara
News articles are driven by the informational sources journalists use in reporting.
1 code implementation • 24 May 2023 • Haoyi Qiu, Zi-Yi Dou, Tianlu Wang, Asli Celikyilmaz, Nanyun Peng
Model-based evaluation metrics (e.g., CLIPScore and GPTScore) have demonstrated decent correlations with human judgments in various language generation tasks.
no code implementations • 23 May 2023 • Zi-Yi Dou, Feng Gao, Nanyun Peng
In this paper, we introduce a masked path modeling (MPM) objective, which pretrains an agent using self-collected data for downstream navigation tasks.
no code implementations • 19 May 2023 • Po-Nien Kung, Nanyun Peng
Our experiments show that models trained on simplified task definitions or delusive examples can achieve performance comparable to those trained on the original instructions and examples.
1 code implementation • 12 May 2023 • Sarik Ghazarian, Yijia Shao, Rujun Han, Aram Galstyan, Nanyun Peng
We take the first step by focusing on event commonsense that considers events and their relations, and is crucial in both dialogues and general commonsense reasoning.
no code implementations • 12 May 2023 • Yufei Tian, Anjali Narayan-Chen, Shereen Oraby, Alessandra Cervone, Gunnar Sigurdsson, Chenyang Tao, Wenbo Zhao, Tagyoung Chung, Jing Huang, Nanyun Peng
At inference time, we leverage the crucial alignments between melody and lyrics and compile the given melody into constraints to guide the generation process.
1 code implementation • 15 Apr 2023 • Honghua Zhang, Meihua Dang, Nanyun Peng, Guy Van Den Broeck
To overcome this challenge, we propose to use tractable probabilistic models (TPMs) to impose lexical constraints in autoregressive text generation models, which we refer to as GeLaTo (Generating Language with Tractable Constraints).
no code implementations • 26 Jan 2023 • Mingyu Derek Ma, Jiun-Yu Kao, Shuyang Gao, Arpit Gupta, Di Jin, Tagyoung Chung, Nanyun Peng
Dialogue state tracking (DST) is an important step in dialogue management to keep track of users' beliefs.
no code implementations • 5 Jan 2023 • Alexander Spangher, Xinyu Hua, Yao Ming, Nanyun Peng
While GPT-2 generates sentences that are remarkably human-like, longer documents can ramble and do not follow human-like writing structure.
1 code implementation • CVPR 2023 • Xueyan Zou, Zi-Yi Dou, Jianwei Yang, Zhe Gan, Linjie Li, Chunyuan Li, Xiyang Dai, Harkirat Behl, JianFeng Wang, Lu Yuan, Nanyun Peng, Lijuan Wang, Yong Jae Lee, Jianfeng Gao
We present X-Decoder, a generalized decoding model that can predict pixel-level segmentation and language tokens seamlessly.
1 code implementation • 20 Dec 2022 • Kevin Yang, Dan Klein, Nanyun Peng, Yuandong Tian
In human evaluations of automatically generated stories, DOC substantially outperforms a strong Re3 baseline (Yang et al., 2022) on plot coherence (22.5% absolute gain), outline relevance (28.2%), and interestingness (20.7%).
1 code implementation • 3 Dec 2022 • Arshiya Aggarwal, Jiao Sun, Nanyun Peng
These fixed prefix templates could themselves be specific in terms of styles or linguistic structures, which may lead to unreliable fairness conclusions that are not representative of the general trends from tone-varying prompts.
no code implementations • 25 Nov 2022 • Zhixuan Zhou, Jiao Sun, Jiaxin Pei, Nanyun Peng, JinJun Xiong
Our analysis further reveals stereotypical portrayals of both male and female characters in terms of moral foundations and events.
1 code implementation • 24 Oct 2022 • Jiao Sun, Anjali Narayan-Chen, Shereen Oraby, Alessandra Cervone, Tagyoung Chung, Jing Huang, Yang Liu, Nanyun Peng
The tasks of humor understanding and generation are challenging and subjective even for humans, requiring commonsense and real-world knowledge to master.
1 code implementation • 24 Oct 2022 • Jiao Sun, Anjali Narayan-Chen, Shereen Oraby, Shuyang Gao, Tagyoung Chung, Jing Huang, Yang Liu, Nanyun Peng
In this work, we propose a new task, context-situated pun generation, where a specific context represented by a set of keywords is provided, and the task is to first identify suitable pun words that are appropriate for the context, then generate puns based on the context keywords and the identified pun words.
1 code implementation • 24 Oct 2022 • Yufei Tian, Divyanshu Sheth, Nanyun Peng
We propose a unified framework to generate both homophonic and homographic puns to resolve the split-up in existing works.
1 code implementation • 22 Oct 2022 • Guangxuan Xu, Ruibo Liu, Fabrice Harel-Canada, Nischal Reddy Chandra, Nanyun Peng
We propose EnDex, the first human-reaction based model to evaluate dialogue engagingness.
2 code implementations • 16 Oct 2022 • Hong Chen, Rujun Han, Te-Lin Wu, Hideki Nakayama, Nanyun Peng
This task requires machines to 1) understand long text inputs and 2) produce a globally consistent image sequence that illustrates the contents of the story.
1 code implementation • 13 Oct 2022 • Kevin Yang, Yuandong Tian, Nanyun Peng, Dan Klein
We consider the problem of automatically generating longer stories of over two thousand words.
no code implementations • 24 Sep 2022 • Nanyun Peng
Recent advances in large pre-trained language models have demonstrated strong results in generating natural language and have significantly improved performance on many natural language generation (NLG) applications such as machine translation and text summarization.
no code implementations • 17 Aug 2022 • Guangxuan Xu, Paulina Toro Isaza, Moshi Li, Akintoye Oloko, Bingsheng Yao, Cassia Sanctos, Aminat Adebiyi, Yufang Hou, Nanyun Peng, Dakuo Wang
To understand a narrative, it is essential to comprehend the temporal event flows, especially those associated with main characters; however, this can be challenging with lengthy and unstructured narrative texts.
no code implementations • 16 Aug 2022 • Mingyu Derek Ma, Alexander K. Taylor, Wei Wang, Nanyun Peng
Event extraction for the clinical domain is an under-explored research area.
1 code implementation • NeurIPS 2022 • Zi-Yi Dou, Aishwarya Kamath, Zhe Gan, Pengchuan Zhang, JianFeng Wang, Linjie Li, Zicheng Liu, Ce Liu, Yann Lecun, Nanyun Peng, Jianfeng Gao, Lijuan Wang
Vision-language (VL) pre-training has recently received considerable attention.
1 code implementation • 14 Jun 2022 • Alexander Spangher, Xiang Ren, Jonathan May, Nanyun Peng
News article revision histories provide clues to narrative and factual evolution in news articles.
1 code implementation • NAACL 2022 • Zi-Yi Dou, Nanyun Peng
The speaker-follower models have proven to be effective in vision-and-language navigation, where a speaker model is used to synthesize new instructions to augment the training data for a follower navigation model.
4 code implementations • 9 Jun 2022 • Aarohi Srivastava, Abhinav Rastogi, Abhishek Rao, Abu Awal Md Shoeb, Abubakar Abid, Adam Fisch, Adam R. Brown, Adam Santoro, Aditya Gupta, Adrià Garriga-Alonso, Agnieszka Kluska, Aitor Lewkowycz, Akshat Agarwal, Alethea Power, Alex Ray, Alex Warstadt, Alexander W. Kocurek, Ali Safaya, Ali Tazarv, Alice Xiang, Alicia Parrish, Allen Nie, Aman Hussain, Amanda Askell, Amanda Dsouza, Ambrose Slone, Ameet Rahane, Anantharaman S. Iyer, Anders Andreassen, Andrea Madotto, Andrea Santilli, Andreas Stuhlmüller, Andrew Dai, Andrew La, Andrew Lampinen, Andy Zou, Angela Jiang, Angelica Chen, Anh Vuong, Animesh Gupta, Anna Gottardi, Antonio Norelli, Anu Venkatesh, Arash Gholamidavoodi, Arfa Tabassum, Arul Menezes, Arun Kirubarajan, Asher Mullokandov, Ashish Sabharwal, Austin Herrick, Avia Efrat, Aykut Erdem, Ayla Karakaş, B. Ryan Roberts, Bao Sheng Loe, Barret Zoph, Bartłomiej Bojanowski, Batuhan Özyurt, Behnam Hedayatnia, Behnam Neyshabur, Benjamin Inden, Benno Stein, Berk Ekmekci, Bill Yuchen Lin, Blake Howald, Bryan Orinion, Cameron Diao, Cameron Dour, Catherine Stinson, Cedrick Argueta, César Ferri Ramírez, Chandan Singh, Charles Rathkopf, Chenlin Meng, Chitta Baral, Chiyu Wu, Chris Callison-Burch, Chris Waites, Christian Voigt, Christopher D. Manning, Christopher Potts, Cindy Ramirez, Clara E. Rivera, Clemencia Siro, Colin Raffel, Courtney Ashcraft, Cristina Garbacea, Damien Sileo, Dan Garrette, Dan Hendrycks, Dan Kilman, Dan Roth, Daniel Freeman, Daniel Khashabi, Daniel Levy, Daniel Moseguí González, Danielle Perszyk, Danny Hernandez, Danqi Chen, Daphne Ippolito, Dar Gilboa, David Dohan, David Drakard, David Jurgens, Debajyoti Datta, Deep Ganguli, Denis Emelin, Denis Kleyko, Deniz Yuret, Derek Chen, Derek Tam, Dieuwke Hupkes, Diganta Misra, Dilyar Buzan, Dimitri Coelho Mollo, Diyi Yang, Dong-Ho Lee, Dylan Schrader, Ekaterina Shutova, Ekin Dogus Cubuk, Elad Segal, Eleanor Hagerman, Elizabeth Barnes, Elizabeth Donoway, Ellie Pavlick, Emanuele Rodola, Emma Lam, Eric Chu, Eric Tang, Erkut Erdem, Ernie Chang, Ethan A. Chi, Ethan Dyer, Ethan Jerzak, Ethan Kim, Eunice Engefu Manyasi, Evgenii Zheltonozhskii, Fanyue Xia, Fatemeh Siar, Fernando Martínez-Plumed, Francesca Happé, Francois Chollet, Frieda Rong, Gaurav Mishra, Genta Indra Winata, Gerard de Melo, Germán Kruszewski, Giambattista Parascandolo, Giorgio Mariani, Gloria Wang, Gonzalo Jaimovitch-López, Gregor Betz, Guy Gur-Ari, Hana Galijasevic, Hannah Kim, Hannah Rashkin, Hannaneh Hajishirzi, Harsh Mehta, Hayden Bogar, Henry Shevlin, Hinrich Schütze, Hiromu Yakura, Hongming Zhang, Hugh Mee Wong, Ian Ng, Isaac Noble, Jaap Jumelet, Jack Geissinger, Jackson Kernion, Jacob Hilton, Jaehoon Lee, Jaime Fernández Fisac, James B. Simon, James Koppel, James Zheng, James Zou, Jan Kocoń, Jana Thompson, Janelle Wingfield, Jared Kaplan, Jarema Radom, Jascha Sohl-Dickstein, Jason Phang, Jason Wei, Jason Yosinski, Jekaterina Novikova, Jelle Bosscher, Jennifer Marsh, Jeremy Kim, Jeroen Taal, Jesse Engel, Jesujoba Alabi, Jiacheng Xu, Jiaming Song, Jillian Tang, Joan Waweru, John Burden, John Miller, John U. Balis, Jonathan Batchelder, Jonathan Berant, Jörg Frohberg, Jos Rozen, Jose Hernandez-Orallo, Joseph Boudeman, Joseph Guerr, Joseph Jones, Joshua B. Tenenbaum, Joshua S. Rule, Joyce Chua, Kamil Kanclerz, Karen Livescu, Karl Krauth, Karthik Gopalakrishnan, Katerina Ignatyeva, Katja Markert, Kaustubh D. Dhole, Kevin Gimpel, Kevin Omondi, Kory Mathewson, Kristen Chiafullo, Ksenia Shkaruta, Kumar Shridhar, Kyle McDonell, Kyle Richardson, Laria Reynolds, Leo Gao, Li Zhang, Liam Dugan, Lianhui Qin, Lidia Contreras-Ochando, Louis-Philippe Morency, Luca Moschella, Lucas Lam, Lucy Noble, Ludwig Schmidt, Luheng He, Luis Oliveros Colón, Luke Metz, Lütfi Kerem Şenel, Maarten Bosma, Maarten Sap, Maartje ter Hoeve, Maheen Farooqi, Manaal Faruqui, Mantas Mazeika, Marco Baturan, Marco Marelli, Marco Maru, Maria Jose Ramírez Quintana, Marie Tolkiehn, Mario Giulianelli, Martha Lewis, Martin Potthast, Matthew L. Leavitt, Matthias Hagen, Mátyás Schubert, Medina Orduna Baitemirova, Melody Arnaud, Melvin McElrath, Michael A. Yee, Michael Cohen, Michael Gu, Michael Ivanitskiy, Michael Starritt, Michael Strube, Michał Swędrowski, Michele Bevilacqua, Michihiro Yasunaga, Mihir Kale, Mike Cain, Mimee Xu, Mirac Suzgun, Mitch Walker, Mo Tiwari, Mohit Bansal, Moin Aminnaseri, Mor Geva, Mozhdeh Gheini, Mukund Varma T, Nanyun Peng, Nathan A. Chi, Nayeon Lee, Neta Gur-Ari Krakover, Nicholas Cameron, Nicholas Roberts, Nick Doiron, Nicole Martinez, Nikita Nangia, Niklas Deckers, Niklas Muennighoff, Nitish Shirish Keskar, Niveditha S. Iyer, Noah Constant, Noah Fiedel, Nuan Wen, Oliver Zhang, Omar Agha, Omar Elbaghdadi, Omer Levy, Owain Evans, Pablo Antonio Moreno Casares, Parth Doshi, Pascale Fung, Paul Pu Liang, Paul Vicol, Pegah Alipoormolabashi, Peiyuan Liao, Percy Liang, Peter Chang, Peter Eckersley, Phu Mon Htut, Pinyu Hwang, Piotr Miłkowski, Piyush Patil, Pouya Pezeshkpour, Priti Oli, Qiaozhu Mei, Qing Lyu, Qinlang Chen, Rabin Banjade, Rachel Etta Rudolph, Raefer Gabriel, Rahel Habacker, Ramon Risco, Raphaël Millière, Rhythm Garg, Richard Barnes, Rif A. Saurous, Riku Arakawa, Robbe Raymaekers, Robert Frank, Rohan Sikand, Roman Novak, Roman Sitelew, Ronan LeBras, Rosanne Liu, Rowan Jacobs, Rui Zhang, Ruslan Salakhutdinov, Ryan Chi, Ryan Lee, Ryan Stovall, Ryan Teehan, Rylan Yang, Sahib Singh, Saif M. Mohammad, Sajant Anand, Sam Dillavou, Sam Shleifer, Sam Wiseman, Samuel Gruetter, Samuel R. Bowman, Samuel S. Schoenholz, Sanghyun Han, Sanjeev Kwatra, Sarah A. Rous, Sarik Ghazarian, Sayan Ghosh, Sean Casey, Sebastian Bischoff, Sebastian Gehrmann, Sebastian Schuster, Sepideh Sadeghi, Shadi Hamdan, Sharon Zhou, Shashank Srivastava, Sherry Shi, Shikhar Singh, Shima Asaadi, Shixiang Shane Gu, Shubh Pachchigar, Shubham Toshniwal, Shyam Upadhyay, Shyamolima, Debnath, Siamak Shakeri, Simon Thormeyer, Simone Melzi, Siva Reddy, Sneha Priscilla Makini, Soo-Hwan Lee, Spencer Torene, Sriharsha Hatwar, Stanislas Dehaene, Stefan Divic, Stefano Ermon, Stella Biderman, Stephanie Lin, Stephen Prasad, Steven T. Piantadosi, Stuart M. Shieber, Summer Misherghi, Svetlana Kiritchenko, Swaroop Mishra, Tal Linzen, Tal Schuster, Tao Li, Tao Yu, Tariq Ali, Tatsu Hashimoto, Te-Lin Wu, Théo Desbordes, Theodore Rothschild, Thomas Phan, Tianle Wang, Tiberius Nkinyili, Timo Schick, Timofei Kornev, Titus Tunduny, Tobias Gerstenberg, Trenton Chang, Trishala Neeraj, Tushar Khot, Tyler Shultz, Uri Shaham, Vedant Misra, Vera Demberg, Victoria Nyamai, Vikas Raunak, Vinay Ramasesh, Vinay Uday Prabhu, Vishakh Padmakumar, Vivek Srikumar, William Fedus, William Saunders, William Zhang, Wout Vossen, Xiang Ren, Xiaoyu Tong, Xinran Zhao, Xinyi Wu, Xudong Shen, Yadollah Yaghoobzadeh, Yair Lakretz, Yangqiu Song, Yasaman Bahri, Yejin Choi, Yichi Yang, Yiding Hao, Yifu Chen, Yonatan Belinkov, Yu Hou, Yufang Hou, Yuntao Bai, Zachary Seid, Zhuoye Zhao, Zijian Wang, Zijie J. Wang, ZiRui Wang, Ziyi Wu
BIG-bench focuses on tasks that are believed to be beyond the capabilities of current language models.
1 code implementation • 27 May 2022 • Tao Meng, Sidi Lu, Nanyun Peng, Kai-Wei Chang
We propose a general and efficient framework to control auto-regressive generation models with NeurAlly-Decomposed Oracle (NADO).
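The token-level factorization NADO uses admits a compact closed form: the base next-token distribution is reweighted by the oracle-satisfaction probabilities the decomposed model predicts. A minimal sketch, with `base_probs` and `r_model` as hypothetical callables:

```python
# Sketch of NADO-style guided decoding: reweight the base LM's
# next-token distribution by the decomposed oracle model R, which
# estimates the probability that the completed sequence will satisfy
# the sequence-level control signal. Callables here are placeholders.

def guided_next_token_probs(prefix, vocab, base_probs, r_model):
    # q(x | prefix) is proportional to
    # p(x | prefix) * R(prefix + [x]) / R(prefix)
    weights = {
        x: base_probs(prefix, x) * r_model(prefix + [x])
           / max(r_model(prefix), 1e-9)
        for x in vocab
    }
    z = sum(weights.values())
    return {x: w / z for x, w in weights.items()}
```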
no code implementations • 25 May 2022 • Jiao Sun, Yu Hou, Jiin Kim, Nanyun Peng
Then, we collect human annotations for the helpfulness of dialogue responses based on our definition and build a classifier to automatically determine the helpfulness of a response.
1 code implementation • 25 May 2022 • I-Hung Hsu, Kuan-Hao Huang, Shuning Zhang, Wenxin Cheng, Premkumar Natarajan, Kai-Wei Chang, Nanyun Peng
In this work, we propose to take a unified view of all these tasks and introduce TAGPRIME to address relational structure extraction problems.
1 code implementation • 25 May 2022 • Te-Lin Wu, Caiqi Zhang, Qingyuan Hu, Alex Spangher, Nanyun Peng
The ability to infer pre- and postconditions of an action is vital for comprehending complex instructions, and is essential for applications such as autonomous instruction-guided agents and assistive AI that supports humans to perform physical tasks.
1 code implementation • 25 May 2022 • Tanmay Parekh, I-Hung Hsu, Kuan-Hao Huang, Kai-Wei Chang, Nanyun Peng
We utilize this ontology to further introduce GENEVA, a diverse generalizability benchmarking dataset comprising four test suites, aimed at evaluating models' ability to handle limited data and unseen event type generalization.
1 code implementation • Findings (ACL) 2022 • Fabrice Harel-Canada, Muhammad Ali Gulzar, Nanyun Peng, Miryung Kim
The vast majority of text transformation techniques in NLP are inherently limited in their ability to expand input space coverage due to an implicit constraint to preserve the original class label.
1 code implementation • NAACL 2022 • Anirudh Mittal, Yufei Tian, Nanyun Peng
In this paper, we propose a simple yet effective way to generate pun sentences that does not require any training on existing puns.
1 code implementation • NAACL 2022 • Rujun Han, Hong Chen, Yufei Tian, Nanyun Peng
Stories or narratives are composed of sequences of events.
1 code implementation • NAACL 2022 • Yufei Tian, Nanyun Peng
Poetry generation, and creative language generation in general, usually suffers from the lack of large training data.
1 code implementation • 26 Mar 2022 • Ying Xu, Dakuo Wang, Mo Yu, Daniel Ritchie, Bingsheng Yao, Tongshuang Wu, Zheng Zhang, Toby Jia-Jun Li, Nora Bradford, Branda Sun, Tran Bao Hoang, Yisi Sang, Yufang Hou, Xiaojuan Ma, Diyi Yang, Nanyun Peng, Zhou Yu, Mark Warschauer
Through benchmarking with QG models, we show that the QG model trained on FairytaleQA is capable of asking high-quality and more diverse questions.
1 code implementation • ACL 2022 • Sarik Ghazarian, Nuan Wen, Aram Galstyan, Nanyun Peng
We also show that DEAM can distinguish between coherent and incoherent dialogues generated by baseline manipulations, whereas those baseline models cannot detect incoherent examples generated by DEAM.
1 code implementation • ACL 2022 • Kuan-Hao Huang, I-Hung Hsu, Premkumar Natarajan, Kai-Wei Chang, Nanyun Peng
We present a study on leveraging multilingual pre-trained generative language models for zero-shot cross-lingual event argument extraction (EAE).
1 code implementation • 1 Jan 2022 • Zi-Yi Dou, Nanyun Peng
In this paper, we instead focus on better utilizing the implicit knowledge stored in pre-trained language models.
3 code implementations • CVPR 2022 • Zi-Yi Dou, Yichong Xu, Zhe Gan, JianFeng Wang, Shuohang Wang, Lijuan Wang, Chenguang Zhu, Pengchuan Zhang, Lu Yuan, Nanyun Peng, Zicheng Liu, Michael Zeng
Vision-and-language (VL) pre-training has proven to be highly effective on various VL downstream tasks.
no code implementations • ACL 2022 • Te-Lin Wu, Alex Spangher, Pegah Alipoormolabashi, Marjorie Freedman, Ralph Weischedel, Nanyun Peng
The ability to sequence unordered events is an essential skill to comprehend and reason about real world task procedures, which often requires thorough understanding of temporal common sense and multimodal information, as these procedures are often communicated through a combination of texts and images.
1 code implementation • Findings (ACL) 2022 • Hao Sun, Guangxuan Xu, Jiawen Deng, Jiale Cheng, Chujie Zheng, Hao Zhou, Nanyun Peng, Xiaoyan Zhu, Minlie Huang
We propose a taxonomy for dialogue safety specifically designed to capture unsafe behaviors in human-bot dialogue settings, with a focus on context-sensitive unsafety, which is under-explored in prior works.
2 code implementations • NAACL 2022 • Vijit Malik, Sunipa Dev, Akihiro Nishi, Nanyun Peng, Kai-Wei Chang
Language representations are efficient tools used across NLP applications, but they are rife with encoded societal biases.
no code implementations • Findings (EMNLP) 2021 • Mingyu Derek Ma, Muhao Chen, Te-Lin Wu, Nanyun Peng
Taxonomies are valuable resources for many applications, but the limited coverage due to the expensive manual curation process hinders their general applicability.
1 code implementation • EMNLP 2021 • Da Yin, Liunian Harold Li, Ziniu Hu, Nanyun Peng, Kai-Wei Chang
Commonsense is defined as the knowledge that is shared by everyone.
1 code implementation • EMNLP 2021 • Kung-Hsiang Huang, Sam Tang, Nanyun Peng
Document-level entity-based extraction (EE), aiming at extracting entity-centric information such as entity roles and entity relations, is key to automatic knowledge acquisition from text corpora for various domains.
1 code implementation • Findings (EMNLP) 2021 • Yufei Tian, Arvind Krishna Sridhar, Nanyun Peng
A hyperbole is an intentional and creative exaggeration not to be taken literally.
no code implementations • COLING 2022 • Xiaofei Sun, Yufei Tian, Yuxian Meng, Nanyun Peng, Fei Wu, Jiwei Li, Chun Fan
Then, based on the paraphrase pairs produced by these UMT models, a unified surrogate model can be trained to serve as the final paraphrase-generation model, which can be used directly for testing in the unsupervised setup, or be finetuned on labeled datasets in the supervised setup.
2 code implementations • NAACL 2022 • I-Hung Hsu, Kuan-Hao Huang, Elizabeth Boschee, Scott Miller, Prem Natarajan, Kai-Wei Chang, Nanyun Peng
Given a passage and a manually designed prompt, DEGREE learns to summarize the events mentioned in the passage into a natural sentence that follows a predefined pattern.
no code implementations • 7 Aug 2021 • Sunipa Dev, Emily Sheng, Jieyu Zhao, Aubrie Amstutz, Jiao Sun, Yu Hou, Mattie Sanseverino, Jiin Kim, Akihiro Nishi, Nanyun Peng, Kai-Wei Chang
Recent studies show that Natural Language Processing (NLP) technologies propagate societal biases about demographic groups associated with attributes such as gender, race, and nationality.
1 code implementation • ACL 2021 • Jiao Sun, Nanyun Peng
Human activities can be seen as sequences of events, which are crucial to understanding societies.
1 code implementation • Findings (ACL) 2021 • Shikhar Singh, Nuan Wen, Yu Hou, Pegah Alipoormolabashi, Te-Lin Wu, Xuezhe Ma, Nanyun Peng
To this end, we introduce a new commonsense reasoning benchmark dataset comprising natural language true/false statements, with each sample paired with its complementary counterpart, resulting in 4k sentence pairs.
1 code implementation • ACL 2021 • Kevin Stowe, Tuhin Chakrabarty, Nanyun Peng, Smaranda Muresan, Iryna Gurevych
Guided by conceptual metaphor theory, we propose to control the generation process by encoding conceptual mappings between cognitive domains to generate meaningful metaphoric expressions.
no code implementations • NAACL 2021 • Emily Sheng, Kai-Wei Chang, Prem Natarajan, Nanyun Peng
Ad hominem attacks are those that target some feature of a person's character instead of the position the person is maintaining.
1 code implementation • ACL 2021 • Emily Sheng, Kai-Wei Chang, Premkumar Natarajan, Nanyun Peng
Technology for language generation has advanced rapidly, spurred by advancements in pre-training large models on massive amounts of data and the need for intelligent agents to communicate in a natural manner.
1 code implementation • 19 Apr 2021 • Alexander Spangher, Nanyun Peng, Jonathan May, Emilio Ferrara
Journalists publish statements provided by people, or "sources", to contextualize current events, help voters make informed decisions, and hold powerful individuals accountable.
no code implementations • 19 Apr 2021 • Alexander Spangher, Nanyun Peng, Jonathan May, Emilio Ferrara
Journalists obtain "leads", or story ideas, by reading large corpora of government records: court cases, proposed bills, etc.
1 code implementation • 18 Apr 2021 • Emily Sheng, Josh Arnold, Zhou Yu, Kai-Wei Chang, Nanyun Peng
Dialogue systems in the form of chatbots and personal assistants are being increasingly integrated into people's lives.
1 code implementation • EMNLP 2021 • Kuan-Hao Huang, Wasi Uddin Ahmad, Nanyun Peng, Kai-Wei Chang
Pre-trained multilingual language encoders, such as multilingual BERT and XLM-R, show great potential for zero-shot cross-lingual transfer.