no code implementations • NAACL (SocialNLP) 2021 • Yufei Tian, Tuhin Chakrabarty, Fred Morstatter, Nanyun Peng
Discrepancies exist among different cultures or languages.
no code implementations • ACL 2022 • Ying Xu, Dakuo Wang, Mo Yu, Daniel Ritchie, Bingsheng Yao, Tongshuang Wu, Zheng Zhang, Toby Li, Nora Bradford, Branda Sun, Tran Hoang, Yisi Sang, Yufang Hou, Xiaojuan Ma, Diyi Yang, Nanyun Peng, Zhou Yu, Mark Warschauer
Through benchmarking with QG models, we show that the QG model trained on FairytaleQA is capable of asking high-quality and more diverse questions.
1 code implementation • NAACL 2022 • Alexander Spangher, Xiang Ren, Jonathan May, Nanyun Peng
News article revision histories provide clues to narrative and factual evolution in news articles.
1 code implementation • EMNLP 2021 • Jiao Sun, Xuezhe Ma, Nanyun Peng
We propose to control paraphrase generation through carefully chosen target syntactic structures to generate more proper and higher quality paraphrases.
no code implementations • EMNLP 2021 • Rujun Han, I-Hung Hsu, Jiao Sun, Julia Baylon, Qiang Ning, Dan Roth, Nanyun Peng
While these tasks partially evaluate machines' narrative understanding, human-like reading comprehension requires the ability to process event-based information beyond arguments and temporal reasoning.
no code implementations • EMNLP 2021 • Zi-Yi Dou, Nanyun Peng
Phrase grounding aims to map textual phrases to their associated image regions, which can be a prerequisite for multimodal reasoning and can benefit tasks requiring identifying objects based on language.
no code implementations • 7 Apr 2024 • Hritik Bansal, Po-Nien Kung, P. Jeffrey Brantingham, Kai-Wei Chang, Nanyun Peng
In this paper, we propose GenEARL, a training-free generative framework that harnesses the power of modern generative models to understand event task descriptions given image contexts and perform the EARL task.
no code implementations • 3 Apr 2024 • Ashima Suvarna, Harshita Khandelwal, Nanyun Peng
To this end, we present PhonologyBench, a novel benchmark consisting of three diagnostic tasks designed to explicitly test the phonological skills of LLMs in English: grapheme-to-phoneme conversion, syllable counting, and rhyme word generation.
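To make the syllable-counting diagnostic concrete, here is a minimal sketch of a naive baseline for that task; the vowel-group heuristic and the `count_syllables` name are illustrative assumptions, not the benchmark's reference method, which would rely on a pronunciation lexicon.

```python
import re

def count_syllables(word: str) -> int:
    """Naive English syllable counter: count vowel groups,
    discounting a common silent trailing 'e'. Illustrative
    heuristic only; real phonology needs a pronunciation lexicon."""
    word = word.lower()
    groups = re.findall(r"[aeiouy]+", word)
    count = len(groups)
    # A trailing silent 'e' (as in "make") usually adds no syllable,
    # but "-le" and "-ee" endings do keep theirs.
    if word.endswith("e") and not word.endswith(("le", "ee")) and count > 1:
        count -= 1
    return max(count, 1)
```

Heuristics like this fail on words such as "rhythm" or "queue", which is exactly the gap a diagnostic benchmark is designed to expose.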
1 code implementation • 2 Apr 2024 • Tanmay Parekh, Anh Mac, Jiarui Yu, Yuxuan Dong, Syed Shahriar, Bonnie Liu, Eric Yang, Kuan-Hao Huang, Wei Wang, Nanyun Peng, Kai-Wei Chang
In our work, we pioneer exploiting Event Detection (ED) for better preparedness and early warnings of any upcoming epidemic by developing a framework to extract and analyze epidemic-related events from social media posts.
1 code implementation • 31 Mar 2024 • Hritik Bansal, Ashima Suvarna, Gantavya Bhatt, Nanyun Peng, Kai-Wei Chang, Aditya Grover
A common technique for aligning large language models (LLMs) relies on acquiring human preferences by comparing multiple generations conditioned on a fixed context.
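The pairwise comparisons described above are commonly modeled with a Bradley-Terry formulation; the sketch below shows that standard modeling choice for scoring one generation against another on a fixed context. It is a generic illustration, not this paper's method.

```python
import math

def preference_prob(reward_a: float, reward_b: float) -> float:
    """Bradley-Terry model: probability that generation A is
    preferred over generation B, given scalar reward estimates.
    Standard in preference-based alignment; shown for illustration."""
    return 1.0 / (1.0 + math.exp(reward_b - reward_a))
```

Equal rewards yield a 0.5 preference probability, and the probability rises smoothly as one generation's reward exceeds the other's.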
no code implementations • 22 Mar 2024 • I-Hung Hsu, Zihan Xue, Nilay Pochh, Sahil Bansal, Premkumar Natarajan, Jayanth Srinivasa, Nanyun Peng
Event linking connects event mentions in text with relevant nodes in a knowledge base (KB).
no code implementations • 5 Mar 2024 • Zefan Cai, Po-Nien Kung, Ashima Suvarna, Mingyu Derek Ma, Hritik Bansal, Baobao Chang, P. Jeffrey Brantingham, Wei Wang, Nanyun Peng
We hypothesize that a diverse set of event types and definitions are the key for models to learn to follow event definitions while existing event extraction datasets focus on annotating many high-quality examples for a few event types.
1 code implementation • 4 Mar 2024 • Xueqing Wu, Rui Zheng, Jingzhen Sha, Te-Lin Wu, Hanyu Zhou, Mohan Tang, Kai-Wei Chang, Nanyun Peng, Haoran Huang
We construct the DACO dataset, containing (1) 440 databases (of tabular data) collected from real-world scenarios, (2) ~2k query-answer pairs that can serve as weak supervision for model training, and (3) a concentrated but high-quality test set with human-refined annotations that serves as our main evaluation benchmark.
1 code implementation • 31 Jan 2024 • Chujie Zheng, Fan Yin, Hao Zhou, Fandong Meng, Jie Zhou, Kai-Wei Chang, Minlie Huang, Nanyun Peng
Prepending model inputs with safety prompts is a common practice for safeguarding large language models (LLMs) from complying with queries that contain harmful intents.
no code implementations • 24 Jan 2024 • Rohan Wadhawan, Hritik Bansal, Kai-Wei Chang, Nanyun Peng
Our findings reveal a significant performance gap of 30.8% between the best-performing LMM, GPT-4V(ision), and human performance under human evaluation, indicating substantial room for improvement in context-sensitive text-rich visual reasoning.
1 code implementation • 19 Jan 2024 • Yiwei Wang, Muhao Chen, Nanyun Peng, Kai-Wei Chang
To enforce these constraints, we utilize a depth-first search to adaptively substitute new knowledge for the LLMs' original reasoning steps, greedily seeking the optimal path of multi-hop reasoning with new knowledge.
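The substitution-by-search idea above can be sketched as a depth-first search over a toy knowledge base, where editing a fact replaces an edge and the search then follows the new edge instead of the stale one. The `find_reasoning_path` function and the KB layout are illustrative assumptions, not the paper's implementation.

```python
def find_reasoning_path(start, goal, kb, path=None, visited=None):
    """Depth-first search for a multi-hop reasoning path.

    kb maps an entity to a list of (relation, entity) edges.
    Knowledge editing replaces edges in kb, so the DFS greedily
    follows the updated facts. Toy illustration only."""
    path = path or []
    visited = visited or set()
    if start == goal:
        return path
    visited.add(start)
    for relation, nxt in kb.get(start, []):
        if nxt in visited:
            continue
        found = find_reasoning_path(
            nxt, goal, kb, path + [(start, relation, nxt)], visited
        )
        if found is not None:
            return found
    return None  # no multi-hop path under the current knowledge
```

Swapping an edge in `kb` immediately reroutes the returned reasoning path, which is the behavior the constraint-enforcing search is meant to guarantee.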
1 code implementation • 9 Jan 2024 • Jia-Chen Gu, Hao-Xiang Xu, Jun-Yu Ma, Pan Lu, Zhen-Hua Ling, Kai-Wei Chang, Nanyun Peng
One critical challenge that has emerged is the presence of hallucinations in the output of large language models (LLMs) due to false or outdated knowledge.
no code implementations • 1 Jan 2024 • Wenxuan Wang, Haonan Bai, Jen-tse Huang, Yuxuan Wan, Youliang Yuan, Haoyi Qiu, Nanyun Peng, Michael R. Lyu
BiasPainter uses a diverse range of seed images of individuals and prompts the image generation models to edit these images using gender, race, and age-neutral queries.
1 code implementation • 16 Nov 2023 • Haoyi Qiu, Kung-Hsiang Huang, Jingnong Qu, Nanyun Peng
Prior work on evaluating the factual consistency of summarization often takes an entailment-based approach that first generates perturbed (factually inconsistent) summaries and then trains a classifier on the generated data to detect factual inconsistencies at test time.
1 code implementation • 16 Nov 2023 • Yufei Tian, Abhilasha Ravichander, Lianhui Qin, Ronan Le Bras, Raja Marjieh, Nanyun Peng, Yejin Choi, Thomas L. Griffiths, Faeze Brahman
We explore the creative problem-solving capabilities of modern LLMs in a novel constrained setting.
1 code implementation • 16 Nov 2023 • Kuan-Hao Huang, I-Hung Hsu, Tanmay Parekh, Zhiyu Xie, Zixuan Zhang, Premkumar Natarajan, Kai-Wei Chang, Nanyun Peng, Heng Ji
In this work, we identify and address evaluation challenges, including inconsistency due to varying data assumptions or preprocessing steps, the insufficiency of current evaluation frameworks that may introduce dataset or data split bias, and the low reproducibility of some previous approaches.
no code implementations • 16 Nov 2023 • Alexander Spangher, Emilio Ferrara, Ben Welsh, Nanyun Peng, Serdar Tumgoren, Jonathan May
Journalists must find stories in huge amounts of textual data (e.g., leaks, bills, press releases) as part of their jobs: determining when and why text becomes news can help us understand coverage patterns and help us build assistive tools.
1 code implementation • 2 Nov 2023 • Te-Lin Wu, Zi-Yi Dou, Qingyuan Hu, Yu Hou, Nischal Reddy Chandra, Marjorie Freedman, Ralph M. Weischedel, Nanyun Peng
Multimodal counterfactual reasoning is a vital yet challenging ability for AI systems.
1 code implementation • 1 Nov 2023 • Po-Nien Kung, Fan Yin, Di Wu, Kai-Wei Chang, Nanyun Peng
Instruction tuning (IT) achieves impressive zero-shot generalization results by training large language models (LLMs) on a massive amount of diverse tasks with instructions.
no code implementations • 25 Oct 2023 • Yufei Tian, Felix Zhang, Nanyun Peng
Large language models (LLMs) such as GPT-3 have demonstrated a strong capability to generate coherent and contextually relevant text.
1 code implementation • 23 Oct 2023 • Te-Lin Wu, Yu Zhou, Nanyun Peng
The ability to actively ground task instructions from an egocentric view is crucial for AI agents to accomplish tasks or assist humans virtually.
1 code implementation • 23 Oct 2023 • Jiao Sun, Yufei Tian, Wangchunshu Zhou, Nan Xu, Qian Hu, Rahul Gupta, John Frederick Wieting, Nanyun Peng, Xuezhe Ma
While recent studies have examined the abilities of large language models on various benchmark tasks, including question generation, reading comprehension, and multilingual tasks, few studies have looked into the controllability of large language models on generation tasks.
1 code implementation • 13 Oct 2023 • Yixin Wan, George Pu, Jiao Sun, Aparna Garimella, Kai-Wei Chang, Nanyun Peng
Through benchmark evaluations on two popular LLMs, ChatGPT and Alpaca, we reveal significant gender biases in LLM-generated recommendation letters.
no code implementations • 13 Oct 2023 • Mingyu Derek Ma, Jiun-Yu Kao, Arpit Gupta, Yu-Hsiang Lin, Wenbo Zhao, Tagyoung Chung, Wei Wang, Kai-Wei Chang, Nanyun Peng
Based on the intuition that a model would be more biased if it learns from a biased example, we measure the bias level of a query instance by observing its influence on another instance.
1 code implementation • 8 Oct 2023 • Yixin Wan, Jieyu Zhao, Aman Chadha, Nanyun Peng, Kai-Wei Chang
Recent advancements in Large Language Models empower them to follow freeform instructions, including imitating generic or specific demographic personas in conversations.
no code implementations • 4 Oct 2023 • Mingyu Derek Ma, Alexander K. Taylor, Nuan Wen, Yanchen Liu, Po-Nien Kung, Wenna Qin, Shicheng Wen, Azure Zhou, Diyi Yang, Xuezhe Ma, Nanyun Peng, Wei Wang
We present MIDDAG, an intuitive, interactive system that visualizes the information propagation paths on social media triggered by COVID-19-related news articles accompanied by comprehensive insights, including user/community susceptibility level, as well as events and popular opinions raised by the crowd while propagating the information.
1 code implementation • 16 Sep 2023 • Tanmay Parekh, I-Hung Hsu, Kuan-Hao Huang, Kai-Wei Chang, Nanyun Peng
Label projection, which involves obtaining translated labels and texts jointly, is essential for leveraging machine translation to facilitate cross-lingual transfer in structured prediction tasks.
2 code implementations • 24 Jul 2023 • Kevin Yang, Dan Klein, Asli Celikyilmaz, Nanyun Peng, Yuandong Tian
We propose Reinforcement Learning from Contrastive Distillation (RLCD), a method for aligning language models to follow principles expressed in natural language (e.g., to be more harmless) without using human feedback.
no code implementations • 27 Jun 2023 • Xiang 'Anthony' Chen, Jeff Burke, Ruofei Du, Matthew K. Hong, Jennifer Jacobs, Philippe Laban, Dingzeyu Li, Nanyun Peng, Karl D. D. Willis, Chien-Sheng Wu, Bolei Zhou
Through iterative, cross-disciplinary discussions, we define and propose next steps for Human-centered Generative AI (HGAI).
no code implementations • 20 Jun 2023 • Sidi Lu, Wenbo Zhao, Chenyang Tao, Arpit Gupta, Shanchan Wu, Tagyoung Chung, Nanyun Peng
NeurAlly-Decomposed Oracle (NADO) is a powerful approach for controllable generation with large language models.
no code implementations • 20 Jun 2023 • Sidi Lu, Asli Celikyilmaz, Tianlu Wang, Nanyun Peng
We investigate MDM for open-domain text generation evaluation under two paradigms: 1) \emph{Generative} MDM, which leverages the Meta-Distribution Methods to generate in-domain negative samples for training discriminator-based metrics; 2) \emph{Discriminative} MDM, which directly uses distribution discrepancies between two language models for evaluation.
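The discriminative paradigm above reads an evaluation signal directly off the discrepancy between two language models' distributions. A minimal sketch of that idea on toy next-token distributions follows; the symmetric-KL choice and the `mdm_discrepancy` name are illustrative assumptions, since the paper operates on full sequence models rather than unigram tables.

```python
import math

def mdm_discrepancy(p: dict, q: dict, eps: float = 1e-9) -> float:
    """Symmetric KL divergence between two toy next-token
    distributions, standing in for the distribution discrepancy
    between two language models. Illustrative sketch only."""
    tokens = set(p) | set(q)
    kl_pq = sum(
        p.get(t, eps) * math.log(p.get(t, eps) / q.get(t, eps)) for t in tokens
    )
    kl_qp = sum(
        q.get(t, eps) * math.log(q.get(t, eps) / p.get(t, eps)) for t in tokens
    )
    return kl_pq + kl_qp
```

Identical distributions score zero, and the score grows as the two models diverge, which is the property a discrepancy-based evaluation metric needs.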
1 code implementation • 30 May 2023 • Yufei Tian, Anjali Narayan-Chen, Shereen Oraby, Alessandra Cervone, Gunnar Sigurdsson, Chenyang Tao, Wenbo Zhao, YiWen Chen, Tagyoung Chung, Jing Huang, Nanyun Peng
Automatic melody-to-lyric generation is a task in which song lyrics are generated to go with a given melody.
no code implementations • 26 May 2023 • Paulina Toro Isaza, Guangxuan Xu, Akintoye Oloko, Yufang Hou, Nanyun Peng, Dakuo Wang
Social biases and stereotypes are embedded in our culture in part through their presence in our stories, as evidenced by the rich history of humanities and social science literature analyzing such biases in children stories.
1 code implementation • 26 May 2023 • I-Hung Hsu, Zhiyu Xie, Kuan-Hao Huang, Prem Natarajan, Nanyun Peng
However, existing generation-based EAE models mostly focus on problem re-formulation and prompt design, without incorporating additional information that has been shown to be effective for classification-based models, such as the abstract meaning representation (AMR) of the input passages.
no code implementations • 26 May 2023 • I-Hung Hsu, Avik Ray, Shubham Garg, Nanyun Peng, Jing Huang
In this work, we study the problem of synthesizing code-switched texts for language pairs absent from the training data.
1 code implementation • 24 May 2023 • Alexander Spangher, Nanyun Peng, Jonathan May, Emilio Ferrara
News articles are driven by the informational sources journalists use in reporting.
no code implementations • 24 May 2023 • Mingyu Derek Ma, Xiaoxuan Wang, Po-Nien Kung, P. Jeffrey Brantingham, Nanyun Peng, Wei Wang
Information extraction tasks such as event extraction require an in-depth understanding of the output structure and sub-task dependencies.
1 code implementation • 24 May 2023 • Haoyi Qiu, Zi-Yi Dou, Tianlu Wang, Asli Celikyilmaz, Nanyun Peng
Model-based evaluation metrics (e.g., CLIPScore and GPTScore) have demonstrated decent correlations with human judgments in various language generation tasks.
no code implementations • 23 May 2023 • Zi-Yi Dou, Feng Gao, Nanyun Peng
In this paper, we introduce a masked path modeling (MPM) objective, which pretrains an agent using self-collected data for downstream navigation tasks.
no code implementations • 19 May 2023 • Po-Nien Kung, Nanyun Peng
Our experiments show that models trained on simplified task definition or delusive examples can achieve comparable performance to the ones trained on the original instructions and examples.
no code implementations • 12 May 2023 • Yufei Tian, Anjali Narayan-Chen, Shereen Oraby, Alessandra Cervone, Gunnar Sigurdsson, Chenyang Tao, Wenbo Zhao, Tagyoung Chung, Jing Huang, Nanyun Peng
At inference time, we leverage the crucial alignments between melody and lyrics and compile the given melody into constraints to guide the generation process.
1 code implementation • 12 May 2023 • Sarik Ghazarian, Yijia Shao, Rujun Han, Aram Galstyan, Nanyun Peng
We take the first step by focusing on event commonsense that considers events and their relations, and is crucial in both dialogues and general commonsense reasoning.
1 code implementation • 15 Apr 2023 • Honghua Zhang, Meihua Dang, Nanyun Peng, Guy Van Den Broeck
To overcome this challenge, we propose to use tractable probabilistic models (TPMs) to impose lexical constraints in autoregressive text generation models, which we refer to as GeLaTo (Generating Language with Tractable Constraints).
no code implementations • 26 Jan 2023 • Mingyu Derek Ma, Jiun-Yu Kao, Shuyang Gao, Arpit Gupta, Di Jin, Tagyoung Chung, Nanyun Peng
Dialogue state tracking (DST) is an important step in dialogue management to keep track of users' beliefs.
no code implementations • 5 Jan 2023 • Alexander Spangher, Xinyu Hua, Yao Ming, Nanyun Peng
While GPT-2 generates sentences that are remarkably human-like, longer documents can ramble and do not follow human-like writing structure.
1 code implementation • CVPR 2023 • Xueyan Zou, Zi-Yi Dou, Jianwei Yang, Zhe Gan, Linjie Li, Chunyuan Li, Xiyang Dai, Harkirat Behl, Jianfeng Wang, Lu Yuan, Nanyun Peng, Lijuan Wang, Yong Jae Lee, Jianfeng Gao
We present X-Decoder, a generalized decoding model that can predict pixel-level segmentation and language tokens seamlessly.
Ranked #4 on Instance Segmentation on ADE20K val (using extra training data)
1 code implementation • 20 Dec 2022 • Kevin Yang, Dan Klein, Nanyun Peng, Yuandong Tian
In human evaluations of automatically generated stories, DOC substantially outperforms a strong Re3 baseline (Yang et al., 2022) on plot coherence (22.5% absolute gain), outline relevance (28.2%), and interestingness (20.7%).
1 code implementation • 3 Dec 2022 • Arshiya Aggarwal, Jiao Sun, Nanyun Peng
These fixed prefix templates could themselves be specific in terms of styles or linguistic structures, which may lead to unreliable fairness conclusions that are not representative of the general trends from tone-varying prompts.
no code implementations • 25 Nov 2022 • Zhixuan Zhou, Jiao Sun, Jiaxin Pei, Nanyun Peng, JinJun Xiong
Our analysis further reveals stereotypical portrayals of both male and female characters in terms of moral foundations and events.
1 code implementation • 24 Oct 2022 • Yufei Tian, Divyanshu Sheth, Nanyun Peng
We propose a unified framework to generate both homophonic and homographic puns, resolving the split treatment of the two in existing works.
1 code implementation • 24 Oct 2022 • Jiao Sun, Anjali Narayan-Chen, Shereen Oraby, Alessandra Cervone, Tagyoung Chung, Jing Huang, Yang Liu, Nanyun Peng
The tasks of humor understanding and generation are challenging and subjective even for humans, requiring commonsense and real-world knowledge to master.
1 code implementation • 24 Oct 2022 • Jiao Sun, Anjali Narayan-Chen, Shereen Oraby, Shuyang Gao, Tagyoung Chung, Jing Huang, Yang Liu, Nanyun Peng
In this work, we propose a new task, context-situated pun generation, where a specific context represented by a set of keywords is provided, and the task is to first identify suitable pun words that are appropriate for the context, then generate puns based on the context keywords and the identified pun words.
1 code implementation • 22 Oct 2022 • Guangxuan Xu, Ruibo Liu, Fabrice Harel-Canada, Nischal Reddy Chandra, Nanyun Peng
We propose EnDex, the first human-reaction based model to evaluate dialogue engagingness.
2 code implementations • 16 Oct 2022 • Hong Chen, Rujun Han, Te-Lin Wu, Hideki Nakayama, Nanyun Peng
This task requires machines to 1) understand long text inputs and 2) produce a globally consistent image sequence that illustrates the contents of the story.
1 code implementation • 13 Oct 2022 • Kevin Yang, Yuandong Tian, Nanyun Peng, Dan Klein
We consider the problem of automatically generating longer stories of over two thousand words.
no code implementations • 24 Sep 2022 • Nanyun Peng
Recent advances in large pre-trained language models have demonstrated strong results in natural language generation and significantly improved performance on many natural language generation (NLG) applications such as machine translation and text summarization.
no code implementations • 17 Aug 2022 • Guangxuan Xu, Paulina Toro Isaza, Moshi Li, Akintoye Oloko, Bingsheng Yao, Cassia Sanctos, Aminat Adebiyi, Yufang Hou, Nanyun Peng, Dakuo Wang
To understand a narrative, it is essential to comprehend the temporal event flows, especially those associated with main characters; however, this can be challenging with lengthy and unstructured narrative texts.
no code implementations • 16 Aug 2022 • Mingyu Derek Ma, Alexander K. Taylor, Wei Wang, Nanyun Peng
Event extraction for the clinical domain is an under-explored research area.
1 code implementation • NeurIPS 2022 • Zi-Yi Dou, Aishwarya Kamath, Zhe Gan, Pengchuan Zhang, Jianfeng Wang, Linjie Li, Zicheng Liu, Ce Liu, Yann LeCun, Nanyun Peng, Jianfeng Gao, Lijuan Wang
Vision-language (VL) pre-training has recently received considerable attention.
Ranked #1 on Phrase Grounding on Flickr30k Entities Dev
1 code implementation • 14 Jun 2022 • Alexander Spangher, Xiang Ren, Jonathan May, Nanyun Peng
News article revision histories provide clues to narrative and factual evolution in news articles.
3 code implementations • 9 Jun 2022 • Aarohi Srivastava, Abhinav Rastogi, Abhishek Rao, Abu Awal Md Shoeb, Abubakar Abid, Adam Fisch, Adam R. Brown, Adam Santoro, Aditya Gupta, Adrià Garriga-Alonso, Agnieszka Kluska, Aitor Lewkowycz, Akshat Agarwal, Alethea Power, Alex Ray, Alex Warstadt, Alexander W. Kocurek, Ali Safaya, Ali Tazarv, Alice Xiang, Alicia Parrish, Allen Nie, Aman Hussain, Amanda Askell, Amanda Dsouza, Ambrose Slone, Ameet Rahane, Anantharaman S. Iyer, Anders Andreassen, Andrea Madotto, Andrea Santilli, Andreas Stuhlmüller, Andrew Dai, Andrew La, Andrew Lampinen, Andy Zou, Angela Jiang, Angelica Chen, Anh Vuong, Animesh Gupta, Anna Gottardi, Antonio Norelli, Anu Venkatesh, Arash Gholamidavoodi, Arfa Tabassum, Arul Menezes, Arun Kirubarajan, Asher Mullokandov, Ashish Sabharwal, Austin Herrick, Avia Efrat, Aykut Erdem, Ayla Karakaş, B. Ryan Roberts, Bao Sheng Loe, Barret Zoph, Bartłomiej Bojanowski, Batuhan Özyurt, Behnam Hedayatnia, Behnam Neyshabur, Benjamin Inden, Benno Stein, Berk Ekmekci, Bill Yuchen Lin, Blake Howald, Bryan Orinion, Cameron Diao, Cameron Dour, Catherine Stinson, Cedrick Argueta, César Ferri Ramírez, Chandan Singh, Charles Rathkopf, Chenlin Meng, Chitta Baral, Chiyu Wu, Chris Callison-Burch, Chris Waites, Christian Voigt, Christopher D. Manning, Christopher Potts, Cindy Ramirez, Clara E. 
Rivera, Clemencia Siro, Colin Raffel, Courtney Ashcraft, Cristina Garbacea, Damien Sileo, Dan Garrette, Dan Hendrycks, Dan Kilman, Dan Roth, Daniel Freeman, Daniel Khashabi, Daniel Levy, Daniel Moseguí González, Danielle Perszyk, Danny Hernandez, Danqi Chen, Daphne Ippolito, Dar Gilboa, David Dohan, David Drakard, David Jurgens, Debajyoti Datta, Deep Ganguli, Denis Emelin, Denis Kleyko, Deniz Yuret, Derek Chen, Derek Tam, Dieuwke Hupkes, Diganta Misra, Dilyar Buzan, Dimitri Coelho Mollo, Diyi Yang, Dong-Ho Lee, Dylan Schrader, Ekaterina Shutova, Ekin Dogus Cubuk, Elad Segal, Eleanor Hagerman, Elizabeth Barnes, Elizabeth Donoway, Ellie Pavlick, Emanuele Rodola, Emma Lam, Eric Chu, Eric Tang, Erkut Erdem, Ernie Chang, Ethan A. Chi, Ethan Dyer, Ethan Jerzak, Ethan Kim, Eunice Engefu Manyasi, Evgenii Zheltonozhskii, Fanyue Xia, Fatemeh Siar, Fernando Martínez-Plumed, Francesca Happé, Francois Chollet, Frieda Rong, Gaurav Mishra, Genta Indra Winata, Gerard de Melo, Germán Kruszewski, Giambattista Parascandolo, Giorgio Mariani, Gloria Wang, Gonzalo Jaimovitch-López, Gregor Betz, Guy Gur-Ari, Hana Galijasevic, Hannah Kim, Hannah Rashkin, Hannaneh Hajishirzi, Harsh Mehta, Hayden Bogar, Henry Shevlin, Hinrich Schütze, Hiromu Yakura, Hongming Zhang, Hugh Mee Wong, Ian Ng, Isaac Noble, Jaap Jumelet, Jack Geissinger, Jackson Kernion, Jacob Hilton, Jaehoon Lee, Jaime Fernández Fisac, James B. Simon, James Koppel, James Zheng, James Zou, Jan Kocoń, Jana Thompson, Janelle Wingfield, Jared Kaplan, Jarema Radom, Jascha Sohl-Dickstein, Jason Phang, Jason Wei, Jason Yosinski, Jekaterina Novikova, Jelle Bosscher, Jennifer Marsh, Jeremy Kim, Jeroen Taal, Jesse Engel, Jesujoba Alabi, Jiacheng Xu, Jiaming Song, Jillian Tang, Joan Waweru, John Burden, John Miller, John U. Balis, Jonathan Batchelder, Jonathan Berant, Jörg Frohberg, Jos Rozen, Jose Hernandez-Orallo, Joseph Boudeman, Joseph Guerr, Joseph Jones, Joshua B. Tenenbaum, Joshua S. 
Rule, Joyce Chua, Kamil Kanclerz, Karen Livescu, Karl Krauth, Karthik Gopalakrishnan, Katerina Ignatyeva, Katja Markert, Kaustubh D. Dhole, Kevin Gimpel, Kevin Omondi, Kory Mathewson, Kristen Chiafullo, Ksenia Shkaruta, Kumar Shridhar, Kyle McDonell, Kyle Richardson, Laria Reynolds, Leo Gao, Li Zhang, Liam Dugan, Lianhui Qin, Lidia Contreras-Ochando, Louis-Philippe Morency, Luca Moschella, Lucas Lam, Lucy Noble, Ludwig Schmidt, Luheng He, Luis Oliveros Colón, Luke Metz, Lütfi Kerem Şenel, Maarten Bosma, Maarten Sap, Maartje ter Hoeve, Maheen Farooqi, Manaal Faruqui, Mantas Mazeika, Marco Baturan, Marco Marelli, Marco Maru, Maria Jose Ramírez Quintana, Marie Tolkiehn, Mario Giulianelli, Martha Lewis, Martin Potthast, Matthew L. Leavitt, Matthias Hagen, Mátyás Schubert, Medina Orduna Baitemirova, Melody Arnaud, Melvin McElrath, Michael A. Yee, Michael Cohen, Michael Gu, Michael Ivanitskiy, Michael Starritt, Michael Strube, Michał Swędrowski, Michele Bevilacqua, Michihiro Yasunaga, Mihir Kale, Mike Cain, Mimee Xu, Mirac Suzgun, Mitch Walker, Mo Tiwari, Mohit Bansal, Moin Aminnaseri, Mor Geva, Mozhdeh Gheini, Mukund Varma T, Nanyun Peng, Nathan A. Chi, Nayeon Lee, Neta Gur-Ari Krakover, Nicholas Cameron, Nicholas Roberts, Nick Doiron, Nicole Martinez, Nikita Nangia, Niklas Deckers, Niklas Muennighoff, Nitish Shirish Keskar, Niveditha S. Iyer, Noah Constant, Noah Fiedel, Nuan Wen, Oliver Zhang, Omar Agha, Omar Elbaghdadi, Omer Levy, Owain Evans, Pablo Antonio Moreno Casares, Parth Doshi, Pascale Fung, Paul Pu Liang, Paul Vicol, Pegah Alipoormolabashi, Peiyuan Liao, Percy Liang, Peter Chang, Peter Eckersley, Phu Mon Htut, Pinyu Hwang, Piotr Miłkowski, Piyush Patil, Pouya Pezeshkpour, Priti Oli, Qiaozhu Mei, Qing Lyu, Qinlang Chen, Rabin Banjade, Rachel Etta Rudolph, Raefer Gabriel, Rahel Habacker, Ramon Risco, Raphaël Millière, Rhythm Garg, Richard Barnes, Rif A. 
Saurous, Riku Arakawa, Robbe Raymaekers, Robert Frank, Rohan Sikand, Roman Novak, Roman Sitelew, Ronan LeBras, Rosanne Liu, Rowan Jacobs, Rui Zhang, Ruslan Salakhutdinov, Ryan Chi, Ryan Lee, Ryan Stovall, Ryan Teehan, Rylan Yang, Sahib Singh, Saif M. Mohammad, Sajant Anand, Sam Dillavou, Sam Shleifer, Sam Wiseman, Samuel Gruetter, Samuel R. Bowman, Samuel S. Schoenholz, Sanghyun Han, Sanjeev Kwatra, Sarah A. Rous, Sarik Ghazarian, Sayan Ghosh, Sean Casey, Sebastian Bischoff, Sebastian Gehrmann, Sebastian Schuster, Sepideh Sadeghi, Shadi Hamdan, Sharon Zhou, Shashank Srivastava, Sherry Shi, Shikhar Singh, Shima Asaadi, Shixiang Shane Gu, Shubh Pachchigar, Shubham Toshniwal, Shyam Upadhyay, Shyamolima, Debnath, Siamak Shakeri, Simon Thormeyer, Simone Melzi, Siva Reddy, Sneha Priscilla Makini, Soo-Hwan Lee, Spencer Torene, Sriharsha Hatwar, Stanislas Dehaene, Stefan Divic, Stefano Ermon, Stella Biderman, Stephanie Lin, Stephen Prasad, Steven T. Piantadosi, Stuart M. Shieber, Summer Misherghi, Svetlana Kiritchenko, Swaroop Mishra, Tal Linzen, Tal Schuster, Tao Li, Tao Yu, Tariq Ali, Tatsu Hashimoto, Te-Lin Wu, Théo Desbordes, Theodore Rothschild, Thomas Phan, Tianle Wang, Tiberius Nkinyili, Timo Schick, Timofei Kornev, Titus Tunduny, Tobias Gerstenberg, Trenton Chang, Trishala Neeraj, Tushar Khot, Tyler Shultz, Uri Shaham, Vedant Misra, Vera Demberg, Victoria Nyamai, Vikas Raunak, Vinay Ramasesh, Vinay Uday Prabhu, Vishakh Padmakumar, Vivek Srikumar, William Fedus, William Saunders, William Zhang, Wout Vossen, Xiang Ren, Xiaoyu Tong, Xinran Zhao, Xinyi Wu, Xudong Shen, Yadollah Yaghoobzadeh, Yair Lakretz, Yangqiu Song, Yasaman Bahri, Yejin Choi, Yichi Yang, Yiding Hao, Yifu Chen, Yonatan Belinkov, Yu Hou, Yufang Hou, Yuntao Bai, Zachary Seid, Zhuoye Zhao, Zijian Wang, Zijie J. Wang, ZiRui Wang, Ziyi Wu
BIG-bench focuses on tasks that are believed to be beyond the capabilities of current language models.
1 code implementation • NAACL 2022 • Zi-Yi Dou, Nanyun Peng
The speaker-follower models have proven to be effective in vision-and-language navigation, where a speaker model is used to synthesize new instructions to augment the training data for a follower navigation model.
1 code implementation • 27 May 2022 • Tao Meng, Sidi Lu, Nanyun Peng, Kai-Wei Chang
We propose a general and efficient framework to control auto-regressive generation models with NeurAlly-Decomposed Oracle (NADO).
1 code implementation • 25 May 2022 • Tanmay Parekh, I-Hung Hsu, Kuan-Hao Huang, Kai-Wei Chang, Nanyun Peng
We utilize this ontology to further introduce GENEVA, a diverse generalizability benchmarking dataset comprising four test suites, aimed at evaluating models' ability to handle limited data and unseen event type generalization.
no code implementations • 25 May 2022 • Te-Lin Wu, Caiqi Zhang, Qingyuan Hu, Alex Spangher, Nanyun Peng
The ability to infer pre- and postconditions of an action is vital for comprehending complex instructions, and is essential for applications such as autonomous instruction-guided agents and assistive AI that supports humans to perform physical tasks.
no code implementations • 25 May 2022 • Jiao Sun, Yu Hou, Jiin Kim, Nanyun Peng
Then, we collect human annotations for the helpfulness of dialogue responses based on our definition and build a classifier to automatically determine the helpfulness of a response.
1 code implementation • 25 May 2022 • I-Hung Hsu, Kuan-Hao Huang, Shuning Zhang, Wenxin Cheng, Premkumar Natarajan, Kai-Wei Chang, Nanyun Peng
In this work, we propose to take a unified view of all these tasks and introduce TAGPRIME to address relational structure extraction problems.
1 code implementation • Findings (ACL) 2022 • Fabrice Harel-Canada, Muhammad Ali Gulzar, Nanyun Peng, Miryung Kim
The vast majority of text transformation techniques in NLP are inherently limited in their ability to expand input space coverage due to an implicit constraint to preserve the original class label.
1 code implementation • NAACL 2022 • Rujun Han, Hong Chen, Yufei Tian, Nanyun Peng
Stories or narratives are comprised of a sequence of events.
1 code implementation • NAACL 2022 • Anirudh Mittal, Yufei Tian, Nanyun Peng
In this paper, we propose a simple yet effective way to generate pun sentences that does not require any training on existing puns.
1 code implementation • NAACL 2022 • Yufei Tian, Nanyun Peng
Poetry generation, and creative language generation in general, usually suffers from the lack of large training data.
1 code implementation • 26 Mar 2022 • Ying Xu, Dakuo Wang, Mo Yu, Daniel Ritchie, Bingsheng Yao, Tongshuang Wu, Zheng Zhang, Toby Jia-Jun Li, Nora Bradford, Branda Sun, Tran Bao Hoang, Yisi Sang, Yufang Hou, Xiaojuan Ma, Diyi Yang, Nanyun Peng, Zhou Yu, Mark Warschauer
Through benchmarking with QG models, we show that the QG model trained on FairytaleQA is capable of asking high-quality and more diverse questions.
Ranked #1 on Question Generation on FairytaleQA
1 code implementation • ACL 2022 • Sarik Ghazarian, Nuan Wen, Aram Galstyan, Nanyun Peng
We also show that DEAM can distinguish between coherent and incoherent dialogues generated by baseline manipulations, whereas those baseline models cannot detect incoherent examples generated by DEAM.
1 code implementation • ACL 2022 • Kuan-Hao Huang, I-Hung Hsu, Premkumar Natarajan, Kai-Wei Chang, Nanyun Peng
We present a study on leveraging multilingual pre-trained generative language models for zero-shot cross-lingual event argument extraction (EAE).
1 code implementation • 1 Jan 2022 • Zi-Yi Dou, Nanyun Peng
In this paper, we instead focus on better utilizing the \textit{implicit knowledge} stored in pre-trained language models.
2 code implementations • CVPR 2022 • Zi-Yi Dou, Yichong Xu, Zhe Gan, Jianfeng Wang, Shuohang Wang, Lijuan Wang, Chenguang Zhu, Pengchuan Zhang, Lu Yuan, Nanyun Peng, Zicheng Liu, Michael Zeng
Vision-and-language (VL) pre-training has proven to be highly effective on various VL downstream tasks.
Ranked #20 on Cross-Modal Retrieval on COCO 2014 (using extra training data)
no code implementations • ACL 2022 • Te-Lin Wu, Alex Spangher, Pegah Alipoormolabashi, Marjorie Freedman, Ralph Weischedel, Nanyun Peng
The ability to sequence unordered events is an essential skill to comprehend and reason about real world task procedures, which often requires thorough understanding of temporal common sense and multimodal information, as these procedures are often communicated through a combination of texts and images.
1 code implementation • Findings (ACL) 2022 • Hao Sun, Guangxuan Xu, Jiawen Deng, Jiale Cheng, Chujie Zheng, Hao Zhou, Nanyun Peng, Xiaoyan Zhu, Minlie Huang
We propose a taxonomy for dialogue safety specifically designed to capture unsafe behaviors in human-bot dialogue settings, with a focus on context-sensitive unsafety, which is under-explored in prior works.
2 code implementations • NAACL 2022 • Vijit Malik, Sunipa Dev, Akihiro Nishi, Nanyun Peng, Kai-Wei Chang
Language representations are efficient tools used across NLP applications, but they are rife with encoded societal biases.
no code implementations • Findings (EMNLP) 2021 • Mingyu Derek Ma, Muhao Chen, Te-Lin Wu, Nanyun Peng
Taxonomies are valuable resources for many applications, but the limited coverage due to the expensive manual curation process hinders their general applicability.
1 code implementation • EMNLP 2021 • Da Yin, Liunian Harold Li, Ziniu Hu, Nanyun Peng, Kai-Wei Chang
Commonsense is defined as the knowledge that is shared by everyone.
Ranked #1 on Visual Commonsense Reasoning on GD-VCR
1 code implementation • EMNLP 2021 • Kung-Hsiang Huang, Sam Tang, Nanyun Peng
Document-level entity-based extraction (EE), aiming at extracting entity-centric information such as entity roles and entity relations, is key to automatic knowledge acquisition from text corpora for various domains.
Ranked #1 on Role-filler Entity Extraction on MUC-4
1 code implementation • Findings (EMNLP) 2021 • Yufei Tian, Arvind Krishna Sridhar, Nanyun Peng
A hyperbole is an intentional and creative exaggeration not to be taken literally.
no code implementations • COLING 2022 • Xiaofei Sun, Yufei Tian, Yuxian Meng, Nanyun Peng, Fei Wu, Jiwei Li, Chun Fan
Then, based on the paraphrase pairs produced by these UMT models, a unified surrogate model can be trained to serve as the final model to generate paraphrases, which can be used directly for testing in the unsupervised setup, or be finetuned on labeled datasets in the supervised setup.
2 code implementations • NAACL 2022 • I-Hung Hsu, Kuan-Hao Huang, Elizabeth Boschee, Scott Miller, Prem Natarajan, Kai-Wei Chang, Nanyun Peng
Given a passage and a manually designed prompt, DEGREE learns to summarize the events mentioned in the passage into a natural sentence that follows a predefined pattern.
no code implementations • 7 Aug 2021 • Sunipa Dev, Emily Sheng, Jieyu Zhao, Aubrie Amstutz, Jiao Sun, Yu Hou, Mattie Sanseverino, Jiin Kim, Akihiro Nishi, Nanyun Peng, Kai-Wei Chang
Recent studies show that Natural Language Processing (NLP) technologies propagate societal biases about demographic groups associated with attributes such as gender, race, and nationality.
1 code implementation • ACL 2021 • Jiao Sun, Nanyun Peng
Human activities can be seen as sequences of events, which are crucial to understanding societies.
1 code implementation • ACL 2021 • Kevin Stowe, Tuhin Chakrabarty, Nanyun Peng, Smaranda Muresan, Iryna Gurevych
Guided by conceptual metaphor theory, we propose to control the generation process by encoding conceptual mappings between cognitive domains to generate meaningful metaphoric expressions.
1 code implementation • Findings (ACL) 2021 • Shikhar Singh, Nuan Wen, Yu Hou, Pegah Alipoormolabashi, Te-Lin Wu, Xuezhe Ma, Nanyun Peng
To this end, we introduce a new commonsense reasoning benchmark dataset comprising natural language true/false statements, with each sample paired with its complementary counterpart, resulting in 4k sentence pairs.
no code implementations • NAACL 2021 • Emily Sheng, Kai-Wei Chang, Prem Natarajan, Nanyun Peng
Ad hominem attacks are those that target some feature of a person's character instead of the position the person is maintaining.
1 code implementation • ACL 2021 • Emily Sheng, Kai-Wei Chang, Premkumar Natarajan, Nanyun Peng
Technology for language generation has advanced rapidly, spurred by advancements in pre-training large models on massive amounts of data and the need for intelligent agents to communicate in a natural manner.
1 code implementation • 19 Apr 2021 • Alexander Spangher, Nanyun Peng, Jonathan May, Emilio Ferrara
Journalists publish statements provided by people, or "sources", to contextualize current events, help voters make informed decisions, and hold powerful individuals accountable.
no code implementations • 19 Apr 2021 • Alexander Spangher, Nanyun Peng, Jonathan May, Emilio Ferrara
Journalists obtain "leads", or story ideas, by reading large corpora of government records: court cases, proposed bills, etc.
1 code implementation • 18 Apr 2021 • Emily Sheng, Josh Arnold, Zhou Yu, Kai-Wei Chang, Nanyun Peng
Dialogue systems in the form of chatbots and personal assistants are being increasingly integrated into people's lives.
1 code implementation • EMNLP 2021 • Kuan-Hao Huang, Wasi Uddin Ahmad, Nanyun Peng, Kai-Wei Chang
Pre-trained multilingual language encoders, such as multilingual BERT and XLM-R, show great potential for zero-shot cross-lingual transfer.
1 code implementation • 16 Apr 2021 • Rujun Han, I-Hung Hsu, Jiao Sun, Julia Baylon, Qiang Ning, Dan Roth, Nanyun Peng
While these tasks partially evaluate machines' ability of narrative understanding, human-like reading comprehension requires the capability to process event-based information beyond arguments and temporal reasoning.
1 code implementation • NAACL 2021 • Sarik Ghazarian, Zixi Liu, Akash SM, Ralph Weischedel, Aram Galstyan, Nanyun Peng
We propose to tackle these issues by generating a more comprehensive set of implausible stories using plots, which are structured representations of controllable factors used to generate stories.
1 code implementation • NAACL 2021 • Tuhin Chakrabarty, Xurui Zhang, Smaranda Muresan, Nanyun Peng
Generating metaphors is a challenging task as it requires a proper understanding of abstract concepts, making connections between unrelated concepts, and deviating from the literal meaning.
no code implementations • 12 Feb 2021 • Sidi Lu, Tao Meng, Nanyun Peng
We propose InsNet, an expressive insertion-based text generator with efficient training and flexible decoding (parallel or sequential).
no code implementations • NAACL 2021 • Sarik Ghazarian, Zixi Liu, Tuhin Chakrabarty, Xuezhe Ma, Aram Galstyan, Nanyun Peng
Having engaging and informative conversations with users is the utmost goal for open-domain conversational systems.
1 code implementation • NAACL 2021 • Mingyu Derek Ma, Jiao Sun, Mu Yang, Kung-Hsiang Huang, Nuan Wen, Shikhar Singh, Rujun Han, Nanyun Peng
We present EventPlus, a temporal event understanding pipeline that integrates various state-of-the-art event understanding components including event trigger and type detection, event argument detection, event duration and temporal relation extraction.
no code implementations • 1 Jan 2021 • I-Hung Hsu, Xiao Guo, Premkumar Natarajan, Nanyun Peng
The ability to capture complex linguistic structures and long-term dependencies among words in the passage is essential for discourse-level relation extraction (DRE) tasks.
2 code implementations • EMNLP 2021 • Rujun Han, Xiang Ren, Nanyun Peng
While pre-trained language models (PTLMs) have achieved noticeable success on many NLP tasks, they still struggle for tasks that require event temporal reasoning, which is essential for event-centric applications.
Ranked #1 on Question Answering on Torque
1 code implementation • 28 Dec 2020 • Xiangci Li, Gully Burns, Nanyun Peng
Even for domain experts, it is a non-trivial task to verify a scientific claim by providing supporting or refuting evidence rationales.
no code implementations • 16 Dec 2020 • Te-Lin Wu, Shikhar Singh, Sayan Paul, Gully Burns, Nanyun Peng
We introduce a new dataset, MELINDA, for Multimodal biomEdicaL experImeNt methoD clAssification.
no code implementations • 10 Nov 2020 • Samar Haider, Luca Luceri, Ashok Deb, Adam Badawy, Nanyun Peng, Emilio Ferrara
Social media have been deliberately used for malicious purposes, including political manipulation and disinformation.
1 code implementation • 24 Oct 2020 • Emily Sheng, Kai-Wei Chang, Premkumar Natarajan, Nanyun Peng
Ad hominem attacks are those that target some feature of a person's character instead of the position the person is maintaining.
no code implementations • NAACL (NUSE) 2021 • Kung-Hsiang Huang, Nanyun Peng
Fully understanding narratives often requires identifying events in the context of whole documents and modeling the event relations.
1 code implementation • 6 Oct 2020 • Wasi Uddin Ahmad, Nanyun Peng, Kai-Wei Chang
Recent progress in cross-lingual relation and event extraction uses graph convolutional networks (GCNs) with universal dependency parses to learn language-agnostic sentence representations such that models trained on one language can be applied to other languages.
1 code implementation • EMNLP 2020 • Nader Akoury, Shufan Wang, Josh Whiting, Stephen Hood, Nanyun Peng, Mohit Iyyer
Systems for story generation are asked to produce plausible and enjoyable stories given an input context.
1 code implementation • EMNLP 2020 • Seraphina Goldfarb-Tarrant, Tuhin Chakrabarty, Ralph Weischedel, Nanyun Peng
Long-form narrative text generated from large language models manages a fluent impersonation of human writing, but only at the local sentence level, and lacks structure or global cohesion.
1 code implementation • Findings of the Association for Computational Linguistics 2020 • Kung-Hsiang Huang, Mu Yang, Nanyun Peng
To better recognize the trigger words, each sentence is first grounded to a sentence graph based on a jointly modeled hierarchical knowledge graph from UMLS.
Ranked #2 on Event Extraction on GENIA
1 code implementation • EMNLP 2020 • Tuhin Chakrabarty, Smaranda Muresan, Nanyun Peng
We also show how replacing literal sentences with similes from our best model in machine-generated stories improves evocativeness and leads to better acceptance by human judges.
1 code implementation • EMNLP 2020 • Rujun Han, Yichao Zhou, Nanyun Peng
Extracting event temporal relations is a critical task for information extraction and plays an important role in natural language understanding.
no code implementations • ACL 2020 • Tuhin Chakrabarty, Debanjan Ghosh, Smaranda Muresan, Nanyun Peng
We propose an unsupervised approach for sarcasm generation based on a non-sarcastic input sentence.
1 code implementation • Findings of the Association for Computational Linguistics 2020 • Peifeng Wang, Nanyun Peng, Filip Ilievski, Pedro Szekely, Xiang Ren
In this paper, we augment a general commonsense QA framework with a knowledgeable path generator.
2 code implementations • Findings of the Association for Computational Linguistics 2020 • Emily Sheng, Kai-Wei Chang, Premkumar Natarajan, Nanyun Peng
We present a general approach towards controllable societal biases in natural language generation (NLG).
no code implementations • EMNLP 2020 • Qiang Ning, Hao Wu, Rujun Han, Nanyun Peng, Matt Gardner, Dan Roth
A critical part of reading is being able to understand the temporal relationships between events described in a passage of text, even when those relationships are not explicitly stated.
Ranked #2 on Question Answering on Torque
1 code implementation • 28 Apr 2020 • Tuhin Chakrabarty, Debanjan Ghosh, Smaranda Muresan, Nanyun Peng
We propose an unsupervised approach for sarcasm generation based on a non-sarcastic input sentence.
1 code implementation • 10 Apr 2020 • Yufei Tian, Tuhin Chakrabarty, Fred Morstatter, Nanyun Peng
Perspective differences exist among different cultures or languages.
2 code implementations • 4 Nov 2019 • Sarik Ghazarian, Ralph Weischedel, Aram Galstyan, Nanyun Peng
In this paper, we investigate the possibility and efficacy of estimating utterance-level engagement and define a novel metric, predictive engagement, for automatic evaluation of open-domain dialogue systems.
no code implementations • IJCNLP 2019 • James Mullenbach, Jonathan Gordon, Nanyun Peng, Jonathan May
This provides evidence that the amount of commonsense knowledge encoded in these language models does not extend far beyond that already baked into the word embeddings.
1 code implementation • 24 Oct 2019 • Ninareh Mehrabi, Thamme Gowda, Fred Morstatter, Nanyun Peng, Aram Galstyan
We study the bias in several state-of-the-art named entity recognition (NER) models, specifically a difference in the ability to recognize male and female names as PERSON entity types.
1 code implementation • CONLL 2019 • Xiao Huang, Li Dong, Elizabeth Boschee, Nanyun Peng
Named entity recognition (NER) identifies typed entity mentions in raw text.
Ranked #14 on Named Entity Recognition (NER) on NCBI-disease
1 code implementation • CONLL 2019 • Rujun Han, I-Hung Hsu, Mu Yang, Aram Galstyan, Ralph Weischedel, Nanyun Peng
We propose a novel deep structured learning framework for event temporal relation extraction.
1 code implementation • CONLL 2019 • Wasi Uddin Ahmad, Zhisong Zhang, Xuezhe Ma, Kai-Wei Chang, Nanyun Peng
We conduct experiments on cross-lingual dependency parsing where we train a dependency parser on a source language and transfer it to a wide range of target languages.
1 code implementation • 18 Sep 2019 • Yiming Wang, Tongfei Chen, Hainan Xu, Shuoyang Ding, Hang Lv, Yiwen Shao, Nanyun Peng, Lei Xie, Shinji Watanabe, Sanjeev Khudanpur
We present Espresso, an open-source, modular, extensible end-to-end neural automatic speech recognition (ASR) toolkit based on the deep learning library PyTorch and the popular neural machine translation toolkit fairseq.
Ranked #1 on Speech Recognition on Hub5'00 CallHome
1 code implementation • EACL 2021 • Xiangci Li, Gully Burns, Nanyun Peng
We apply richly contextualized deep representation learning, pre-trained on a biomedical domain corpus, to the analysis of scientific discourse structures and the extraction of "evidence fragments" (i.e., the text in the results section describing data presented in a specified subfigure) from a set of biomedical experimental research articles.
no code implementations • IJCNLP 2019 • Xiaolei Huang, Jonathan May, Nanyun Peng
While recent work has shown promising results on cross-lingual transfer from high-resource languages to low-resource languages, it is unclear what knowledge is transferred.
1 code implementation • IJCNLP 2019 • Tao Meng, Nanyun Peng, Kai-Wei Chang
Experiments show that the Lagrangian relaxation and posterior regularization inference improve the performances on 15 and 17 out of 19 target languages, respectively.
1 code implementation • IJCNLP 2019 • Emily Sheng, Kai-Wei Chang, Premkumar Natarajan, Nanyun Peng
We present a systematic study of biases in natural language generation (NLG) by analyzing text generated from prompts that contain mentions of different demographic groups.
no code implementations • IJCNLP 2019 • Rujun Han, Qiang Ning, Nanyun Peng
We propose a joint event and temporal relation extraction model with shared representation learning and structured prediction.
no code implementations • 26 Apr 2019 • Rujun Han, Mengyue Liang, Bashar Alhafni, Nanyun Peng
In this work, we establish strong baselines for event temporal relation extraction on two under-explored story narrative datasets: Richer Event Description (RED) and Causal and Temporal Relation Scheme (CaTeRS).
no code implementations • WS 2019 • Sarik Ghazarian, Johnny Tian-Zheng Wei, Aram Galstyan, Nanyun Peng
Despite advances in open-domain dialogue systems, automatic evaluation of such systems is still a challenging problem.
2 code implementations • NAACL 2019 • He He, Nanyun Peng, Percy Liang
We tackle the problem of generating a pun sentence given a pair of homophones (e.g., "died" and "dyed").
1 code implementation • NAACL 2019 • Seraphina Goldfarb-Tarrant, Haining Feng, Nanyun Peng
We compare different varieties of interaction in story-writing, story-planning, and diversity controls under time constraints, and show that more types of human collaboration at both the planning and writing stages result in a 10-50% improvement in story quality compared to less interactive baselines.
2 code implementations • 14 Nov 2018 • Lili Yao, Nanyun Peng, Ralph Weischedel, Kevin Knight, Dongyan Zhao, Rui Yan
Automatic storytelling is challenging since it requires generating long, coherent natural language to describe a sensible sequence of events.
2 code implementations • NAACL 2019 • Wasi Uddin Ahmad, Zhisong Zhang, Xuezhe Ma, Eduard Hovy, Kai-Wei Chang, Nanyun Peng
Different languages might have different word orders.
no code implementations • WS 2018 • Nanyun Peng, Marjan Ghazvininejad, Jonathan May, Kevin Knight
We present a general framework of analyzing existing story corpora to generate controllable and creative new stories.
no code implementations • NAACL 2018 • Xiang Ren, Nanyun Peng, William Yang Wang
In today's information-based society, there is abundant knowledge carried in the form of natural language texts (e.g., news articles, social media posts, scientific publications) that span various domains (e.g., corporate documents, advertisements, legal acts, medical reports) and grow at an astonishing rate.
3 code implementations • ACL 2018 • Xuezhe Ma, Zecong Hu, Jingzhou Liu, Nanyun Peng, Graham Neubig, Eduard Hovy
Combining pointer networks (Vinyals et al., 2015) with an internal stack, the proposed model first reads and encodes the whole sentence, then builds the dependency tree top-down (from root to leaf) in a depth-first fashion.
Ranked #14 on Dependency Parsing on Penn Treebank
no code implementations • 21 Apr 2018 • Wasi Uddin Ahmad, Xueying Bai, Zhechao Huang, Chao Jiang, Nanyun Peng, Kai-Wei Chang
Learning distributed sentence representations is one of the key challenges in natural language processing.
2 code implementations • 18 Nov 2017 • Zhenxin Fu, Xiaoye Tan, Nanyun Peng, Dongyan Zhao, Rui Yan
Results show that the proposed content preservation metric correlates highly with human judgments, and the proposed models are able to generate sentences with higher style transfer strength and similar content preservation scores compared to an auto-encoder.
Ranked #5 on Unsupervised Text Style Transfer on Yelp
no code implementations • IJCNLP 2017 • Dingquan Wang, Nanyun Peng, Kevin Duh
We show how to adapt bilingual word embeddings (BWEs) to bootstrap a cross-lingual named-entity recognition (NER) system in a language with no labeled data.
no code implementations • TACL 2017 • Nanyun Peng, Hoifung Poon, Chris Quirk, Kristina Toutanova, Wen-tau Yih
Past work in relation extraction has focused on binary relations in single sentences.
no code implementations • WS 2017 • Nanyun Peng, Mark Dredze
Many domain adaptation approaches rely on learning cross domain shared representations to transfer the knowledge learned in one domain to other domains.
no code implementations • ACL 2016 • Nanyun Peng, Mark Dredze
Named entity recognition, and other information extraction tasks, frequently use linguistic features such as part-of-speech tags or chunking.
no code implementations • TACL 2015 • Ryan Cotterell, Nanyun Peng, Jason Eisner
Given some surface word types of a concatenative language along with the abstract morpheme sequences that they express, we show how to recover consistent underlying forms for these morphemes, together with the (stochastic) phonology that maps each concatenation of underlying forms to a surface form.