1 code implementation • EMNLP 2021 • Jiaao Chen, Diyi Yang
Abstractive conversation summarization has received growing attention, yet most current state-of-the-art summarization models rely heavily on human-annotated summaries.
Abstractive Dialogue Summarization • Conversation Summarization • +1
no code implementations • Findings (ACL) 2022 • Kexun Zhang, Jiaao Chen, Diyi Yang
Automatic email to-do item generation is the task of generating to-do items from a given email to help people get an overview of their emails and schedule their daily work.
no code implementations • 26 Dec 2024 • Jiaao Chen, Diyi Yang
Compared with previous work, which learns from human-curated, static data in random order, we propose to first automatically generate and organize the training data by mimicking human learning pathways, and then to dynamically tailor the training data based on training dynamics.
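The two-stage recipe described above (organize training data along human-like learning pathways, then adapt it to training dynamics) can be pictured with a toy sketch. Everything below, including the difficulty scores, loss values, and helper names, is an illustrative assumption, not the paper's actual algorithm.

```python
import random

def order_by_difficulty(examples, difficulty):
    # Easy-to-hard ordering: a crude stand-in for organizing data along
    # human-like learning pathways (the difficulty scores are assumed).
    return sorted(examples, key=difficulty)

def resample_by_dynamics(examples, loss, k, rng=random.Random(0)):
    # Dynamic tailoring: oversample examples the model still gets wrong
    # (high current loss), i.e. adapt the data mix to training dynamics.
    weights = [loss[ex] for ex in examples]
    return rng.choices(examples, weights=weights, k=k)

examples = ["easy fact", "medium skill", "hard composition"]
difficulty = {"easy fact": 1, "medium skill": 2, "hard composition": 3}
curriculum = order_by_difficulty(examples, difficulty.get)
loss = {"easy fact": 0.1, "medium skill": 0.7, "hard composition": 1.5}
print(curriculum)
print(resample_by_dynamics(curriculum, loss, k=2))
```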
no code implementations • 2 Dec 2024 • Rui Ye, Xianghe Pang, Jingyi Chai, Jiaao Chen, Zhenfei Yin, Zhen Xiang, Xiaowen Dong, Jing Shao, Siheng Chen
However, the unchecked adoption of LLMs poses significant risks to the integrity of the peer review system.
1 code implementation • 25 Jun 2024 • Zhehao Zhang, Jiaao Chen, Diyi Yang
In this work, we introduce Dynamic Evaluation of LLMs via Adaptive Reasoning Graph Evolvement (DARG) to dynamically extend current benchmarks with controlled complexity and diversity.
no code implementations • 16 Nov 2023 • Yanchen Liu, Mingyu Derek Ma, Wenna Qin, Azure Zhou, Jiaao Chen, Weiyan Shi, Wei Wang, Diyi Yang
Using COVID-19 as a testbed domain, our experiments demonstrate a significant alignment between the susceptibility scores estimated by our computational modeling and human judgments, confirming the effectiveness of this latent modeling approach.
1 code implementation • 31 Oct 2023 • Jiaao Chen, Diyi Yang
Large language models (LLMs) have achieved significant progress from pre-training on and memorizing a wide range of textual data; however, this process might suffer from privacy issues and violations of data protection regulations.
1 code implementation • 29 Sep 2023 • Kaijie Zhu, Jiaao Chen, Jindong Wang, Neil Zhenqiang Gong, Diyi Yang, Xing Xie
Moreover, DyVal-generated samples are not only evaluation sets, but also helpful data for fine-tuning to improve the performance of LLMs on existing benchmarks.
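As a rough illustration of graph-based dynamic sample generation, the sketch below builds a tiny arithmetic DAG and renders it as a question/answer pair. The operators, value ranges, and rendering are assumptions made for illustration, not the DyVal construction itself.

```python
import operator
import random

OPS = {"+": operator.add, "-": operator.sub, "*": operator.mul}

def generate_sample(depth: int, seed: int = 0):
    # Build a tiny arithmetic DAG: leaves hold constants, each new node
    # combines two earlier nodes, and the question asks for the final node.
    rng = random.Random(seed)
    values = [rng.randint(1, 9), rng.randint(1, 9)]
    steps = [f"x0 = {values[0]}", f"x1 = {values[1]}"]
    for i in range(2, 2 + depth):
        a, b = rng.sample(range(i), 2)
        sym = rng.choice(list(OPS))
        values.append(OPS[sym](values[a], values[b]))
        steps.append(f"x{i} = x{a} {sym} x{b}")
    question = "; ".join(steps) + f". What is x{len(values) - 1}?"
    return question, values[-1]

# Increasing `depth` yields harder, freshly generated evaluation samples.
print(generate_sample(depth=3))
```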
no code implementations • 1 Aug 2023 • Jiaao Chen, Xiaoman Pan, Dian Yu, Kaiqiang Song, Xiaoyang Wang, Dong Yu, Jianshu Chen
We investigate how to elicit compositional generalization capabilities in large language models (LLMs).
Ranked #34 on Math Word Problem Solving on MATH
1 code implementation • 12 Apr 2023 • Caleb Ziems, William Held, Omar Shaikh, Jiaao Chen, Zhehao Zhang, Diyi Yang
We conclude that the performance of today's LLMs can augment the CSS research pipeline in two ways: (1) serving as zero-shot data annotators on human annotation teams, and (2) bootstrapping challenging creative generation tasks (e.g., explaining the underlying attributes of a text).
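A minimal sketch of the first use case, an LLM acting as a zero-shot annotator: the `complete` callable is a placeholder for any text-completion backend, and the label set and prompt wording are assumptions, not the paper's protocol.

```python
def build_annotation_prompt(text: str, labels: list[str]) -> str:
    # Ask the model to choose exactly one label from a fixed set.
    return (
        f"Label the following text with exactly one of: {', '.join(labels)}.\n"
        f"Text: {text}\n"
        "Label:"
    )

def annotate(texts, labels, complete):
    # `complete` is any prompt -> completion callable (a placeholder, not a real API).
    annotations = []
    for text in texts:
        raw = complete(build_annotation_prompt(text, labels)).strip()
        # Fall back to the first label if the answer is off-vocabulary.
        annotations.append(raw if raw in labels else labels[0])
    return annotations

# Dummy completion function standing in for an LLM call.
dummy = lambda prompt: "polite"
print(annotate(["Thanks so much for your help!"], ["polite", "impolite"], dummy))
```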
1 code implementation • 10 Apr 2023 • Jiaao Chen, Aston Zhang, Mu Li, Alex Smola, Diyi Yang
Diffusion models that are based on iterative denoising have been recently proposed and leveraged in various generation tasks like image generation.
1 code implementation • 8 Feb 2023 • Chengwei Qin, Aston Zhang, Zhuosheng Zhang, Jiaao Chen, Michihiro Yasunaga, Diyi Yang
Spurred by advancements in scale, large language models (LLMs) have demonstrated the ability to perform a variety of natural language processing (NLP) tasks zero-shot -- i.e., without adaptation on downstream data.
no code implementations • 4 Jan 2023 • Jiaao Chen, Aston Zhang, Xingjian Shi, Mu Li, Alex Smola, Diyi Yang
We discover the following design patterns: (i) group layers in a spindle pattern; (ii) allocate the number of trainable parameters to layers uniformly; (iii) tune all the groups; (iv) assign proper tuning strategies to different groups.
no code implementations • 19 Dec 2022 • Jiaao Chen, Mohan Dodda, Diyi Yang
Specifically, we ask humans to highlight the salient information to be included in summaries to provide local feedback, and to make overall comparisons among summaries in terms of coherence, accuracy, coverage, conciseness, and overall quality as global feedback.
1 code implementation • 31 Oct 2022 • Raj Sanjay Shah, Kunal Chawla, Dheeraj Eidnani, Agam Shah, Wendi Du, Sudheer Chava, Natraj Raman, Charese Smiley, Jiaao Chen, Diyi Yang
To this end, we contribute the Financial Language Understanding Evaluation (FLUE), an open-source comprehensive suite of benchmarks for the financial domain.
1 code implementation • ACL 2022 • Caleb Ziems, Jiaao Chen, Camille Harris, Jessica Anderson, Diyi Yang
To understand disparities in current models and to facilitate more dialect-competent NLU systems, we introduce the VernAcular Language Understanding Evaluation (VALUE) benchmark, a challenging variant of GLUE that we created with a set of lexical and morphosyntactic transformation rules.
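For intuition, the toy function below applies one such morphosyntactic transformation, present-tense copula deletion, with a naive regex; VALUE's actual rules are linguistically grounded and far more careful, so treat this purely as an illustration.

```python
import re

def drop_copula(sentence: str) -> str:
    # Toy version of one morphosyntactic rule (present-tense copula deletion);
    # real benchmark rules operate on validated linguistic structure, not regexes.
    return re.sub(r"\b(?:is|are) ", "", sentence, count=1)

print(drop_copula("She is walking to the store."))  # -> "She walking to the store."
```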
1 code implementation • Findings (ACL) 2022 • Aaron Reich, Jiaao Chen, Aastha Agrawal, Yanzhe Zhang, Diyi Yang
We found that state-of-the-art NER systems trained on CoNLL 2003 training data suffer dramatic performance drops on our challenging set.
1 code implementation • ACL 2021 • Jiaao Chen, Dinghan Shen, Weizhu Chen, Diyi Yang
Fine-tuning large pre-trained models with task-specific data has achieved great success in NLP.
no code implementations • 14 Jun 2021 • Jiaao Chen, Derek Tam, Colin Raffel, Mohit Bansal, Diyi Yang
NLP has achieved great progress in the past decade through the use of neural models and large labeled datasets.
1 code implementation • 31 May 2021 • Jiaao Chen, Dinghan Shen, Weizhu Chen, Diyi Yang
Fine-tuning large pre-trained models with task-specific data has achieved great success in NLP.
1 code implementation • NAACL 2021 • Jiaao Chen, Diyi Yang
Abstractive conversation summarization has received much attention recently.
1 code implementation • NAACL 2021 • Yufan Huang, Yanzhe Zhang, Jiaao Chen, Xuezhi Wang, Diyi Yang
Continual learning has become increasingly important as it enables NLP models to constantly learn and gain knowledge over time.
1 code implementation • 16 Jan 2021 • Jiaao Chen, Diyi Yang
Modeling persuasive language has the potential to better facilitate our decision-making processes.
1 code implementation • Findings of the Association for Computational Linguistics 2020 • Omar Shaikh, Jiaao Chen, Jon Saad-Falcon, Duen Horng Chau, Diyi Yang
We find that specific (orderings of) strategies interact uniquely with a request's content to impact success rate, and thus the persuasiveness of a request.
1 code implementation • EMNLP 2020 • Jiaao Chen, Diyi Yang
Text summarization is one of the most challenging and interesting problems in NLP.
1 code implementation • EMNLP 2020 • Jiaao Chen, Zhenghui Wang, Ran Tian, Zichao Yang, Diyi Yang
Named Entity Recognition (NER) is one of the first stages in deep language understanding yet current NER models heavily rely on human-annotated data.
2 code implementations • ACL 2020 • Jiaao Chen, Zichao Yang, Diyi Yang
This paper presents MixText, a semi-supervised learning method for text classification, which uses our newly designed data augmentation method called TMix.
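A minimal sketch of the TMix idea, interpolating the hidden states and labels of two samples with a Beta-distributed coefficient; the tensor shapes and hyperparameters below are placeholders, and the full method mixes inside a BERT encoder at a sampled layer during semi-supervised training.

```python
import torch

def tmix(hidden_a, hidden_b, label_a, label_b, alpha: float = 0.75):
    # Sample a mixing coefficient and interpolate both hidden states and labels.
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    lam = max(lam, 1 - lam)  # keep the mix closer to the first sample
    mixed_hidden = lam * hidden_a + (1 - lam) * hidden_b
    mixed_label = lam * label_a + (1 - lam) * label_b
    return mixed_hidden, mixed_label

# Toy usage with random "hidden states" and one-hot labels.
h_a, h_b = torch.randn(1, 16, 768), torch.randn(1, 16, 768)
y_a, y_b = torch.tensor([1.0, 0.0]), torch.tensor([0.0, 1.0])
h_mix, y_mix = tmix(h_a, h_b, y_a, y_b)
print(h_mix.shape, y_mix)
```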
1 code implementation • 23 Apr 2020 • Jiaao Chen, Yuwei Wu, Diyi Yang
We present semi-supervised models with data augmentation (SMDA), a semi-supervised text classification system to classify interactive affective responses.
no code implementations • NAACL 2019 • Diyi Yang, Jiaao Chen, Zichao Yang, Dan Jurafsky, Eduard Hovy
Modeling what makes a request persuasive - eliciting the desired response from a reader - is critical to the study of propaganda, behavioral economics, and advertising.
no code implementations • 1 Nov 2018 • Jiaao Chen, Jianshu Chen, Zhou Yu
The ability to select an appropriate story ending is the first step towards perfect narrative comprehension.