no code implementations • EMNLP (NLP4ConvAI) 2021 • Pei Zhou, Behnam Hedayatnia, Karthik Gopalakrishnan, Seokhwan Kim, Jay Pujara, Xiang Ren, Yang Liu, Dilek Hakkani-Tur
We further investigate whether such models can identify when to generate implicit background knowledge and when it is not necessary.
2 code implementations • 6 Feb 2024 • Pei Zhou, Jay Pujara, Xiang Ren, Xinyun Chen, Heng-Tze Cheng, Quoc V. Le, Ed H. Chi, Denny Zhou, Swaroop Mishra, Huaixiu Steven Zheng
We introduce SELF-DISCOVER, a general framework for LLMs to self-discover the task-intrinsic reasoning structures to tackle complex reasoning problems that are challenging for typical prompting methods.
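As a rough illustration of the idea, self-discovery composes a task-specific reasoning structure from atomic reasoning modules before solving instances (the paper describes SELECT, ADAPT, and IMPLEMENT stages). The sketch below is a minimal, hypothetical rendering of that pipeline; `call_llm`, the module list, and all prompt strings are placeholder assumptions, not the authors' implementation.

```python
# Hypothetical sketch of the SELF-DISCOVER pipeline:
# SELECT -> ADAPT -> IMPLEMENT a reasoning structure, then follow it.
# `call_llm` is a stand-in for any LLM API, not a real endpoint.

REASONING_MODULES = [
    "Break the problem into sub-problems.",
    "Think step by step.",
    "Question critical assumptions.",
]

def call_llm(prompt: str) -> str:
    # Placeholder: a real implementation would query an LLM here.
    return f"[LLM response to: {prompt[:40]}...]"

def self_discover(task_examples: list[str]) -> str:
    """Compose a task-intrinsic reasoning structure (once per task)."""
    examples = "\n".join(task_examples)
    selected = call_llm(
        f"Select reasoning modules useful for these task examples:\n"
        f"{examples}\nModules: {REASONING_MODULES}"
    )
    adapted = call_llm(f"Adapt the selected modules to this task:\n{selected}")
    structure = call_llm(
        f"Implement the adapted modules as a step-by-step reasoning "
        f"structure:\n{adapted}"
    )
    return structure

def solve(task_instance: str, structure: str) -> str:
    """Solve a single instance by following the discovered structure."""
    return call_llm(
        f"Follow this reasoning structure:\n{structure}\n"
        f"to solve:\n{task_instance}"
    )
```

Because the structure is discovered once per task rather than per instance, the per-instance cost stays close to a single prompted call.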
1 code implementation • 19 Oct 2023 • Aman Madaan, Pranjal Aggarwal, Ankit Anand, Srividya Pranavi Potharaju, Swaroop Mishra, Pei Zhou, Aditya Gupta, Dheeraj Rajagopal, Karthik Kappaganthu, Yiming Yang, Shyam Upadhyay, Mausam, Manaal Faruqui
Large language models (LLMs) are now available from cloud API providers in various sizes and configurations.
no code implementations • 4 Oct 2023 • Pei Zhou, Aman Madaan, Srividya Pranavi Potharaju, Aditya Gupta, Kevin R. McKee, Ari Holtzman, Jay Pujara, Xiang Ren, Swaroop Mishra, Aida Nematzadeh, Shyam Upadhyay, Manaal Faruqui
We propose a new evaluation paradigm for large language models (LLMs): Thinking for Doing (T4D), which requires models to connect inferences about others' mental states to actions in social scenarios.
1 code implementation • 20 Dec 2022 • Hyunwoo Kim, Jack Hessel, Liwei Jiang, Peter West, Ximing Lu, Youngjae Yu, Pei Zhou, Ronan Le Bras, Malihe Alikhani, Gunhee Kim, Maarten Sap, Yejin Choi
Data scarcity has been a long-standing issue in the field of open-domain social dialogue.
no code implementations • 20 Dec 2022 • Pei Zhou, Andrew Zhu, Jennifer Hu, Jay Pujara, Xiang Ren, Chris Callison-Burch, Yejin Choi, Prithviraj Ammanabrolu
We propose a novel task, G4C, to study teacher-student natural language interactions in a goal-driven and grounded environment.
no code implementations • 16 Nov 2022 • Pei Zhou, Hyundong Cho, Pegah Jandaghi, Dong-Ho Lee, Bill Yuchen Lin, Jay Pujara, Xiang Ren
Human communication relies on common ground (CG), the mutual knowledge and beliefs shared by participants, to produce coherent and interesting conversations.
1 code implementation • 16 Sep 2022 • Zhiping Xiao, Jeffrey Zhu, Yining Wang, Pei Zhou, Wen Hong Lam, Mason A. Porter, Yizhou Sun
We examine a variety of applications and thereby demonstrate the effectiveness of our PEM model.
no code implementations • 19 Jan 2022 • Lee Kezar, Pei Zhou
There is little prior work on quantifying the relationships between facial expressions and emotionality in American Sign Language.
no code implementations • 31 Dec 2021 • Dawei Wang, Lingping Gao, Ziquan Lan, Wei Li, Jiaping Ren, Jiahui Zhang, Peng Zhang, Pei Zhou, Shengao Wang, Jia Pan, Dinesh Manocha, Ruigang Yang
Recently, there have been many advances in the autonomous driving community, attracting significant attention from both academia and industry.
no code implementations • ACL 2022 • Pei Zhou, Karthik Gopalakrishnan, Behnam Hedayatnia, Seokhwan Kim, Jay Pujara, Xiang Ren, Yang Liu, Dilek Hakkani-Tur
Implicit knowledge, such as common sense, is key to fluid human conversations.
1 code implementation • SIGDIAL (ACL) 2021 • Pei Zhou, Karthik Gopalakrishnan, Behnam Hedayatnia, Seokhwan Kim, Jay Pujara, Xiang Ren, Yang Liu, Dilek Hakkani-Tur
Moreover, existing dialogue datasets do not explicitly focus on exhibiting commonsense as a facet.
no code implementations • 11 Jun 2021 • Pei Zhou, Rengheng Zhang, Nianqiang Li, Zhidong Jiang, Shilong Pan
This paper presents a novel microwave photonic (MWP) radar scheme that is capable of optically generating and processing broadband linear frequency-modulated (LFM) microwave signals without using any radio-frequency (RF) sources.
no code implementations • EMNLP (DeeLIO) 2020 • Ting-Yun Chang, Yang Liu, Karthik Gopalakrishnan, Behnam Hedayatnia, Pei Zhou, Dilek Hakkani-Tur
Pretrained language models have excelled at many NLP tasks recently; however, their social intelligence is still unsatisfactory.
no code implementations • 12 May 2021 • Ting-Yun Chang, Yang Liu, Karthik Gopalakrishnan, Behnam Hedayatnia, Pei Zhou, Dilek Hakkani-Tur
Towards improving language models' social intelligence, we focus on the Social IQA dataset, a task requiring social and emotional commonsense reasoning.
no code implementations • Findings (EMNLP) 2021 • Pei Zhou, Pegah Jandaghi, Bill Yuchen Lin, Justin Cho, Jay Pujara, Xiang Ren
Humans use commonsense reasoning (CSR) implicitly to produce natural and coherent responses in conversations.
no code implementations • EMNLP 2021 • Ninareh Mehrabi, Pei Zhou, Fred Morstatter, Jay Pujara, Xiang Ren, Aram Galstyan
In addition, we analyze two downstream models that use ConceptNet as a source for commonsense knowledge and find the existence of biases in those models as well.
no code implementations • EMNLP 2021 • Pei Zhou, Rahul Khanna, Seyeon Lee, Bill Yuchen Lin, Daniel Ho, Jay Pujara, Xiang Ren
Pre-trained language models (PTLMs) have achieved impressive performance on commonsense inference benchmarks, but their ability to employ commonsense to make robust inferences, which is crucial for effective communications with humans, is debated.
2 code implementations • Findings of the Association for Computational Linguistics 2020 • Bill Yuchen Lin, Wangchunshu Zhou, Ming Shen, Pei Zhou, Chandra Bhagavatula, Yejin Choi, Xiang Ren
In this paper, we present a constrained text generation task, CommonGen, associated with a benchmark dataset, to explicitly test machines for the ability of generative commonsense reasoning.
no code implementations • IJCNLP 2019 • Weijia Shi, Muhao Chen, Pei Zhou, Kai-Wei Chang
Contextualized word embedding models, such as ELMo, generate meaningful representations of words and their context.
1 code implementation • IJCNLP 2019 • Pei Zhou, Weijia Shi, Jieyu Zhao, Kuan-Hao Huang, Muhao Chen, Ryan Cotterell, Kai-Wei Chang
Recent studies have shown that word embeddings exhibit gender bias inherited from the training corpora.
no code implementations • 4 Dec 2018 • Pei Zhou, Muhao Chen, Kai-Wei Chang, Carlo Zaniolo
Quantifying differences in terminologies across academic domains is a long-standing, unsolved problem.