Search Results for author: Pei Zhou

Found 22 papers, 7 papers with code

Self-Discover: Large Language Models Self-Compose Reasoning Structures

2 code implementations • 6 Feb 2024 • Pei Zhou, Jay Pujara, Xiang Ren, Xinyun Chen, Heng-Tze Cheng, Quoc V. Le, Ed H. Chi, Denny Zhou, Swaroop Mishra, Huaixiu Steven Zheng

We introduce SELF-DISCOVER, a general framework for LLMs to self-discover the task-intrinsic reasoning structures to tackle complex reasoning problems that are challenging for typical prompting methods.

Math

How FaR Are Large Language Models From Agents with Theory-of-Mind?

no code implementations • 4 Oct 2023 • Pei Zhou, Aman Madaan, Srividya Pranavi Potharaju, Aditya Gupta, Kevin R. McKee, Ari Holtzman, Jay Pujara, Xiang Ren, Swaroop Mishra, Aida Nematzadeh, Shyam Upadhyay, Manaal Faruqui

We propose a new evaluation paradigm for large language models (LLMs): Thinking for Doing (T4D), which requires models to connect inferences about others' mental states to actions in social scenarios.

In-Context Learning • Question Answering

Reflect, Not Reflex: Inference-Based Common Ground Improves Dialogue Response Quality

no code implementations • 16 Nov 2022 • Pei Zhou, Hyundong Cho, Pegah Jandaghi, Dong-Ho Lee, Bill Yuchen Lin, Jay Pujara, Xiang Ren

Human communication relies on common ground (CG), the mutual knowledge and beliefs shared by participants, to produce coherent and interesting conversations.

Response Generation

The Role of Facial Expressions and Emotion in ASL

no code implementations • 19 Jan 2022 • Lee Kezar, Pei Zhou

There is little prior work on quantifying the relationships between facial expressions and emotionality in American Sign Language.

An RF-source-free microwave photonic radar with an optically injected semiconductor laser for high-resolution detection and imaging

no code implementations • 11 Jun 2021 • Pei Zhou, Rengheng Zhang, Nianqiang Li, Zhidong Jiang, Shilong Pan

This paper presents a novel microwave photonic (MWP) radar scheme that is capable of optically generating and processing broadband linear frequency-modulated (LFM) microwave signals without using any radio-frequency (RF) sources.

Go Beyond Plain Fine-tuning: Improving Pretrained Models for Social Commonsense

no code implementations • 12 May 2021 • Ting-Yun Chang, Yang Liu, Karthik Gopalakrishnan, Behnam Hedayatnia, Pei Zhou, Dilek Hakkani-Tur

Towards improving language models' social intelligence, we focus on the Social IQA dataset, a task requiring social and emotional commonsense reasoning.

Lawyers are Dishonest? Quantifying Representational Harms in Commonsense Knowledge Resources

no code implementations • EMNLP 2021 • Ninareh Mehrabi, Pei Zhou, Fred Morstatter, Jay Pujara, Xiang Ren, Aram Galstyan

In addition, we analyze two downstream models that use ConceptNet as a source for commonsense knowledge and find biases in those models as well.

RICA: Evaluating Robust Inference Capabilities Based on Commonsense Axioms

no code implementations • EMNLP 2021 • Pei Zhou, Rahul Khanna, Seyeon Lee, Bill Yuchen Lin, Daniel Ho, Jay Pujara, Xiang Ren

Pre-trained language models (PTLMs) have achieved impressive performance on commonsense inference benchmarks, but their ability to employ commonsense to make robust inferences, which is crucial for effective communications with humans, is debated.

Retrofitting Contextualized Word Embeddings with Paraphrases

no code implementations • IJCNLP 2019 • Weijia Shi, Muhao Chen, Pei Zhou, Kai-Wei Chang

Contextualized word embedding models, such as ELMo, generate meaningful representations of words and their context.

Sentence • Sentence Classification +1

Quantification and Analysis of Scientific Language Variation Across Research Fields

no code implementations • 4 Dec 2018 • Pei Zhou, Muhao Chen, Kai-Wei Chang, Carlo Zaniolo

Quantifying differences in terminology across academic domains is a longstanding and still unsolved problem.

Language Modelling
