Search Results for author: Haofei Yu

Found 11 papers, 9 papers with code

Table as Thought: Exploring Structured Thoughts in LLM Reasoning

no code implementations • 4 Jan 2025 • Zhenjie Sun, Naihao Deng, Haofei Yu, Jiaxuan You

Large language models' reasoning abilities benefit from methods that organize their thought processes, such as chain-of-thought prompting, which employs a sequential structure to guide reasoning step by step.

Mathematical Reasoning
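The contrast between sequential and structured thoughts can be sketched as follows. This is a hypothetical illustration: the table schema (`step`, `operation`, `result`) is an assumption for demonstration, not taken from the paper.

```python
# Hypothetical sketch contrasting a sequential chain-of-thought prompt with a
# table-structured prompt. The table columns below are illustrative
# assumptions, not the schema used in "Table as Thought".

QUESTION = "A shop sells pens at $2 each. How much do 3 pens cost?"

def chain_of_thought_prompt(question: str) -> str:
    """Sequential structure: ask the model to reason in free-form steps."""
    return f"{question}\nLet's think step by step."

def table_thought_prompt(question: str, columns: list[str]) -> str:
    """Structured alternative: ask the model to fill a reasoning table."""
    header = " | ".join(columns)
    return f"{question}\nFill in one row per reasoning step:\n{header}"

print(chain_of_thought_prompt(QUESTION))
print(table_thought_prompt(QUESTION, ["step", "operation", "result"]))
```

The structured variant constrains each reasoning step to named fields, which is the kind of organization the paper explores.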

ResearchTown: Simulator of Human Research Community

1 code implementation • 23 Dec 2024 • Haofei Yu, Zhaochen Hong, Zirui Cheng, Kunlun Zhu, Keyang Xuan, Jinwei Yao, Tao Feng, Jiaxuan You

Our experiments reveal three key findings: (1) ResearchTown can provide a realistic simulation of collaborative research activities, including paper writing and review writing; (2) ResearchTown can maintain robust simulation with multiple researchers and diverse papers; (3) ResearchTown can generate interdisciplinary research ideas that potentially inspire novel research directions.

In-Context Learning May Not Elicit Trustworthy Reasoning: A-Not-B Errors in Pretrained Language Models

1 code implementation • 23 Sep 2024 • Pengrui Han, Peiyang Song, Haofei Yu, Jiaxuan You

Recent advancements in artificial intelligence have led to the creation of highly capable large language models (LLMs) that can perform tasks in a human-like manner.

In-Context Learning

HEMM: Holistic Evaluation of Multimodal Foundation Models

1 code implementation • 3 Jul 2024 • Paul Pu Liang, Akshay Goindani, Talha Chafekar, Leena Mathur, Haofei Yu, Ruslan Salakhutdinov, Louis-Philippe Morency

Through comprehensive experiments across the 30 tasks in HEMM, we (1) identify key dataset dimensions (e.g., basic skills, information flows, and use cases) that pose challenges to today's models, and (2) distill performance trends regarding how different modeling dimensions (e.g., scale, pre-training data, multimodal alignment, pre-training, and instruction tuning objectives) influence performance.

SOTOPIA-$π$: Interactive Learning of Socially Intelligent Language Agents

1 code implementation • 13 Mar 2024 • Ruiyi Wang, Haofei Yu, Wenxin Zhang, Zhengyang Qi, Maarten Sap, Graham Neubig, Yonatan Bisk, Hao Zhu

Motivated by this gap, we propose an interactive learning method, SOTOPIA-$\pi$, that improves the social intelligence of language agents.

Language Modeling • Language Modelling • +2

TRAMS: Training-free Memory Selection for Long-range Language Modeling

1 code implementation • 24 Oct 2023 • Haofei Yu, Cunxiang Wang, Yue Zhang, Wei Bi

The Transformer architecture is crucial for numerous AI models, but it still faces challenges in long-range language modeling.

Language Modeling • Language Modelling

SOTOPIA: Interactive Evaluation for Social Intelligence in Language Agents

2 code implementations • 18 Oct 2023 • Xuhui Zhou, Hao Zhu, Leena Mathur, Ruohong Zhang, Haofei Yu, Zhengyang Qi, Louis-Philippe Morency, Yonatan Bisk, Daniel Fried, Graham Neubig, Maarten Sap

We present SOTOPIA, an open-ended environment to simulate complex social interactions between artificial agents and evaluate their social intelligence.

RFiD: Towards Rational Fusion-in-Decoder for Open-Domain Question Answering

1 code implementation • 26 May 2023 • Cunxiang Wang, Haofei Yu, Yue Zhang

Open-Domain Question Answering (ODQA) systems necessitate a reader model capable of generating answers by simultaneously referring to multiple passages.

Decoder • Natural Questions • +2
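The "referring to multiple passages simultaneously" pattern that RFiD builds on can be sketched in the Fusion-in-Decoder style: each passage is encoded together with the question independently, and the decoder then attends over the concatenation of all encoder outputs. The toy encoder below is a placeholder stand-in, not the paper's model.

```python
# Minimal sketch of the Fusion-in-Decoder reader pattern (assumed background
# for RFiD): encode each (question, passage) pair separately, then fuse the
# encodings for the decoder. The "encoder" here is a toy stand-in.

def encode(question: str, passage: str) -> list[float]:
    # Stand-in encoder: a fixed-size "embedding" from token counts.
    return [float(len(question.split())), float(len(passage.split()))]

def fuse_passages(question: str, passages: list[str]) -> list[float]:
    fused: list[float] = []
    for p in passages:          # each passage is encoded independently
        fused.extend(encode(question, p))
    return fused                # a real decoder would attend over `fused`

fused = fuse_passages(
    "Who wrote Hamlet?",
    ["Shakespeare wrote Hamlet.", "Hamlet is a tragedy."],
)
print(len(fused))  # 2 values per passage, 2 passages -> 4
```

Encoding passages independently keeps the encoder cost linear in the number of passages, while fusion lets the decoder draw on evidence from all of them at once.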

Uni-Encoder: A Fast and Accurate Response Selection Paradigm for Generation-Based Dialogue Systems

1 code implementation • 2 Jun 2021 • Chiyu Song, Hongliang He, Haofei Yu, Pengfei Fang, Leyang Cui, Zhenzhong Lan

The current state-of-the-art ranking methods mainly use an encoding paradigm called Cross-Encoder, which separately encodes each context-candidate pair and ranks the candidates according to their fitness scores.

Computational Efficiency • Conversational Response Selection
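The Cross-Encoder paradigm described above can be illustrated with a toy ranker: every (context, candidate) pair is scored independently, and candidates are ranked by score. The word-overlap scorer below is a stand-in for a real model, not the paper's method.

```python
# Toy illustration of the Cross-Encoder ranking paradigm: score each
# (context, candidate) pair separately, then sort by fitness score.
# The overlap-based scorer is a placeholder for a neural cross-encoder.

def score(context: str, candidate: str) -> float:
    ctx = set(context.lower().split())
    cand = set(candidate.lower().split())
    # Fraction of candidate words also present in the context.
    return len(ctx & cand) / max(len(cand), 1)

def rank(context: str, candidates: list[str]) -> list[str]:
    # A real Cross-Encoder runs one forward pass per pair, i.e. the
    # context is re-encoded for every candidate -- the per-pair cost
    # that Uni-Encoder aims to avoid.
    return sorted(candidates, key=lambda c: score(context, c), reverse=True)

print(rank("do you like coffee", ["I like coffee a lot", "the weather is nice"]))
```

Because the context is re-encoded once per candidate, Cross-Encoder cost grows with the candidate pool, which motivates encoding the context only once as in Uni-Encoder.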
