1 code implementation • 21 Apr 2025 • Yilun Zhou, Austin Xu, Peifeng Wang, Caiming Xiong, Shafiq Joty
Scaling test-time computation, or affording a generator large language model (LLM) extra compute during inference, typically relies on external non-generative evaluators (i.e., reward models).
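A common instantiation of this recipe is best-of-N sampling, where the generator proposes several candidates and the reward model keeps the highest-scoring one. A minimal sketch, with hypothetical `generate` and `reward` callables standing in for the actual models:

```python
from typing import Callable, List

def best_of_n(prompt: str,
              generate: Callable[[str], str],
              reward: Callable[[str, str], float],
              n: int = 8) -> str:
    """Scale test-time compute: sample n candidate responses, then return
    the one the external (non-generative) reward model scores highest."""
    candidates: List[str] = [generate(prompt) for _ in range(n)]
    return max(candidates, key=lambda resp: reward(prompt, resp))
```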
1 code implementation • 9 Oct 2024 • Yixin Liu, Kejian Shi, Alexander R. Fabbri, Yilun Zhao, Peifeng Wang, Chien-Sheng Wu, Shafiq Joty, Arman Cohan
The automatic evaluation of instruction following typically involves using large language models (LLMs) to assess response quality.
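Concretely, an LLM-as-a-judge setup prompts a judge model with the instruction and the response, then parses out a rating. A minimal sketch; the prompt template and the `call_llm` client are illustrative, not taken from the paper:

```python
import re

JUDGE_TEMPLATE = (
    "Instruction:\n{instruction}\n\n"
    "Response:\n{response}\n\n"
    "Rate how well the response follows the instruction on a 1-10 scale. "
    "Answer with 'Score: <number>'."
)

def judge(instruction: str, response: str, call_llm) -> int:
    """LLM-as-a-judge: ask a (hypothetical) judge model for a 1-10 rating
    and parse the numeric score out of its verdict."""
    verdict = call_llm(JUDGE_TEMPLATE.format(instruction=instruction,
                                             response=response))
    match = re.search(r"Score:\s*(\d+)", verdict)
    return int(match.group(1)) if match else 0
```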
no code implementations • 23 Sep 2024 • Peifeng Wang, Austin Xu, Yilun Zhou, Caiming Xiong, Shafiq Joty
Auto-evaluation is crucial for assessing response quality and providing feedback for model development.
1 code implementation • 3 May 2023 • Peifeng Wang, Zhengyang Wang, Zheng Li, Yifan Gao, Bing Yin, Xiang Ren
While chain-of-thought (CoT) prompting can yield dramatically improved performance, such gains are observed only for sufficiently large LMs.
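For readers unfamiliar with the technique, CoT prompting prepends worked reasoning exemplars so the model produces intermediate steps before its answer; a toy illustration (the exemplar below is made up):

```python
COT_PROMPT = """Q: A farm has 3 pens with 4 sheep each. How many sheep in total?
A: Each pen holds 4 sheep and there are 3 pens, so 3 * 4 = 12. The answer is 12.

Q: {question}
A:"""

def cot_prompt(question: str) -> str:
    """Few-shot chain-of-thought: one worked exemplar, then the new question."""
    return COT_PROMPT.format(question=question)
```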
1 code implementation • 3 Nov 2022 • Peifeng Wang, Aaron Chan, Filip Ilievski, Muhao Chen, Xiang Ren
Neural language models (LMs) have achieved impressive results on various language-based reasoning tasks by utilizing latent knowledge encoded in their own pretrained parameters.
1 code implementation • ICLR 2022 • Peifeng Wang, Jonathan Zamora, Junfeng Liu, Filip Ilievski, Muhao Chen, Xiang Ren
In this paper, we propose an Imagine-and-Verbalize (I&V) method, which learns to imagine a relational scene knowledge graph (SKG) with relations between the input concepts, and leverages the SKG as a constraint when generating a plausible scene description.
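Schematically, the pipeline looks like the sketch below; `imagine_skg` and `verbalize` are placeholders for the paper's two trained components, not its actual interfaces:

```python
def imagine_and_verbalize(concepts, imagine_skg, verbalize):
    """Two-stage generation: first 'imagine' a scene knowledge graph over
    the input concepts, then verbalize a description constrained by it."""
    skg = imagine_skg(concepts)      # e.g. [('dog', 'chases', 'ball'), ...]
    return verbalize(concepts, skg)  # the SKG acts as a generation constraint
```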
1 code implementation • Findings (ACL) 2021 • Peifeng Wang, Filip Ilievski, Muhao Chen, Xiang Ren
Inspired by evidence that pretrained language models (LMs) encode commonsense knowledge, recent work has applied LMs to automatically populate commonsense knowledge graphs (CKGs).
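A common way to populate a CKG with an LM is to verbalize candidate triples and rank them by LM likelihood. A sketch using Hugging Face transformers; the GPT-2 model and the naive verbalization template are illustrative choices, not the paper's:

```python
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

def triple_score(head: str, relation: str, tail: str) -> float:
    """Score a candidate commonsense triple by the LM's negative mean token
    NLL of a simple verbalization; higher means more plausible to the LM."""
    text = f"{head} {relation} {tail}."  # naive verbalization template
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = model(ids, labels=ids).loss  # mean token cross-entropy
    return -loss.item()
```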
1 code implementation • ICLR 2021 • Mrigank Raman, Aaron Chan, Siddhant Agarwal, Peifeng Wang, Hansen Wang, Sungchul Kim, Ryan Rossi, Handong Zhao, Nedim Lipka, Xiang Ren
Knowledge graphs (KGs) have helped neural models improve performance on various knowledge-intensive tasks, like question answering and item recommendation.
1 code implementation • EMNLP 2020 • Changlong Yu, Jialong Han, Peifeng Wang, Yangqiu Song, Hongming Zhang, Wilfred Ng, Shuming Shi
We also demonstrate that distributional methods are well suited to complement pattern-based ones in such cases.
1 code implementation • Findings (EMNLP) 2020 • Peifeng Wang, Nanyun Peng, Filip Ilievski, Pedro Szekely, Xiang Ren
In this paper, we augment a general commonsense QA framework with a knowledgeable path generator.
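The overall loop can be sketched as follows: generate multi-hop paths linking question concepts to each answer's concepts, then score the answer given those paths as knowledge context. All components are placeholders, not the paper's actual interfaces:

```python
from itertools import product

def answer_with_paths(question, answer_choices, extract_concepts,
                      generate_path, qa_model):
    """Knowledgeable-path-generator QA loop (schematic). For each answer
    choice, generate paths linking question concepts to answer concepts,
    then let the QA model score the choice given those paths as context."""
    q_concepts = extract_concepts(question)
    scores = {}
    for choice in answer_choices:
        a_concepts = extract_concepts(choice)
        paths = [generate_path(q, a)
                 for q, a in product(q_concepts, a_concepts)]
        scores[choice] = qa_model(question, choice, paths)
    return max(scores, key=scores.get)
```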
2 code implementations • EMNLP 2020 • Yanlin Feng, Xinyue Chen, Bill Yuchen Lin, Peifeng Wang, Jun Yan, Xiang Ren
Existing work on augmenting question answering (QA) models with external knowledge (e.g., knowledge graphs) either struggles to model multi-hop relations efficiently or lacks transparency into the model's prediction rationale.
1 code implementation • 4 Nov 2018 • Peifeng Wang, Jialong Han, Chenliang Li, Rong Pan
Recent efforts on this issue suggest training a neighborhood aggregator in conjunction with the conventional entity and relation embeddings, which may help embed new entities inductively via their existing neighbors.
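To make the idea concrete, an unseen entity can be embedded by aggregating the embeddings of its known (relation, neighbor) pairs. A minimal mean-pooling sketch in PyTorch; the aggregators the abstract refers to are learned and more expressive:

```python
import torch
import torch.nn as nn

class NeighborhoodAggregator(nn.Module):
    """Embed a new entity inductively: project each (relation, entity)
    neighbor pair and mean-pool the results into one entity vector."""
    def __init__(self, dim: int):
        super().__init__()
        self.proj = nn.Linear(2 * dim, dim)

    def forward(self, rel_embs: torch.Tensor, ent_embs: torch.Tensor):
        # rel_embs, ent_embs: (num_neighbors, dim)
        pairs = torch.cat([rel_embs, ent_embs], dim=-1)  # (n, 2 * dim)
        return self.proj(pairs).mean(dim=0)              # (dim,)
```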
no code implementations • 23 Sep 2018 • Peifeng Wang, Shuangyin Li, Rong Pan
In this GAN-based framework, we take advantage of a generator to obtain high-quality negative samples.
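The key move is to sample negatives from a learned generator rather than uniformly at random, so the embedding model trains against harder examples. A simplified sketch of the sampling step, with the generator's scoring of candidate entities left abstract:

```python
import torch

def sample_hard_negative(gen_scores: torch.Tensor) -> int:
    """Generator step: sample a negative entity in proportion to the
    generator's scores, so harder (higher-scoring) negatives are likelier
    than under uniform sampling. gen_scores: (num_candidates,) logits."""
    probs = torch.softmax(gen_scores, dim=0)
    return int(torch.multinomial(probs, num_samples=1))
```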