1 code implementation • 29 Oct 2024 • Kexun Zhang, Shang Zhou, Danqing Wang, William Yang Wang, Lei Li
To scale up inference efficiently with limited compute, it is crucial to find an optimal allocation of the sample compute budget: which sampling configurations (model, temperature, language, etc.)
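To make the allocation question concrete, here is a minimal, hypothetical sketch (not the paper's algorithm): each configuration is described by an assumed per-sample success rate and cost, and a fixed budget is spent greedily on the configuration with the best gain per unit cost.

```python
import math

def allocate(configs, budget):
    # Toy greedy allocation: rank configurations by per-cost gain in
    # log-probability of getting at least one correct sample
    # (-log(1 - p) / cost), then spend the remaining budget on each in turn.
    alloc = {name: 0 for name in configs}
    spent = 0.0
    ranked = sorted(
        configs,
        key=lambda n: -math.log(1 - configs[n][0]) / configs[n][1],
        reverse=True,
    )
    for name in ranked:
        p, cost = configs[name]
        n = int((budget - spent) // cost)
        alloc[name] = n
        spent += n * cost
    return alloc

# Hypothetical configurations: name -> (per-sample success rate, cost per sample).
configs = {"large-T0.8": (0.30, 4.0), "small-T1.0": (0.10, 1.0)}
print(allocate(configs, budget=20.0))
```

With these made-up numbers, many cheap samples beat a few expensive ones, which is the kind of trade-off the paper's question is about.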
no code implementations • 25 Oct 2024 • Danqing Wang, Zhuorui Ye, Fei Fang, Lei Li
However, the lack of effective cooperation between LLM agents hinders their performance, especially for multi-step reasoning tasks.
1 code implementation • 2 Oct 2024 • Danqing Wang, Jianxin Ma, Fei Fang, Lei Li
Despite significant advancements in the reasoning capabilities of Large Language Models (LLMs), the lack of diverse reasoning solutions often leaves them trapped in a limited solution search space.
1 code implementation • 19 Jun 2024 • Danqing Wang, Antonis Antoniades, Kha-Dinh Luong, Edwin Zhang, Mert Kosan, Jiachen Li, Ambuj Singh, William Yang Wang, Lei Li
RLHEX provides a flexible framework to incorporate different human-designed principles into the counterfactual explanation generation process, aligning these explanations with domain expertise.
no code implementations • 13 Oct 2023 • Hanlin Zhu, Andrew Cohen, Danqing Wang, Kevin Yang, Xiaomeng Yang, Jiantao Jiao, Yuandong Tian
Story plots, while short, carry most of the essential information of a full story that may contain tens of thousands of words.
no code implementations • 5 Oct 2023 • Danqing Wang, Kevin Yang, Hanlin Zhu, Xiaomeng Yang, Andrew Cohen, Lei Li, Yuandong Tian
Recent research has increasingly focused on evaluating large language models' (LLMs) alignment with diverse human values and preferences, particularly for open-ended tasks like story generation.
1 code implementation • NeurIPS 2023 • Kexun Zhang, Danqing Wang, Jingtao Xia, William Yang Wang, Lei Li
To address these challenges, we propose ALGO, a framework that synthesizes Algorithmic programs with LLM-Generated Oracles to guide the generation and verify their correctness.
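The idea of checking a generated program against a trusted reference can be sketched as follows. This is an illustrative example in the spirit of oracle-guided verification, not the paper's implementation: a fast candidate is compared against a slow but trusted brute-force oracle on many small random inputs.

```python
import random

def oracle_max_subarray(xs):
    # Exhaustive O(n^2) reference solution: trusted but slow.
    return max(sum(xs[i:j]) for i in range(len(xs)) for j in range(i + 1, len(xs) + 1))

def candidate_max_subarray(xs):
    # Efficient candidate (Kadane's algorithm) whose correctness we verify.
    best = cur = xs[0]
    for x in xs[1:]:
        cur = max(x, cur + x)
        best = max(best, cur)
    return best

def verify(candidate, oracle, trials=200):
    # Compare candidate and oracle on random small inputs.
    rng = random.Random(0)
    for _ in range(trials):
        xs = [rng.randint(-10, 10) for _ in range(rng.randint(1, 12))]
        if candidate(xs) != oracle(xs):
            return False, xs  # counterexample found
    return True, None

ok, counterexample = verify(candidate_max_subarray, oracle_max_subarray)
```

In ALGO the oracle itself is LLM-generated; here a hand-written brute force stands in for it.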
1 code implementation • 23 May 2023 • Danqing Wang, Lei Li
In this paper, we propose the Study Assistant for Large Language Models (SALAM), a novel framework with an auxiliary agent that assists the main LLM in learning from mistakes through interactive cooperation.
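A minimal, hypothetical sketch of such a mistake-driven loop (the functions below are stand-ins, not SALAM's actual interface): the assistant turns each failure into guidance that is stored and replayed on the next attempt.

```python
def solve_with_assistant(question, main_llm, assistant_feedback, memory, max_retries=2):
    # Reuse any advice recorded for this question from earlier failures.
    guidance = memory.get(question, "")
    answer = None
    for _ in range(max_retries + 1):
        answer, correct = main_llm(question, guidance)
        if correct:
            return answer
        # The assistant examines the mistake and produces corrective advice.
        guidance = assistant_feedback(question, answer)
        memory[question] = guidance  # remember the advice for future attempts
    return answer

# Toy stand-ins: this mock model only succeeds once guidance is available.
def main_llm(question, guidance):
    return ("42", True) if guidance else ("0", False)

def assistant_feedback(question, answer):
    return f"'{answer}' was wrong; re-check the arithmetic."

memory = {}
print(solve_with_assistant("What is 6 * 7?", main_llm, assistant_feedback, memory))
```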
2 code implementations • 23 May 2023 • Wenda Xu, Danqing Wang, Liangming Pan, Zhenqiao Song, Markus Freitag, William Yang Wang, Lei Li
By harnessing both explicit human instruction and the implicit knowledge of GPT-4, we fine-tune a text evaluation metric based on LLaMA, producing both a score for generated text and a human readable diagnostic report.
1 code implementation • 28 Jan 2023 • Danqing Wang, Fei Ye, Hao Zhou
The development of both general protein and antibody-specific pre-trained language models facilitates antibody prediction tasks.
1 code implementation • 28 Nov 2022 • Danqing Wang, Zeyu Wen, Fei Ye, Lei Li, Hao Zhou
By sampling in the latent space, LSSAMP can simultaneously generate peptides with ideal sequence attributes and secondary structures.
no code implementations • 21 Oct 2021 • Danqing Wang, Jiaze Chen, Xianze Wu, Hao Zhou, Lei Li
In this paper, we present a large-scale Chinese news summarization dataset, CNewSum, which consists of 304,307 documents and human-written summaries for the news feed.
no code implementations • 29 Sep 2021 • Danqing Wang, Zeyu Wen, Lei Li, Hao Zhou
By sampling in the latent secondary structure space, we can generate peptides with ideal amino acids and secondary structures at the same time.
1 code implementation • Findings (NAACL) 2022 • Yiran Chen, Zhenqiao Song, Xianze Wu, Danqing Wang, Jingjing Xu, Jiaze Chen, Hao Zhou, Lei Li
We introduce MTG, a new benchmark suite for training and evaluating multilingual text generation.
1 code implementation • 7 Apr 2021 • Chenxin An, Ming Zhong, Yiran Chen, Danqing Wang, Xipeng Qiu, Xuanjing Huang
Previous work on text summarization in the scientific domain has mainly focused on the content of the input document, seldom considering its citation network.
2 code implementations • Findings of the Association for Computational Linguistics 2020 • Yiran Chen, Pengfei Liu, Ming Zhong, Zi-Yi Dou, Danqing Wang, Xipeng Qiu, Xuanjing Huang
In this paper, we perform an in-depth analysis of characteristics of different datasets and investigate the performance of different summarization models under a cross-dataset setting, in which a summarizer trained on one corpus will be evaluated on a range of out-of-domain corpora.
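The cross-dataset setting described above reduces to a simple train/evaluate grid. A minimal sketch with hypothetical stand-in `train` and `evaluate` functions (real summarization training would replace them):

```python
def cross_dataset_eval(corpora, train, evaluate):
    # Train on each corpus in turn, then score the resulting model on
    # every corpus, including the out-of-domain ones.
    results = {}
    for src, splits in corpora.items():
        model = train(splits["train"])
        for tgt, tgt_splits in corpora.items():
            results[(src, tgt)] = evaluate(model, tgt_splits["test"])
    return results

# Toy stand-ins so the grid is runnable end to end.
corpora = {
    "cnndm": {"train": "cnndm-train", "test": "cnndm-test"},
    "xsum": {"train": "xsum-train", "test": "xsum-test"},
}
train = lambda data: data              # "model" is just its training corpus
evaluate = lambda model, test: (model, test)
results = cross_dataset_eval(corpora, train, evaluate)
```

The off-diagonal cells of `results` are exactly the out-of-domain evaluations the paper analyzes.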
1 code implementation • ACL 2020 • Danqing Wang, Pengfei Liu, Yining Zheng, Xipeng Qiu, Xuanjing Huang
An intuitive approach is to incorporate them into a graph-based neural network, whose more expressive structure can capture inter-sentence relationships.
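As a generic illustration of a sentence graph (not the paper's heterogeneous graph), sentences can be treated as nodes with edges linking pairs whose lexical overlap passes a threshold:

```python
def sentence_graph(sentences, threshold=0.2):
    # Nodes are sentence indices; an edge links two sentences whose
    # Jaccard word overlap meets the threshold.
    token_sets = [set(s.lower().split()) for s in sentences]
    edges = []
    for i in range(len(token_sets)):
        for j in range(i + 1, len(token_sets)):
            overlap = len(token_sets[i] & token_sets[j]) / len(token_sets[i] | token_sets[j])
            if overlap >= threshold:
                edges.append((i, j, round(overlap, 2)))
    return edges

docs = [
    "the model ranks sentences by salience",
    "the model ranks sentences by length",
    "training uses a cross entropy loss",
]
print(sentence_graph(docs))
```

A graph neural network would then propagate sentence representations along such edges; the paper's model builds a richer graph than this word-overlap toy.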
2 code implementations • ACL 2020 • Ming Zhong, Pengfei Liu, Yiran Chen, Danqing Wang, Xipeng Qiu, Xuanjing Huang
This paper presents a paradigm shift in the way we build neural extractive summarization systems.
Ranked #1 on Text Summarization on BBC XSum
no code implementations • WS 2019 • Ming Zhong, Danqing Wang, Pengfei Liu, Xipeng Qiu, Xuanjing Huang
In this paper, we take stock of the current state of summarization datasets and explore how different factors of datasets influence the generalization behaviour of neural extractive summarization models.
no code implementations • 30 Aug 2019 • Danqing Wang, Pengfei Liu, Ming Zhong, Jie Fu, Xipeng Qiu, Xuanjing Huang
Although domain shift has been well explored in many NLP applications, it has received little attention in extractive text summarization.
2 code implementations • ACL 2019 • Ming Zhong, Pengfei Liu, Danqing Wang, Xipeng Qiu, Xuanjing Huang
Recent years have seen remarkable success in the use of deep neural networks for text summarization.
Ranked #6 on Extractive Text Summarization on CNN / Daily Mail