1 code implementation • 9 Jun 2024 • Fangxu Yu, Lai Jiang, Haoqiang Kang, Shibo Hao, Lianhui Qin
To fill this gap, we propose Flow of Reasoning (FoR), an efficient diversity-seeking LLM finetuning method aimed at improving reasoning quality and diversity with minimal data.
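The entry does not spell out FoR's training objective. As a hedged illustration only: one common family of diversity-seeking finetuning objectives is the GFlowNet trajectory-balance loss, which trains a policy to sample solutions in proportion to their reward rather than collapsing onto a single high-probability one. The sketch below assumes reasoning trajectories scored by a terminal reward; all names and values are hypothetical, not taken from the paper.

```python
import torch

# Learned scalar estimate of the log partition function log Z,
# optimized jointly with the policy (the LLM being finetuned).
log_z = torch.zeros((), requires_grad=True)

def trajectory_balance_loss(log_pf: torch.Tensor,
                            log_reward: torch.Tensor) -> torch.Tensor:
    """Squared trajectory-balance residual (log Z + log P_F(tau) - log R(tau))^2.

    Driving this to zero makes the policy sample trajectories with
    probability proportional to reward, which favors a *diverse* set of
    correct reasoning paths over a single greedy one.
    """
    return (log_z + log_pf - log_reward) ** 2

# Toy example: log_pf is the summed token log-probability the model
# assigned to one sampled reasoning trajectory; the reward is 1.0 for a
# correct final answer (hypothetical reward design).
log_pf = torch.tensor(-42.7, requires_grad=True)
log_r = torch.log(torch.tensor(1.0))
loss = trajectory_balance_loss(log_pf, log_r)
loss.backward()  # gradients reach both the policy side and log Z
```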
2 code implementations • 20 Feb 2024 • Qianqian Xie, Weiguang Han, Zhengyu Chen, Ruoyu Xiang, Xiao Zhang, Yueru He, Mengxi Xiao, Dong Li, Yongfu Dai, Duanyu Feng, Yijing Xu, Haoqiang Kang, Ziyan Kuang, Chenhan Yuan, Kailai Yang, Zheheng Luo, Tianlin Zhang, Zhiwei Liu, Guojun Xiong, Zhiyang Deng, Yuechen Jiang, Zhiyuan Yao, Haohang Li, Yangyang Yu, Gang Hu, Jiajia Huang, Xiao-Yang Liu, Alejandro Lopez-Lira, Benyou Wang, Yanzhao Lai, Hao Wang, Min Peng, Sophia Ananiadou, Jimin Huang
Our evaluation of 15 representative LLMs, including GPT-4, ChatGPT, and the latest Gemini, reveals several key findings: while LLMs excel in information extraction (IE) and textual analysis, they struggle with advanced reasoning and complex tasks such as text generation and forecasting.
no code implementations • 16 Feb 2024 • Haoqiang Kang, Terra Blevins, Luke Zettlemoyer
While many hallucination detection techniques have been evaluated on English text, their effectiveness in multilingual contexts remains unknown.
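The entry names no specific detection technique. As a minimal sketch of why an English-validated detector may not transfer across languages, the lexical-overlap heuristic below (a hypothetical stand-in, not a metric from the paper) relies on whitespace tokenization, which already breaks for languages written without word separators.

```python
def lexical_overlap_score(generated: str, reference: str) -> float:
    """Fraction of generated tokens that also appear in the reference.

    A low score flags possible hallucination. Whitespace tokenization is
    a simplification that fails for languages written without spaces
    (e.g. Chinese or Japanese), where the whole sentence becomes one
    "token" and the score degenerates to 0 or 1.
    """
    gen_tokens = generated.lower().split()
    ref_tokens = set(reference.lower().split())
    if not gen_tokens:
        return 0.0
    return sum(tok in ref_tokens for tok in gen_tokens) / len(gen_tokens)

# English: fine-grained score.  Chinese: the heuristic sees single tokens.
print(lexical_overlap_score("the cat sat on the mat", "a cat on a mat"))
print(lexical_overlap_score("猫坐在垫子上", "垫子上有一只猫"))
```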
no code implementations • 27 Nov 2023 • Haoqiang Kang, Xiao-Yang Liu
In this paper, we provide an empirical examination of LLMs' hallucination behaviors in financial tasks.
1 code implementation • 15 Nov 2023 • Haoqiang Kang, Juntong Ni, Huaxiu Yao
Large Language Models (LLMs) have demonstrated remarkable proficiency in generating fluent text.
no code implementations • 26 Apr 2023 • Haoqiang Kang, Terra Blevins, Luke Zettlemoyer
To better understand this contrast, we present a new study investigating how well pretrained language models (PLMs) capture cross-lingual word sense with Contextual Word-Level Translation (C-WLT), an extension of word-level translation that prompts the model to translate a given word in context.
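Based on the description above, a C-WLT query asks the model to translate one word as it is used in a specific sentence, so the surrounding context disambiguates the word's sense. The minimal sketch below illustrates such a prompt; the exact template wording is an assumption, not the paper's.

```python
def cwlt_prompt(sentence: str, word: str, target_lang: str) -> str:
    """Build a C-WLT-style query: translate `word` *as used in* `sentence`.

    The context should steer the model toward the sense-appropriate
    translation. Template wording is hypothetical.
    """
    return (
        f'Sentence: "{sentence}"\n'
        f'In this sentence, the word "{word}" translates into '
        f"{target_lang} as:"
    )

# "bank" is ambiguous; this context should select the financial sense
# (French "banque" rather than "rive").
print(cwlt_prompt("She deposited the check at the bank.", "bank", "French"))
```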