no code implementations • 14 Apr 2024 • Xiaoshu Chen, Sihang Zhou, Ke Liang, Xinwang Liu
Chain-of-thought finetuning aims to endow small student models with reasoning capacity, improving their performance on a specific task by letting them imitate the reasoning procedure of large language models (LLMs) rather than simply predicting the answer to the question.
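As a generic illustration of the idea (an assumed data format, not the paper's actual pipeline), chain-of-thought finetuning typically replaces plain (question, answer) targets with targets that include an LLM-generated rationale before the final answer:

```python
# Sketch of building a chain-of-thought finetuning example for a small
# student model. The rationale string is assumed to come from a large
# teacher LLM; the exact prompt/target format is hypothetical.

def make_cot_example(question, rationale, answer):
    """Pack the teacher's rationale and the answer into one training target."""
    return {
        "input": question,
        "target": f"{rationale} Therefore, the answer is {answer}.",
    }

ex = make_cot_example(
    "If a pen costs 2 dollars, how much do 3 pens cost?",
    "Each pen costs 2 dollars, so 3 pens cost 3 * 2 = 6 dollars.",
    "6 dollars",
)
```

Training the student on `target` rather than on the bare answer is what lets it imitate the teacher's reasoning procedure instead of only its final prediction.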
1 code implementation • 27 Feb 2023 • Xiaoshu Chen, Xiangsheng Li, Kunliang Wei, Bin Hu, Lei Jiang, Zeqian Huang, Zhanhui Kang
Accurately eliminating examination bias is pivotal to applying click-through data to train an unbiased ranking model.
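For context (a standard technique from unbiased learning-to-rank, not necessarily the method of this paper), examination bias is commonly corrected with inverse propensity weighting: each click is reweighted by the inverse probability that its position was examined at all. The propensity values below are hypothetical.

```python
# Minimal inverse-propensity-weighting sketch for click data.
# Assumes a simple position-based examination model: users examine
# rank 0 with probability 1.0, rank 1 with 0.5, rank 2 with 0.25.

def ipw_weights(clicked_positions, propensities):
    """Weight each click by the inverse of its examination propensity."""
    return [1.0 / propensities[p] for p in clicked_positions]

propensities = {0: 1.0, 1: 0.5, 2: 0.25}  # hypothetical values
clicked_positions = [0, 1, 2, 1]

weights = ipw_weights(clicked_positions, propensities)
# Clicks at lower ranks get larger weights, compensating for the lower
# chance that those results were examined in the first place.
```

Weighting the ranking loss this way makes its expectation (under the examination model) match the loss one would compute from true relevance labels, which is why estimating propensities accurately matters.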
1 code implementation • 27 Feb 2023 • Xiangsheng Li, Xiaoshu Chen, Kunliang Wei, Bin Hu, Lei Jiang, Zeqian Huang, Zhanhui Kang
Pre-trained language models have achieved great success in various large-scale information retrieval tasks.
no code implementations • 10 Oct 2020 • Yanwen Chong, Congchong Nie, Yulong Tao, Xiaoshu Chen, Shaoming Pan
To address this problem, we propose a hierarchical context network that models homogeneous pixels with strong correlations and heterogeneous pixels with weak correlations differently.