1 code implementation • 14 Mar 2025 • Xinyu Tang, Xiaolei Wang, Zhihao Lv, Yingqian Min, Wayne Xin Zhao, Binbin Hu, Ziqi Liu, Zhiqiang Zhang
This motivates us to investigate whether long chain-of-thought (CoT) reasoning is a general capability of LLMs.
1 code implementation • 28 Jan 2025 • Xiaolei Wang, Tomasz Woźniak
The R package bsvarSIGNs implements state-of-the-art algorithms for the Bayesian analysis of Structural Vector Autoregressions identified by sign, zero, and narrative restrictions.
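The package itself is written in R; purely as a hedged illustration of the accept/reject idea behind sign identification (not the bsvarSIGNs API or its exact algorithm), the Python sketch below draws orthogonal rotations of a reduced-form VAR's shock factor and keeps only those whose impact responses satisfy user-specified sign restrictions. The covariance matrix `Sigma` and the restriction pattern are made-up toy inputs.

```python
import numpy as np

def draw_rotation(n, rng):
    # Haar-distributed orthogonal matrix via QR of a Gaussian draw
    q, r = np.linalg.qr(rng.standard_normal((n, n)))
    return q * np.sign(np.diag(r))  # fix column signs so the draw is uniform

def sign_identified_draws(Sigma, sign_pattern, n_keep=10, seed=0):
    """Accept/reject sampler for impact matrices B0 = chol(Sigma) @ Q whose
    signs match `sign_pattern` (+1, -1, or 0 for 'unrestricted')."""
    rng = np.random.default_rng(seed)
    P = np.linalg.cholesky(Sigma)          # any factor of Sigma works here
    n = Sigma.shape[0]
    accepted = []
    while len(accepted) < n_keep:
        B0 = P @ draw_rotation(n, rng)     # candidate impact responses
        if np.all((sign_pattern == 0) | (np.sign(B0) == sign_pattern)):
            accepted.append(B0)
    return np.array(accepted)

# toy 2-variable example: shock 1 must move both variables up on impact
Sigma = np.array([[1.0, 0.3], [0.3, 0.5]])
pattern = np.array([[1, 0], [1, 0]])
draws = sign_identified_draws(Sigma, pattern)
print(draws.shape)  # (10, 2, 2)
```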
no code implementations • 31 Dec 2024 • Xiaolei Wang, Xiaoyang Wang, Huihui Bai, Eng Gee Lim, Jimin Xiao
Existing unsupervised distillation-based methods rely on the differences between encoded and decoded features to locate abnormal regions in test images.
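As a minimal sketch of that idea (not the paper's model), the PyTorch snippet below scores each spatial location by the cosine distance between an encoded (teacher) feature map and its decoded (student) counterpart, then upsamples the result to an image-sized anomaly map; the multi-scale feature tensors are assumed inputs from some pretrained networks.

```python
import torch
import torch.nn.functional as F

def anomaly_map(encoded_feats, decoded_feats, image_size):
    """Localize anomalies from encoder/decoder feature discrepancies.

    encoded_feats, decoded_feats: lists of (B, C, H, W) tensors, e.g.
    multi-scale features from a frozen teacher and a trained student.
    Returns a (B, 1, *image_size) map; higher values = more anomalous.
    """
    maps = []
    for enc, dec in zip(encoded_feats, decoded_feats):
        # 1 - cosine similarity per spatial location
        dist = 1.0 - F.cosine_similarity(enc, dec, dim=1, eps=1e-6)  # (B, H, W)
        dist = F.interpolate(dist.unsqueeze(1), size=image_size,
                             mode="bilinear", align_corners=False)
        maps.append(dist)
    # average the multi-scale maps into a single per-pixel anomaly score
    return torch.stack(maps, dim=0).mean(dim=0)

# toy usage with random tensors standing in for real network features
enc = [torch.randn(2, 64, 32, 32), torch.randn(2, 128, 16, 16)]
dec = [torch.randn(2, 64, 32, 32), torch.randn(2, 128, 16, 16)]
amap = anomaly_map(enc, dec, image_size=(256, 256))
print(amap.shape)  # torch.Size([2, 1, 256, 256])
```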
1 code implementation • 26 Oct 2024 • Xinyu Tang, Xiaolei Wang, Wayne Xin Zhao, Ji-Rong Wen
Existing approaches assume that the problems come from the same task and traverse them in a random order.
1 code implementation • 28 Jun 2024 • Yutao Zhu, Kun Zhou, Kelong Mao, Wentong Chen, Yiding Sun, Zhipeng Chen, Qian Cao, Yihan Wu, Yushuo Chen, Feng Wang, Lei Zhang, Junyi Li, Xiaolei Wang, Lei Wang, Beichen Zhang, Zican Dong, Xiaoxue Cheng, Yuhan Chen, Xinyu Tang, Yupeng Hou, Qiangqiang Ren, Xincheng Pang, Shufang Xie, Wayne Xin Zhao, Zhicheng Dou, Jiaxin Mao, Yankai Lin, Ruihua Song, Jun Xu, Xu Chen, Rui Yan, Zhewei Wei, Di Hu, Wenbing Huang, Ze-Feng Gao, Yueguo Chen, Weizheng Lu, Ji-Rong Wen
This paper presents the development of YuLan, a series of open-source LLMs with 12 billion parameters.
1 code implementation • 20 Jun 2024 • Xiaolei Wang, Xinyu Tang, Wayne Xin Zhao, Ji-Rong Wen
The emergence of in-context learning (ICL) is potentially attributed to two major abilities: task recognition (TR) for recognizing the task from demonstrations and utilizing pre-trained priors, and task learning (TL) for learning from demonstrations.
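One common way to separate the two abilities, used here only as a hedged illustration rather than the paper's exact protocol, is to compare prompts whose demonstrations carry gold labels against prompts whose labels are randomly shuffled: a model relying mainly on task recognition is largely unaffected, while task learning degrades. The sketch below just builds the two prompt variants; `demos` and the label set are toy data.

```python
import random

def build_prompt(demos, query, shuffle_labels=False, seed=0):
    """Format ICL demonstrations; optionally break input-label mappings.

    With shuffle_labels=True the prompt still reveals the task format and
    label space (enough for task recognition) but no longer provides
    correct mappings to learn from (removing the task-learning signal).
    """
    labels = [label for _, label in demos]
    if shuffle_labels:
        random.Random(seed).shuffle(labels)
    lines = [f"Review: {text}\nSentiment: {label}"
             for (text, _), label in zip(demos, labels)]
    lines.append(f"Review: {query}\nSentiment:")
    return "\n\n".join(lines)

demos = [("Great movie, loved it.", "positive"),
         ("Terrible plot and acting.", "negative"),
         ("A delightful surprise.", "positive")]
query = "I would not watch this again."
print(build_prompt(demos, query, shuffle_labels=False))  # probes TR + TL
print(build_prompt(demos, query, shuffle_labels=True))   # probes mostly TR
```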
no code implementations • 11 Mar 2024 • Xiaolei Wang, Chen Yang, Yuzhen Feng, Luohan Hu, Zhengbing He
For on-demand dynamic ride-pooling services, e.g., Uber Pool and Didi Pinche, a well-designed vehicle dispatching strategy is crucial for platform profitability and passenger experience.
1 code implementation • 27 Feb 2024 • Xinyu Tang, Xiaolei Wang, Wayne Xin Zhao, Siyuan Lu, Yaliang Li, Ji-Rong Wen
By systematically analyzing a rich set of improvement strategies on the two aspects, we further develop a capable Gradient-inspired LLM-based Prompt Optimizer called GPO.
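The snippet below is a heavily simplified sketch of a gradient-inspired prompt-optimization loop, not the GPO algorithm itself: an LLM is asked for textual feedback ("gradients") on the current prompt's failures and then for an edited prompt, and the candidate scoring best on a dev set is kept. `call_llm` and `evaluate` are hypothetical stand-ins for an actual model API and task metric.

```python
def optimize_prompt(prompt, dev_set, call_llm, evaluate, steps=5):
    """Gradient-inspired textual prompt optimization (simplified sketch).

    call_llm(text) -> str       : hypothetical LLM API
    evaluate(prompt, examples)  : hypothetical accuracy in [0, 1]
    """
    best_prompt, best_score = prompt, evaluate(prompt, dev_set)
    for _ in range(steps):
        # examples the current prompt gets wrong play the role of the "loss"
        errors = [x for x in dev_set if evaluate(best_prompt, [x]) == 0]
        if not errors:
            break
        # 1) ask for a textual "gradient": why does the prompt fail here?
        feedback = call_llm(
            f"Prompt:\n{best_prompt}\n\nIt fails on:\n{errors[:3]}\n"
            "Briefly explain what is wrong with the prompt.")
        # 2) apply the "update": edit the prompt according to the feedback
        candidate = call_llm(
            f"Prompt:\n{best_prompt}\n\nFeedback:\n{feedback}\n"
            "Rewrite the prompt to fix these issues. Return only the new prompt.")
        score = evaluate(candidate, dev_set)
        if score > best_score:            # keep the best candidate (greedy step)
            best_prompt, best_score = candidate, score
    return best_prompt, best_score
```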
1 code implementation • 26 Feb 2024 • Tianyi Tang, Wenyang Luo, Haoyang Huang, Dongdong Zhang, Xiaolei Wang, Xin Zhao, Furu Wei, Ji-Rong Wen
Large language models (LLMs) demonstrate remarkable multilingual capabilities without being pre-trained on specially curated multilingual parallel corpora.
1 code implementation • 3 Feb 2024 • Ruotian Ma, Xiaolei Wang, Xin Zhou, Jian Li, Nan Du, Tao Gui, Qi Zhang, Xuanjing Huang
Despite the success, the underlying mechanism of this approach remains unexplored, and the true effectiveness of LLMs as Prompt Optimizers requires further validation.
1 code implementation • 26 Aug 2023 • Jianqiang Xia, Dianxi Shi, Ke Song, Linna Song, Xiaolei Wang, Songchang Jin, Li Zhou, Yu Cheng, Lei Jin, Zheng Zhu, Jianan Li, Gang Wang, Junliang Xing, Jian Zhao
With this structure, the network can extract fused features of the template and search region through mutual interaction between the modalities.
Ranked #5 on RGB-T Tracking on GTOT
1 code implementation • 21 Jul 2023 • Zhipeng Zhao, Kun Zhou, Xiaolei Wang, Wayne Xin Zhao, Fan Pan, Zhao Cao, Ji-Rong Wen
Conversational recommender systems (CRS) aim to provide the recommendation service via natural language conversations.
1 code implementation • 5 Jun 2023 • Xiaolei Wang, Kun Zhou, Xinyu Tang, Wayne Xin Zhao, Fan Pan, Zhao Cao, Ji-Rong Wen
To develop our approach, we characterize user preference and organize the conversation flow by the entities involved in the dialogue, and design a multi-stage recommendation dialogue simulator based on a conversation flow language model.
1 code implementation • 22 May 2023 • Xiaolei Wang, Xinyu Tang, Wayne Xin Zhao, Jingyuan Wang, Ji-Rong Wen
The recent success of large language models (LLMs) has shown great potential to develop more powerful conversational recommender systems (CRSs), which rely on natural language conversations to satisfy user needs.
6 code implementations • 31 Mar 2023 • Wayne Xin Zhao, Kun Zhou, Junyi Li, Tianyi Tang, Xiaolei Wang, Yupeng Hou, Yingqian Min, Beichen Zhang, Junjie Zhang, Zican Dong, Yifan Du, Chen Yang, Yushuo Chen, Zhipeng Chen, Jinhao Jiang, Ruiyang Ren, YiFan Li, Xinyu Tang, Zikang Liu, Peiyu Liu, Jian-Yun Nie, Ji-Rong Wen
To mark this difference in parameter scale, the research community has coined the term large language models (LLMs) for PLMs of significant size.
1 code implementation • 19 Jun 2022 • Xiaolei Wang, Kun Zhou, Ji-Rong Wen, Wayne Xin Zhao
Our approach unifies the recommendation and conversation subtasks under the prompt learning paradigm, and utilizes knowledge-enhanced prompts based on a fixed pre-trained language model (PLM) to fulfill both subtasks within a single framework.
Ranked #1 on Text Generation on ReDial
1 code implementation • ACL 2021 • Kun Zhou, Xiaolei Wang, Yuanhang Zhou, Chenzhan Shang, Yuan Cheng, Wayne Xin Zhao, Yaliang Li, Ji-Rong Wen
In recent years, conversational recommender systems (CRS) have received much attention in the research community.
no code implementations • 31 Dec 2019 • Tianhan Liu, Xiaolei Wang, Hailong Wang, Gang Shi, Fan Gao, Honglei Feng, Haoyun Deng, Longqian Hu, Eric Lochner, Pedro Schlottmann, Stephan von Molnár, Yongqing Li, Jianhua Zhao, Peng Xiong
Electrical generation of polarized spins in nonmagnetic materials is of great interest for the underlying physics and device potential.
Applied Physics • Strongly Correlated Electrons