no code implementations • 19 Feb 2025 • Jiahao Liu, Dongsheng Li, Hansu Gu, Peng Zhang, Tun Lu, Li Shang, Ning Gu
Recommender systems often suffer from popularity bias, where frequently interacted items are overrepresented in recommendations.
no code implementations • 19 Feb 2025 • Jiahao Liu, Shengkang Gu, Dongsheng Li, Guangping Zhang, Mingzhe Han, Hansu Gu, Peng Zhang, Tun Lu, Li Shang, Ning Gu
Large Language Model (LLM)-based user agents have emerged as a powerful tool for improving recommender systems by simulating user interactions.
no code implementations • 19 Feb 2025 • Jiahao Liu, Xueshuo Yan, Dongsheng Li, Guangping Zhang, Hansu Gu, Peng Zhang, Tun Lu, Li Shang, Ning Gu
Current recommendation systems powered by large language models (LLMs) often underutilize their reasoning capabilities due to a lack of explicit logical structuring.
no code implementations • 5 Nov 2024 • Bei Li, Tong Zheng, Rui Wang, Jiahao Liu, Qingyan Guo, Junliang Guo, Xu Tan, Tong Xiao, Jingbo Zhu, Jingang Wang, Xunliang Cai
First, we introduce a predictor-corrector learning framework to minimize truncation errors, which consists of a high-order predictor and a multistep corrector.
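As a rough illustration of the predictor-corrector view (treating a residual layer as one step of an ODE solver), the sketch below applies Heun's method to a generic layer function F. The paper's high-order predictor and multistep corrector are more elaborate; F, the shapes, and the solver choice here are illustrative assumptions.

```python
import torch

def heun_block(x, F):
    """One predictor-corrector update of the ODE view of a residual
    layer x' = F(x): predict with an explicit Euler step, then correct
    by averaging slopes (Heun's method), reducing truncation error
    versus the plain residual update x + F(x)."""
    x_pred = x + F(x)                    # predictor: Euler step
    return x + 0.5 * (F(x) + F(x_pred))  # corrector: trapezoidal rule

# usage with a toy layer
F = torch.nn.Sequential(torch.nn.Linear(16, 16), torch.nn.Tanh())
x = torch.randn(4, 16)
y = heun_block(x, F)
```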
no code implementations • 27 Oct 2024 • Pengfei Wu, Jiahao Liu, Zhuocheng Gong, Qifan Wang, Jinpeng Li, Jingang Wang, Xunliang Cai, Dongyan Zhao
Recent advancements in Large Language Models (LLMs) have shown remarkable performance across a wide range of tasks.
no code implementations • 7 Oct 2024 • Jiahao Liu, YiYang Shao, Peng Zhang, Dongsheng Li, Hansu Gu, Chao Chen, Longzhi Du, Tun Lu, Ning Gu
Personalized algorithms can inadvertently expose users to discomforting recommendations, potentially triggering negative consequences.
2 code implementations • 24 Sep 2024 • Taowen Wang, Yiyang Liu, James Chenhao Liang, Junhan Zhao, Yiming Cui, Yuning Mao, Shaoliang Nie, Jiahao Liu, Fuli Feng, Zenglin Xu, Cheng Han, Lifu Huang, Qifan Wang, Dongfang Liu
Instruction tuning has emerged as an effective strategy for achieving zero-shot generalization by finetuning pretrained models on diverse multimodal tasks.
1 code implementation • 28 Aug 2024 • Danlong Yuan, Jiahao Liu, Bei Li, Huishuai Zhang, Jingang Wang, Xunliang Cai, Dongyan Zhao
While the Mamba architecture demonstrates superior inference efficiency and competitive performance on short-context natural language processing (NLP) tasks, empirical evidence suggests its capacity to comprehend long contexts is limited compared to transformer-based models.
no code implementations • 26 Aug 2024 • Bohan Li, Jiahao Liu, Yifeng Xiong, Junsheng Mu, Pei Xiao, Sheng Chen
Once the UAV passes the initial operating position, its trajectory and resource allocation are optimized over the mission period to maximize the end-to-end communication rate under a minimum sensing QoS constraint.
no code implementations • 24 Jul 2024 • Lyuzhou Luo, Hao Wu, Jiahao Liu, Keshuang Tang, Chaopeng Tan
Finally, to fully leverage the LPR data, we extend our approach to multi-lane scenarios, where the problem can be converted to a weighted general exact coverage problem and solved by a backtracking algorithm with heuristics.
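The weighted exact-cover reduction can be made concrete with a small backtracking sketch; the subsets, weights, and pivot choice below are illustrative and not the authors' implementation or heuristics.

```python
def max_weight_exact_cover(universe, subsets):
    """Backtracking search for an exact cover of `universe` by disjoint
    subsets (list of (frozenset, weight) pairs) maximizing total weight.
    Illustrative brute force; the paper adds domain-specific heuristics."""
    best = [None, float("-inf")]

    def search(uncovered, chosen, weight):
        if not uncovered:                      # every element covered
            if weight > best[1]:
                best[:] = [list(chosen), weight]
            return
        pivot = min(uncovered)                 # branch on one uncovered element
        for s, w in subsets:
            if pivot in s and s <= uncovered:  # disjointness preserved
                chosen.append(s)
                search(uncovered - s, chosen, weight + w)
                chosen.pop()

    search(frozenset(universe), [], 0.0)
    return best

cover, weight = max_weight_exact_cover(
    {1, 2, 3, 4},
    [(frozenset({1, 2}), 1.0), (frozenset({3, 4}), 2.0),
     (frozenset({1, 2, 3, 4}), 2.5)],
)  # -> [{1, 2}, {3, 4}] with weight 3.0
```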
no code implementations • 23 Jul 2024 • Zhuocheng Gong, Jiahao Liu, Ziyue Wang, Pengfei Wu, Jingang Wang, Xunliang Cai, Dongyan Zhao, Rui Yan
We apply GSD across a range of LLMs, including a 70-billion-parameter LLaMA-2 model, and observe a remarkable speedup of 1.73$\times$ to 1.96$\times$, significantly surpassing standard speculative decoding.
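For context, a minimal greedy version of the standard speculative decoding loop that GSD builds on might look like the following. The Hugging Face-style `.logits` model interface and batch size 1 are assumptions; GSD's graph-structured drafting is not shown.

```python
import torch

@torch.no_grad()
def speculative_step(target, draft, ids, k=4):
    """One greedy speculative-decoding step: a small draft model proposes
    k tokens, the large target model verifies them in a single forward
    pass, and tokens are kept up to the first disagreement (plus the
    target's own next token). Assumes batch size 1."""
    proposal = ids
    for _ in range(k):  # autoregressive drafting with the small model
        nxt = draft(proposal).logits[:, -1].argmax(-1, keepdim=True)
        proposal = torch.cat([proposal, nxt], dim=-1)
    # verify all drafted positions at once with the target model
    verified = target(proposal).logits[:, ids.shape[1] - 1 :].argmax(-1)
    drafted = proposal[:, ids.shape[1]:]
    match = (verified[:, :k] == drafted).long().cumprod(-1).sum().item()
    return torch.cat([ids, verified[:, : match + 1]], dim=-1)
```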
no code implementations • 10 Jun 2024 • Li Yang, Qifan Wang, Jianfeng Chi, Jiahao Liu, Jingang Wang, Fuli Feng, Zenglin Xu, Yi Fang, Lifu Huang, Dongfang Liu
Specifically, we employ a heavy encoder to separately encode the product context and attribute.
no code implementations • 6 Jun 2024 • Jiahao Liu, Qifan Wang, Jingang Wang, Xunliang Cai
The recent advancements in large language models (LLMs) have been extraordinary, yet the escalating inference costs associated with them present challenges in real-world applications.
no code implementations • 18 Apr 2024 • Pengfei Wu, Jiahao Liu, Zhuocheng Gong, Qifan Wang, Jinpeng Li, Jingang Wang, Xunliang Cai, Dongyan Zhao
In this paper, we propose a novel parallel decoding approach, namely \textit{hidden transfer}, which decodes multiple successive tokens simultaneously in a single forward pass.
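A toy reading of the hidden-transfer idea, under the assumption that pseudo hidden states for future positions are projected from the last context token's intermediate state (the paper's actual transfer mechanism may differ):

```python
import torch
import torch.nn as nn

class HiddenTransferHead(nn.Module):
    """Toy hidden-transfer module: from the intermediate hidden state of
    the last context token, project k pseudo hidden states for future
    positions so the remaining layers can refine them within the same
    forward pass. Shapes and the linear projections are illustrative
    assumptions, not the paper's exact design."""
    def __init__(self, d_model, k):
        super().__init__()
        self.proj = nn.ModuleList(nn.Linear(d_model, d_model) for _ in range(k))

    def forward(self, h_last):  # h_last: [batch, d_model]
        # one pseudo state per future position: [batch, k, d_model]
        return torch.stack([p(h_last) for p in self.proj], dim=1)
```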
no code implementations • 11 Mar 2024 • Zhuocheng Gong, Jiahao Liu, Jingang Wang, Xunliang Cai, Dongyan Zhao, Rui Yan
Our findings reveal several connections between the properties of perturbations and LLM performance, providing insights into the failure cases of uniform quantization and suggesting potential solutions to improve the robustness of LLM quantization.
no code implementations • 17 Feb 2024 • Ying Mo, Jiahao Liu, Jian Yang, Qifan Wang, Shun Zhang, Jingang Wang, Zhoujun Li
There has been increasing interest in exploring the capabilities of advanced large language models (LLMs) in the field of information extraction (IE), specifically focusing on tasks related to named entity recognition (NER) and relation extraction (RE).
no code implementations • 5 Feb 2024 • Jiahao Liu, Jun Zeng, Fabio Pierazzi, Lorenzo Cavallaro, Zhenkai Liang
Android malware detection serves as the front line against malicious apps.
no code implementations • 3 Jan 2024 • Cuiwei Liu, Jiahao Liu, Huaijun Qiu, Zhaokui Li, Xiangbin Shi
Previous works map images captured by UAVs and satellites to a shared feature space and employ a classification framework to learn location-dependent features, while neglecting the overall distribution shift between the UAV view and the satellite view.
no code implementations • 30 Oct 2023 • Zhuocheng Gong, Jiahao Liu, Qifan Wang, Jingang Wang, Xunliang Cai, Dongyan Zhao, Rui Yan
The effectiveness of ICL can be attributed to the strong language modeling capabilities of large language models (LLMs), which enable them to learn the mapping between input and labels based on in-context demonstrations.
no code implementations • 24 Oct 2023 • Jiduan Liu, Jiahao Liu, Qifan Wang, Jingang Wang, Xunliang Cai, Dongyan Zhao, Ran Lucien Wang, Rui Yan
In particular, our approach extracts knowledge from LLMs to construct a knowledge store, from which the small-scale model can retrieve relevant information and leverage it for effective inference.
no code implementations • 17 Aug 2023 • Ying Mo, Jian Yang, Jiahao Liu, Qifan Wang, Ruoyu Chen, Jingang Wang, Zhoujun Li
A multi-view contrastive learning framework is introduced to encompass semantic contrasts between source, code-switched, and target sentences, as well as contrasts among token-to-token relations.
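The sentence-level contrast can be grounded with a standard InfoNCE loss over paired views (e.g., a source sentence and its code-switched counterpart); this is the generic form, not the paper's exact objective.

```python
import torch
import torch.nn.functional as F

def info_nce(anchor, positive, temperature=0.1):
    """Generic InfoNCE for the semantic (sentence-level) contrast: each
    anchor embedding is pulled toward its paired view and pushed away
    from the other in-batch sentences."""
    a = F.normalize(anchor, dim=-1)
    p = F.normalize(positive, dim=-1)
    logits = a @ p.t() / temperature                 # [batch, batch] similarities
    labels = torch.arange(a.size(0), device=a.device)  # positives on the diagonal
    return F.cross_entropy(logits, labels)
```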
1 code implementation • 14 Aug 2023 • Sijia Liu, Jiahao Liu, Hansu Gu, Dongsheng Li, Tun Lu, Peng Zhang, Ning Gu
Sequential recommendation recommends items by modeling the sequential behavior of users.
no code implementations • 29 Jul 2023 • Jiahao Liu, Dongsheng Li, Hansu Gu, Tun Lu, Jiongran Wu, Peng Zhang, Li Shang, Ning Gu
We conducted comprehensive experiments to validate the effectiveness of IMCorrect, and the results demonstrate that IMCorrect is superior in completeness, utility, and efficiency, and is applicable in many recommendation unlearning scenarios.
1 code implementation • 11 Jun 2023 • Shicheng Tan, Weng Lam Tam, Yuanchun Wang, Wenwen Gong, Yang Yang, Hongyin Tang, Keqing He, Jiahao Liu, Jingang Wang, Shu Zhao, Peng Zhang, Jie Tang
Currently, the reduction in the parameter scale of large-scale pre-trained language models (PLMs) through knowledge distillation has greatly facilitated their widespread deployment on various devices.
no code implementations • 30 May 2023 • Zhuocheng Gong, Jiahao Liu, Qifan Wang, Yang Yang, Jingang Wang, Wei Wu, Yunsen Xian, Dongyan Zhao, Rui Yan
While transformer-based pre-trained language models (PLMs) have dominated a number of NLP applications, these models are heavy to deploy and expensive to use.
1 code implementation • 26 May 2023 • Jiduan Liu, Jiahao Liu, Qifan Wang, Jingang Wang, Wei Wu, Yunsen Xian, Dongyan Zhao, Kai Chen, Rui Yan
In this paper, we propose a novel approach, RankCSE, for unsupervised sentence representation learning, which incorporates ranking consistency and ranking distillation with contrastive learning into a unified framework.
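One common way to realize the ranking-distillation piece is a listwise KL loss between the teacher's and student's in-batch similarity distributions; the sketch below assumes that formulation, while RankCSE additionally combines it with ranking consistency and the usual contrastive objective.

```python
import torch.nn.functional as F

def ranking_distill(student_sim, teacher_sim, tau_s=0.05, tau_t=0.05):
    """Listwise ranking distillation: match the student's in-batch
    similarity distribution to the teacher's via KL divergence, so the
    student preserves the teacher's ranking of sentences.
    student_sim / teacher_sim: [batch, batch] similarity matrices."""
    t = F.softmax(teacher_sim / tau_t, dim=-1)       # teacher's soft ranking
    log_s = F.log_softmax(student_sim / tau_s, dim=-1)
    return F.kl_div(log_s, t, reduction="batchmean")
```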
1 code implementation • 20 May 2023 • Chen Zhang, Yang Yang, Jiahao Liu, Jingang Wang, Yunsen Xian, Benyou Wang, Dawei Song
However, when the capacity gap between the teacher and the student is large, a curse of capacity gap appears, leading to a deficiency in distilling LMs.
1 code implementation • 10 May 2023 • Jiahao Liu, Jiang Wu, Jinyu Chen, Miao Hu, Yipeng Zhou, Di Wu
In this paper, we propose a new PFL algorithm called \emph{FedDWA (Federated Learning with Dynamic Weight Adjustment)} to address the above problem; it leverages the parameter server (PS) to compute personalized aggregation weights based on the models collected from clients.
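A minimal server-side sketch of the dynamic-weighting idea, assuming inverse-distance softmax weights over the uploaded models (one plausible instantiation, not the paper's exact rule):

```python
import torch

def personalized_weights(models, i, temperature=1.0):
    """Server-side sketch of FedDWA-style dynamic weighting: give client
    i larger aggregation weights for clients whose uploaded models are
    closer to its own (inverse-distance softmax). Illustrates only that
    the parameter server derives per-client weights from the models."""
    flat = [torch.cat([p.flatten() for p in m.parameters()]) for m in models]
    dists = torch.stack([(flat[i] - f).norm() for f in flat])
    return torch.softmax(-dists / temperature, dim=0)  # sums to 1 over clients
```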
no code implementations • 23 Apr 2023 • Jiahao Liu, Dongsheng Li, Hansu Gu, Tun Lu, Peng Zhang, Li Shang, Ning Gu
Specifically, TriSIM4Rec consists of 1) a dynamic ideal low-pass graph filter to dynamically mine co-occurrence information in user-item interactions, which is implemented by incremental singular value decomposition (SVD); 2) a parameter-free attention module to capture sequential information of user interactions effectively and efficiently; and 3) an item transition matrix to store the transition probabilities of item pairs.
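The third component is simple enough to sketch directly: count consecutive-item transitions in user sequences and row-normalize them into probabilities (a simplified, non-incremental version of what the description above states).

```python
import numpy as np

def item_transition_matrix(sequences, n_items):
    """Count directed transitions between consecutive items in user
    sequences and row-normalize into transition probabilities, which
    can then score the next item given the current one."""
    T = np.zeros((n_items, n_items))
    for seq in sequences:
        for a, b in zip(seq, seq[1:]):
            T[a, b] += 1.0
    row = T.sum(axis=1, keepdims=True)
    return np.divide(T, row, out=np.zeros_like(T), where=row > 0)

P = item_transition_matrix([[0, 1, 2], [0, 1, 1]], n_items=3)
print(P[1])  # probabilities of the items following item 1: [0. 0.5 0.5]
```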
no code implementations • 20 Mar 2023 • Ying Mo, Hongyin Tang, Jiahao Liu, Qifan Wang, Zenglin Xu, Jingang Wang, Wei Wu, Zhoujun Li
There are three types of NER tasks: flat, nested, and discontinuous entity recognition.
no code implementations • 13 Feb 2023 • Shushi Chen, Leilei Huang, Jiahao Liu, Chao Liu, Yibo Fan
In this context, this paper proposes an error-surface-based FME algorithm and the corresponding hardware implementation.
no code implementations • 4 Feb 2023 • Jiahao Liu, Dongsheng Li, Hansu Gu, Tun Lu, Peng Zhang, Li Shang, Ning Gu
However, the interaction signal may not be sufficient to accurately characterize user interests, and the low-pass filters may ignore the useful information contained in the high-frequency component of the observed signals, resulting in suboptimal accuracy.
1 code implementation • 15 Oct 2022 • Jiahao Liu, Dongsheng Li, Hansu Gu, Tun Lu, Peng Zhang, Ning Gu
Dynamic interaction graphs have been widely adopted to model the evolution of user-item interactions over time.
1 code implementation • 29 May 2022 • Chen Zhang, Yang Yang, Qifan Wang, Jiahao Liu, Jingang Wang, Wei Wu, Dawei Song
In particular, motivated by the finding that the performance of the student is positively correlated to the scale-performance tradeoff of the teacher assistant, MiniDisc is designed with a $\lambda$-tradeoff to measure the optimality of the teacher assistant without trial distillation to the student.
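In pseudocode-level terms, assistant selection could score each candidate as performance minus a $\lambda$-weighted scale penalty and keep the best; the exact $\lambda$-tradeoff in MiniDisc is defined differently, so treat this as a sketch with hypothetical candidate names.

```python
def select_assistant(candidates, lam=0.5):
    """Pick the teacher assistant with the best lambda-weighted
    scale-performance tradeoff, without trial distillation.
    `candidates` maps name -> (performance, relative_scale)."""
    score = lambda perf, scale: perf - lam * scale
    return max(candidates, key=lambda n: score(*candidates[n]))

best = select_assistant({"ta-small": (0.78, 0.25), "ta-mid": (0.84, 0.50)})
# -> "ta-small": its lower scale outweighs the small performance gap
```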
no code implementations • 18 Apr 2022 • Jiduan Liu, Jiahao Liu, Yang Yang, Jingang Wang, Wei Wu, Dongyan Zhao, Rui Yan
To enhance the performance of dense retrieval models without loss of efficiency, we propose a GNN-encoder model in which query (passage) information is fused into passage (query) representations via graph neural networks that are constructed by queries and their top retrieved passages.
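A single message-passing round of this query-passage fusion might look as follows, with mean aggregation and a linear update as illustrative choices rather than the paper's exact GNN.

```python
import torch
import torch.nn as nn

class QueryPassageFusion(nn.Module):
    """One message-passing round of the GNN-encoder idea: a query node
    aggregates the representations of its top retrieved passages and
    fuses them into its own embedding (the symmetric passage update
    would mirror this)."""
    def __init__(self, dim):
        super().__init__()
        self.update = nn.Linear(2 * dim, dim)

    def forward(self, q, passages):  # q: [dim], passages: [n, dim]
        msg = passages.mean(dim=0)   # aggregate neighboring passage nodes
        return torch.relu(self.update(torch.cat([q, msg])))
```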
1 code implementation • ACL 2021 • Fuli Luo, Wei Wang, Jiahao Liu, Yijia Liu, Bin Bi, Songfang Huang, Fei Huang, Luo Si
Existing work in multilingual pretraining has demonstrated the potential of cross-lingual transferability by training a unified Transformer encoder for multiple languages.
no code implementations • 28 Sep 2020 • Fuli Luo, Wei Wang, Jiahao Liu, Yijia Liu, Bin Bi, Songfang Huang, Fei Huang, Luo Si
Recent studies about learning multilingual representations have achieved significant performance gains across a wide range of downstream cross-lingual tasks.
no code implementations • 8 Nov 2019 • Jiahao Liu, Guixiang Ma, Fei Jiang, Chun-Ta Lu, Philip S. Yu, Ann B. Ragin
Specifically, we use graph convolutions to learn the structural and functional joint embedding, where the graph structure is defined with structural connectivity and node features are from the functional connectivity.
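Concretely, one such layer is an ordinary graph convolution with the structural adjacency and functional node features; the symmetric Kipf-Welling normalization below is a standard assumption.

```python
import torch

def gcn_layer(A_struct, X_func, W):
    """One graph convolution in the setup described above: adjacency
    A_struct from structural connectivity, node features X_func from
    functional connectivity, learnable weights W, producing the joint
    structural-functional embedding."""
    A_hat = A_struct + torch.eye(A_struct.size(0))  # add self-loops
    d = A_hat.sum(dim=1)
    D_inv_sqrt = torch.diag(d.rsqrt())              # D^{-1/2}
    return torch.relu(D_inv_sqrt @ A_hat @ D_inv_sqrt @ X_func @ W)
```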
no code implementations • 18 Dec 2015 • Bing Qin, Duyu Tang, Xinwei Geng, Dandan Ning, Jiahao Liu, Ting Liu
Automatically generating an article with a computer program is a challenging task in artificial intelligence and natural language processing.