Search Results for author: Jiahao Liu

Found 39 papers, 10 papers with code

Mitigating Popularity Bias in Collaborative Filtering through Fair Sampling

no code implementations19 Feb 2025 Jiahao Liu, Dongsheng Li, Hansu Gu, Peng Zhang, Tun Lu, Li Shang, Ning Gu

Recommender systems often suffer from popularity bias, where frequently interacted items are overrepresented in recommendations.

Collaborative Filtering Fairness +1

Enhancing Cross-Domain Recommendations with Memory-Optimized LLM-Based User Agents

no code implementations19 Feb 2025 Jiahao Liu, Shengkang Gu, Dongsheng Li, Guangping Zhang, Mingzhe Han, Hansu Gu, Peng Zhang, Tun Lu, Li Shang, Ning Gu

Large Language Model (LLM)-based user agents have emerged as a powerful tool for improving recommender systems by simulating user interactions.

Language Modeling Language Modelling +2

Enhancing LLM-Based Recommendations Through Personalized Reasoning

no code implementations19 Feb 2025 Jiahao Liu, Xueshuo Yan, Dongsheng Li, Guangping Zhang, Hansu Gu, Peng Zhang, Tun Lu, Li Shang, Ning Gu

Current recommendation systems powered by large language models (LLMs) often underutilize their reasoning capabilities due to a lack of explicit logical structuring.

Recommendation Systems

Predictor-Corrector Enhanced Transformers with Exponential Moving Average Coefficient Learning

no code implementations5 Nov 2024 Bei Li, Tong Zheng, Rui Wang, Jiahao Liu, Qingyan Guo, Junliang Guo, Xu Tan, Tong Xiao, Jingbo Zhu, Jingang Wang, Xunliang Cai

First, we introduce a predictor-corrector learning framework to minimize truncation errors, which consists of a high-order predictor and a multistep corrector.

Abstractive Text Summarization Language Modeling +4
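
The predictor-corrector framework above treats residual layers as steps of an ODE solver. A minimal sketch under that interpretation, using the textbook Heun scheme plus a generic EMA rule of the kind the title refers to (the paper's actual high-order predictor, multistep corrector, and learned coefficients are not reproduced here):

```python
def heun_block(x, f):
    """One predictor-corrector update in the ODE view of residual layers.

    Predictor: explicit Euler step. Corrector: trapezoidal average of the
    two slopes. A textbook scheme, not the paper's exact formulation.
    """
    pred = x + f(x)                       # predictor (Euler step)
    return x + 0.5 * (f(x) + f(pred))     # corrector (trapezoid rule)

def ema(prev, new, momentum=0.9):
    """Generic exponential-moving-average update for smoothing learned
    coefficients; the momentum value here is illustrative."""
    return momentum * prev + (1.0 - momentum) * new
```

For example, with `f = lambda x: -0.5 * x`, one step from `x = 1.0` gives `0.625`, closer to the true decayed value (about `0.607`) than the Euler predictor's `0.5`.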

FIRP: Faster LLM inference via future intermediate representation prediction

no code implementations27 Oct 2024 Pengfei Wu, Jiahao Liu, Zhuocheng Gong, Qifan Wang, Jinpeng Li, Jingang Wang, Xunliang Cai, Dongyan Zhao

Recent advancements in Large Language Models (LLMs) have shown remarkable performance across a wide range of tasks.

Prediction

Filtering Discomforting Recommendations with Large Language Models

no code implementations7 Oct 2024 Jiahao Liu, YiYang Shao, Peng Zhang, Dongsheng Li, Hansu Gu, Chao Chen, Longzhi Du, Tun Lu, Ning Gu

Personalized algorithms can inadvertently expose users to discomforting recommendations, potentially triggering negative consequences.

Language Modeling Language Modelling +1

M$^2$PT: Multimodal Prompt Tuning for Zero-shot Instruction Learning

2 code implementations24 Sep 2024 Taowen Wang, Yiyang Liu, James Chenhao Liang, Junhan Zhao, Yiming Cui, Yuning Mao, Shaoliang Nie, Jiahao Liu, Fuli Feng, Zenglin Xu, Cheng Han, Lifu Huang, Qifan Wang, Dongfang Liu

Instruction tuning has emerged as an effective strategy for achieving zero-shot generalization by finetuning pretrained models on diverse multimodal tasks.

Zero-shot Generalization

ReMamba: Equip Mamba with Effective Long-Sequence Modeling

1 code implementation28 Aug 2024 Danlong Yuan, Jiahao Liu, Bei Li, Huishuai Zhang, Jingang Wang, Xunliang Cai, Dongyan Zhao

While the Mamba architecture demonstrates superior inference efficiency and competitive performance on short-context natural language processing (NLP) tasks, empirical evidence suggests its capacity to comprehend long contexts is limited compared to transformer-based models.

Mamba

UAV-Enabled Integrated Sensing and Communication in Maritime Emergency Networks

no code implementations26 Aug 2024 Bohan Li, Jiahao Liu, Yifeng Xiong, Junsheng Mu, Pei Xiao, Sheng Chen

Once the UAV passes the initial operating position, the UAV's trajectory and resource allocation are optimized during the mission period to maximize the end-to-end communication rate under the constraint of minimum sensing QoS.

Integrated sensing and communication ISAC +1

A Probabilistic Approach for Queue Length Estimation Using License Plate Recognition Data: Considering Overtaking in Multi-lane Scenarios

no code implementations24 Jul 2024 Lyuzhou Luo, Hao Wu, Jiahao Liu, Keshuang Tang, Chaopeng Tan

Eventually, to leverage the LPR data sufficiently, we extend our approach to multi-lane scenarios, where the problem can be converted to a weighted general exact coverage problem and solved by a backtracking algorithm with heuristics.

License Plate Recognition
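
The reduction described above bottoms out in exact cover, which a plain backtracking search can solve. A minimal unweighted sketch (the paper adds weights and LPR-specific heuristics; `exact_cover` and its interface are illustrative):

```python
def exact_cover(universe, subsets):
    """Backtracking search for disjoint subsets covering the universe
    exactly; returns chosen subset indices, or None if no cover exists.
    The unweighted core of the problem the entry above reduces to."""
    universe = set(universe)

    def solve(remaining, chosen, start):
        if not remaining:
            return chosen
        for i in range(start, len(subsets)):
            s = set(subsets[i])
            if s and s <= remaining:          # disjoint and still needed
                res = solve(remaining - s, chosen + [i], i + 1)
                if res is not None:
                    return res
        return None                           # dead end: backtrack

    return solve(universe, [], 0)
```

For instance, covering `{1, 2, 3, 4}` with candidate subsets `[[1, 2], [3], [2, 3], [4]]` selects indices `[0, 1, 3]`.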

Graph-Structured Speculative Decoding

no code implementations23 Jul 2024 Zhuocheng Gong, Jiahao Liu, Ziyue Wang, Pengfei Wu, Jingang Wang, Xunliang Cai, Dongyan Zhao, Rui Yan

We apply GSD across a range of LLMs, including a 70-billion parameter LLaMA-2 model, and observe a remarkable speedup of 1.73$\times$ to 1.96$\times$, significantly surpassing standard speculative decoding.

Language Modelling Small Language Model
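
The speedups quoted above rest on the standard speculative-decoding contract: a cheap draft may be wrong because the target model verifies it. A sketch of the greedy-verification core for a single linear draft (GSD's contribution is organizing multiple drafts into a token graph, which this sketch does not attempt):

```python
def accept_prefix(draft_tokens, target_tokens):
    """Keep the longest prefix of the draft that the target model's own
    greedy decoding agrees with; everything after the first mismatch is
    discarded and regenerated by the target model."""
    n = 0
    for d, t in zip(draft_tokens, target_tokens):
        if d != t:
            break
        n += 1
    return draft_tokens[:n]
```

Accepting several draft tokens per target-model forward pass is what yields the speedup; a rejected token costs no more than ordinary decoding.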

Speculative Decoding via Early-exiting for Faster LLM Inference with Thompson Sampling Control Mechanism

no code implementations6 Jun 2024 Jiahao Liu, Qifan Wang, Jingang Wang, Xunliang Cai

The recent advancements in large language models (LLMs) have been extraordinary, yet the escalating inference costs associated with them present challenges in real-world applications.

Thompson Sampling

Parallel Decoding via Hidden Transfer for Lossless Large Language Model Acceleration

no code implementations18 Apr 2024 Pengfei Wu, Jiahao Liu, Zhuocheng Gong, Qifan Wang, Jinpeng Li, Jingang Wang, Xunliang Cai, Dongyan Zhao

In this paper, we propose a novel parallel decoding approach, namely \textit{hidden transfer}, which decodes multiple successive tokens simultaneously in a single forward pass.

Language Modeling Language Modelling +1

What Makes Quantization for Large Language Models Hard? An Empirical Study from the Lens of Perturbation

no code implementations11 Mar 2024 Zhuocheng Gong, Jiahao Liu, Jingang Wang, Xunliang Cai, Dongyan Zhao, Rui Yan

Our findings reveal several connections between the properties of perturbations and LLM performance, providing insights into the failure cases of uniform quantization and suggesting potential solutions to improve the robustness of LLM quantization.

Computational Efficiency Quantization

C-ICL: Contrastive In-context Learning for Information Extraction

no code implementations17 Feb 2024 Ying Mo, Jiahao Liu, Jian Yang, Qifan Wang, Shun Zhang, Jingang Wang, Zhoujun Li

There has been increasing interest in exploring the capabilities of advanced large language models (LLMs) in the field of information extraction (IE), specifically focusing on tasks related to named entity recognition (NER) and relation extraction (RE).

In-Context Learning Miscellaneous +4

View Distribution Alignment with Progressive Adversarial Learning for UAV Visual Geo-Localization

no code implementations3 Jan 2024 Cuiwei Liu, Jiahao Liu, Huaijun Qiu, Zhaokui Li, Xiangbin Shi

Previous works map images captured by UAVs and satellites to a shared feature space and employ a classification framework to learn location-dependent features while neglecting the overall distribution shift between the UAV view and the satellite view.

geo-localization

Improving Input-label Mapping with Demonstration Replay for In-context Learning

no code implementations30 Oct 2023 Zhuocheng Gong, Jiahao Liu, Qifan Wang, Jingang Wang, Xunliang Cai, Dongyan Zhao, Rui Yan

The effectiveness of ICL can be attributed to the strong language modeling capabilities of large language models (LLMs), which enable them to learn the mapping between input and labels based on in-context demonstrations.

In-Context Learning Language Modeling +1

Retrieval-based Knowledge Transfer: An Effective Approach for Extreme Large Language Model Compression

no code implementations24 Oct 2023 Jiduan Liu, Jiahao Liu, Qifan Wang, Jingang Wang, Xunliang Cai, Dongyan Zhao, Ran Lucien Wang, Rui Yan

In particular, our approach extracts knowledge from LLMs to construct a knowledge store, from which the small-scale model can retrieve relevant information and leverage it for effective inference.

Language Modeling Language Modelling +4
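
The knowledge-store idea above can be pictured as a nearest-neighbour lookup: the small model embeds its input and retrieves entries distilled from the LLM. A minimal L2-distance sketch (store construction and how retrieved entries are fused into inference are the paper's contribution and are not shown; names are illustrative):

```python
import numpy as np

def retrieve(store_keys, store_values, query, k=2):
    """Return the k stored values whose embedding keys are nearest (in
    Euclidean distance) to the query embedding."""
    dists = np.linalg.norm(np.asarray(store_keys) - np.asarray(query), axis=1)
    nearest = np.argsort(dists)[:k]       # indices of the k closest keys
    return [store_values[i] for i in nearest]
```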

mCL-NER: Cross-Lingual Named Entity Recognition via Multi-view Contrastive Learning

no code implementations17 Aug 2023 Ying Mo, Jian Yang, Jiahao Liu, Qifan Wang, Ruoyu Chen, Jingang Wang, Zhoujun Li

A multi-view contrastive learning framework is introduced to encompass semantic contrasts between source, code-switched, and target sentences, as well as contrasts among token-to-token relations.

Contrastive Learning named-entity-recognition +2

AutoSeqRec: Autoencoder for Efficient Sequential Recommendation

1 code implementation14 Aug 2023 Sijia Liu, Jiahao Liu, Hansu Gu, Dongsheng Li, Tun Lu, Peng Zhang, Ning Gu

Sequential recommendation demonstrates the capability to recommend items by modeling the sequential behavior of users.

Collaborative Filtering Computational Efficiency +1

Recommendation Unlearning via Matrix Correction

no code implementations29 Jul 2023 Jiahao Liu, Dongsheng Li, Hansu Gu, Tun Lu, Jiongran Wu, Peng Zhang, Li Shang, Ning Gu

We conducted comprehensive experiments to validate the effectiveness of IMCorrect and the results demonstrate that IMCorrect is superior in completeness, utility, and efficiency, and is applicable in many recommendation unlearning scenarios.

Collaborative Filtering Recommendation Systems

GKD: A General Knowledge Distillation Framework for Large-scale Pre-trained Language Model

1 code implementation11 Jun 2023 Shicheng Tan, Weng Lam Tam, Yuanchun Wang, Wenwen Gong, Yang Yang, Hongyin Tang, Keqing He, Jiahao Liu, Jingang Wang, Shu Zhao, Peng Zhang, Jie Tang

Currently, the reduction in the parameter scale of large-scale pre-trained language models (PLMs) through knowledge distillation has greatly facilitated their widespread deployment on various devices.

General Knowledge Knowledge Distillation +2

PreQuant: A Task-agnostic Quantization Approach for Pre-trained Language Models

no code implementations30 May 2023 Zhuocheng Gong, Jiahao Liu, Qifan Wang, Yang Yang, Jingang Wang, Wei Wu, Yunsen Xian, Dongyan Zhao, Rui Yan

While transformer-based pre-trained language models (PLMs) have dominated a number of NLP applications, these models are heavy to deploy and expensive to use.

parameter-efficient fine-tuning Quantization

RankCSE: Unsupervised Sentence Representations Learning via Learning to Rank

1 code implementation26 May 2023 Jiduan Liu, Jiahao Liu, Qifan Wang, Jingang Wang, Wei Wu, Yunsen Xian, Dongyan Zhao, Kai Chen, Rui Yan

In this paper, we propose a novel approach, RankCSE, for unsupervised sentence representation learning, which incorporates ranking consistency and ranking distillation with contrastive learning into a unified framework.

Contrastive Learning Learning-To-Rank +4

Lifting the Curse of Capacity Gap in Distilling Language Models

1 code implementation20 May 2023 Chen Zhang, Yang Yang, Jiahao Liu, Jingang Wang, Yunsen Xian, Benyou Wang, Dawei Song

However, when the capacity gap between the teacher and the student is large, a curse of capacity gap appears, invoking a deficiency in distilling LMs.

Knowledge Distillation

FedDWA: Personalized Federated Learning with Dynamic Weight Adjustment

1 code implementation10 May 2023 Jiahao Liu, Jiang Wu, Jinyu Chen, Miao Hu, Yipeng Zhou, Di wu

In this paper, we propose a new PFL algorithm called \emph{FedDWA (Federated Learning with Dynamic Weight Adjustment)} to address the above problem, which leverages the parameter server (PS) to compute personalized aggregation weights based on collected models from clients.

Personalized Federated Learning
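
Once the parameter server has per-client weights, the step described above reduces to a row-normalized weighted average over client models. A sketch of that aggregation (the dynamic weight-computation rule is FedDWA's actual contribution and is not reproduced; function and argument names are illustrative):

```python
import numpy as np

def personalized_aggregate(client_models, weights):
    """Each client i receives sum_j W[i, j] * model_j, with W's rows
    normalized to 1: one personalized aggregate per client rather than
    a single global model."""
    models = np.asarray(client_models, dtype=float)   # (n_clients, dim)
    W = np.asarray(weights, dtype=float)
    W = W / W.sum(axis=1, keepdims=True)              # normalize per client
    return W @ models
```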

Triple Structural Information Modelling for Accurate, Explainable and Interactive Recommendation

no code implementations23 Apr 2023 Jiahao Liu, Dongsheng Li, Hansu Gu, Tun Lu, Peng Zhang, Li Shang, Ning Gu

Specifically, TriSIM4Rec consists of 1) a dynamic ideal low-pass graph filter to dynamically mine co-occurrence information in user-item interactions, which is implemented by incremental singular value decomposition (SVD); 2) a parameter-free attention module to capture sequential information of user interactions effectively and efficiently; and 3) an item transition matrix to store the transition probabilities of item pairs.

Collaborative Filtering
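
Of the three components enumerated above, the item transition matrix is the simplest to picture: row-normalized counts of consecutive item pairs in user histories. A sketch of that component alone (illustrative names; the incremental-SVD filter and the parameter-free attention module are not shown):

```python
import numpy as np

def item_transition_matrix(sequences, num_items):
    """Build a matrix T where T[a, b] is the empirical probability that
    item b immediately follows item a across all user sequences."""
    counts = np.zeros((num_items, num_items))
    for seq in sequences:
        for a, b in zip(seq, seq[1:]):    # consecutive item pairs
            counts[a, b] += 1
    row_sums = counts.sum(axis=1, keepdims=True)
    return np.divide(counts, row_sums,    # rows with no data stay zero
                     out=np.zeros_like(counts), where=row_sums > 0)
```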

An Error-Surface-Based Fractional Motion Estimation Algorithm and Hardware Implementation for VVC

no code implementations13 Feb 2023 Shushi Chen, Leilei Huang, Jiahao Liu, Chao Liu, Yibo Fan

In this context, this paper proposes an error-surface-based FME algorithm and the corresponding hardware implementation.

4k 8k +1

Personalized Graph Signal Processing for Collaborative Filtering

no code implementations4 Feb 2023 Jiahao Liu, Dongsheng Li, Hansu Gu, Tun Lu, Peng Zhang, Li Shang, Ning Gu

However, the interaction signal may not be sufficient to accurately characterize user interests and the low-pass filters may ignore the useful information contained in the high-frequency component of the observed signals, resulting in suboptimal accuracy.

Collaborative Filtering
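
The "low-pass filter" in the entry above has a concrete linear-algebra reading: project the interaction matrix onto its top-k singular directions and discard the rest. A minimal sketch of that ideal low-pass filter (PGSP's personalized filtering and use of the high-frequency component are not reproduced):

```python
import numpy as np

def ideal_low_pass(R, k):
    """Keep only the k lowest-'frequency' (largest-singular-value)
    components of an interaction matrix R by projecting onto the
    top-k right singular vectors."""
    U, s, Vt = np.linalg.svd(R, full_matrices=False)
    Vk = Vt[:k]                # top-k right singular vectors
    return R @ Vk.T @ Vk       # projection of R onto their span
```

A rank-1 matrix passes through a k=1 filter unchanged, which is a quick sanity check on the projection.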

Parameter-free Dynamic Graph Embedding for Link Prediction

1 code implementation15 Oct 2022 Jiahao Liu, Dongsheng Li, Hansu Gu, Tun Lu, Peng Zhang, Ning Gu

Dynamic interaction graphs have been widely adopted to model the evolution of user-item interactions over time.

Attribute Dynamic graph embedding +2

MiniDisc: Minimal Distillation Schedule for Language Model Compression

1 code implementation29 May 2022 Chen Zhang, Yang Yang, Qifan Wang, Jiahao Liu, Jingang Wang, Wei Wu, Dawei Song

In particular, motivated by the finding that the performance of the student is positively correlated to the scale-performance tradeoff of the teacher assistant, MiniDisc is designed with a $\lambda$-tradeoff to measure the optimality of the teacher assistant without trial distillation to the student.

Knowledge Distillation Language Modeling +3

GNN-encoder: Learning a Dual-encoder Architecture via Graph Neural Networks for Dense Passage Retrieval

no code implementations18 Apr 2022 Jiduan Liu, Jiahao Liu, Yang Yang, Jingang Wang, Wei Wu, Dongyan Zhao, Rui Yan

To enhance the performance of dense retrieval models without loss of efficiency, we propose a GNN-encoder model in which query (passage) information is fused into passage (query) representations via graph neural networks that are constructed by queries and their top retrieved passages.

Natural Questions Passage Retrieval +2

VECO: Variable and Flexible Cross-lingual Pre-training for Language Understanding and Generation

1 code implementation ACL 2021 Fuli Luo, Wei Wang, Jiahao Liu, Yijia Liu, Bin Bi, Songfang Huang, Fei Huang, Luo Si

Existing work in multilingual pretraining has demonstrated the potential of cross-lingual transferability by training a unified Transformer encoder for multiple languages.

Language Modelling Question Answering +5

VECO: Variable Encoder-decoder Pre-training for Cross-lingual Understanding and Generation

no code implementations28 Sep 2020 Fuli Luo, Wei Wang, Jiahao Liu, Yijia Liu, Bin Bi, Songfang Huang, Fei Huang, Luo Si

Recent studies about learning multilingual representations have achieved significant performance gains across a wide range of downstream cross-lingual tasks.

Decoder Language Modeling +8

Community-preserving Graph Convolutions for Structural and Functional Joint Embedding of Brain Networks

no code implementations8 Nov 2019 Jiahao Liu, Guixiang Ma, Fei Jiang, Chun-Ta Lu, Philip S. Yu, Ann B. Ragin

Specifically, we use graph convolutions to learn the structural and functional joint embedding, where the graph structure is defined with structural connectivity and node features are from the functional connectivity.

Diagnostic Functional Connectivity +1

A Planning based Framework for Essay Generation

no code implementations18 Dec 2015 Bing Qin, Duyu Tang, Xinwei Geng, Dandan Ning, Jiahao Liu, Ting Liu

Generating an article automatically with a computer program is a challenging task in artificial intelligence and natural language processing.

Sentence
