Search Results for author: Yuhui Xu

Found 21 papers, 11 papers with code

Separated Contrastive Learning for Matching in Cross-domain Recommendation with Curriculum Scheduling

no code implementations · 22 Feb 2025 · Heng Chang, Liang Gu, Cheng Hu, Zhinan Zhang, Hong Zhu, Yuhui Xu, Yuan Fang, Zhen Chen

Cross-domain recommendation (CDR) aims to improve recommendation performance in a target domain by leveraging information from source domains.

Contrastive Learning · Recommendation Systems +3

Reward Models Identify Consistency, Not Causality

no code implementations · 20 Feb 2025 · Yuhui Xu, Hanze Dong, Lei Wang, Caiming Xiong, Junnan Li

Reward models (RMs) play a crucial role in aligning large language models (LLMs) with human preferences and enhancing reasoning quality.

Reward-Guided Speculative Decoding for Efficient LLM Reasoning

no code implementations · 31 Jan 2025 · Baohao Liao, Yuhui Xu, Hanze Dong, Junnan Li, Christof Monz, Silvio Savarese, Doyen Sahoo, Caiming Xiong

We introduce Reward-Guided Speculative Decoding (RSD), a novel framework aimed at improving the efficiency of inference in large language models (LLMs).
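
The entry names the technique only at a high level, so the following is a heavily hedged Python sketch of what a generic reward-gated speculative decoding loop could look like: a cheap draft model proposes a chunk, a reward model scores it, and a more expensive target model is consulted only when the reward is low. The draft/target/reward interfaces, the acceptance threshold, and the fallback rule are all illustrative assumptions, not the RSD algorithm from the paper.

```python
# Hedged sketch of a generic reward-gated speculative decoding loop.
# All components below are toy stand-ins, NOT the paper's method.

def draft_step(prefix):
    # Hypothetical cheap draft model: propose the next chunk of tokens.
    return ["tok"] * 4

def target_step(prefix):
    # Hypothetical expensive target model: generate one token carefully.
    return ["TOK"]

def reward(prefix, chunk):
    # Hypothetical reward model scoring a candidate continuation in [0, 1].
    return 0.9

def generate(prompt, max_len=32, threshold=0.8):
    out = list(prompt)
    while len(out) < max_len:
        chunk = draft_step(out)
        if reward(out, chunk) >= threshold:
            out += chunk              # keep the cheap draft when the reward is high
        else:
            out += target_step(out)   # otherwise fall back to the target model
    return out

print(generate(["<s>"]))
```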

GaLore+: Boosting Low-Rank Adaptation for LLMs with Cross-Head Projection

no code implementations · 15 Dec 2024 · Xutao Liao, Shaohui Li, Yuhui Xu, Zhi Li, Yu Liu, You He

To further enhance performance, we propose sparsely coded residuals to reduce the errors caused by low-rank approximation on the first- and second-order moments of the optimizers and weight updates.

Arithmetic Reasoning · Text Generation

MathHay: An Automated Benchmark for Long-Context Mathematical Reasoning in LLMs

no code implementations · 7 Oct 2024 · Lei Wang, Shan Dong, Yuhui Xu, Hanze Dong, Yalu Wang, Amrita Saha, Ee-Peng Lim, Caiming Xiong, Doyen Sahoo

Although some recent benchmarks have been developed to evaluate the long-context capabilities of LLMs, there is a lack of benchmarks evaluating the mathematical reasoning abilities of LLMs over long contexts, which is crucial for LLMs' application in real-world scenarios.

Information Retrieval · Mathematical Reasoning

ThinK: Thinner Key Cache by Query-Driven Pruning

no code implementations · 30 Jul 2024 · Yuhui Xu, Zhanming Jie, Hanze Dong, Lei Wang, Xudong Lu, Aojun Zhou, Amrita Saha, Caiming Xiong, Doyen Sahoo

Large Language Models (LLMs) have revolutionized the field of natural language processing, achieving unprecedented performance across a variety of applications.

Quantization

SPP: Sparsity-Preserved Parameter-Efficient Fine-Tuning for Large Language Models

1 code implementation · 25 May 2024 · Xudong Lu, Aojun Zhou, Yuhui Xu, Renrui Zhang, Peng Gao, Hongsheng Li

Large Language Models (LLMs) have become pivotal in advancing the field of artificial intelligence, yet their immense sizes pose significant challenges for both fine-tuning and deployment.

parameter-efficient fine-tuning

TerDiT: Ternary Diffusion Models with Transformers

1 code implementation · 23 May 2024 · Xudong Lu, Aojun Zhou, Ziyi Lin, Qi Liu, Yuhui Xu, Renrui Zhang, Yafei Wen, Shuai Ren, Peng Gao, Junchi Yan, Hongsheng Li

Recent developments in large-scale pre-trained text-to-image diffusion models have significantly improved the generation of high-fidelity images, particularly with the emergence of diffusion models based on transformer architecture (DiTs).

Image Generation · Quantization

Not All Experts are Equal: Efficient Expert Pruning and Skipping for Mixture-of-Experts Large Language Models

1 code implementation · 22 Feb 2024 · Xudong Lu, Qi Liu, Yuhui Xu, Aojun Zhou, Siyuan Huang, Bo Zhang, Junchi Yan, Hongsheng Li

Specifically, we propose, for the first time to the best of our knowledge, post-training approaches for task-agnostic and task-specific expert pruning and skipping of MoE LLMs, tailored to improve deployment efficiency while maintaining model performance across a wide range of tasks.

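As a loose illustration of post-training expert pruning in general, the sketch below ranks the experts of one MoE layer by how often a router selects them on calibration data and drops the least-used ones. The usage-frequency criterion, shapes, and synthetic routing scores are assumptions for illustration, not the scoring rule from this paper.

```python
import numpy as np

# Generic post-training expert pruning sketch: keep the most-used experts.
rng = np.random.default_rng(0)
num_experts, hidden = 8, 16
router_logits = rng.normal(size=(1000, num_experts))  # stand-in calibration routing scores
experts = [rng.normal(size=(hidden, hidden)) for _ in range(num_experts)]

top1 = router_logits.argmax(axis=1)                    # top-1 routing decisions
usage = np.bincount(top1, minlength=num_experts)       # how often each expert is chosen

keep = sorted(np.argsort(usage)[-6:].tolist())         # keep the 6 most-used experts
pruned_experts = [experts[i] for i in keep]
print("kept experts:", keep, "| usage counts:", usage.tolist())
```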

Batch Normalization with Enhanced Linear Transformation

1 code implementation · 28 Nov 2020 · Yuhui Xu, Lingxi Xie, Cihang Xie, Jieru Mei, Siyuan Qiao, Wei Shen, Hongkai Xiong, Alan Yuille

Batch normalization (BN) is a fundamental unit in modern deep networks; its linear transformation module is designed to improve BN's flexibility in fitting complex data distributions.
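
For context, the "linear transformation module" is BN's per-channel affine step, y = gamma * x_hat + beta. The sketch below implements vanilla batch normalization with that affine, not the enhanced transformation proposed in the paper.

```python
import numpy as np

# Vanilla 2D batch normalization: normalize per channel, then apply the
# per-channel linear (affine) transformation y = gamma * x_hat + beta.
def batch_norm_2d(x, gamma, beta, eps=1e-5):
    # x: (N, C, H, W); statistics are taken over N, H, W for each channel.
    mean = x.mean(axis=(0, 2, 3), keepdims=True)
    var = x.var(axis=(0, 2, 3), keepdims=True)
    x_hat = (x - mean) / np.sqrt(var + eps)
    return gamma.reshape(1, -1, 1, 1) * x_hat + beta.reshape(1, -1, 1, 1)

x = np.random.randn(4, 3, 8, 8)
y = batch_norm_2d(x, gamma=np.ones(3), beta=np.zeros(3))
print(y.mean(), y.std())  # roughly 0 and 1 with identity affine parameters
```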

TRP: Trained Rank Pruning for Efficient Deep Neural Networks

1 code implementation · 30 Apr 2020 · Yuhui Xu, Yuxi Li, Shuai Zhang, Wei Wen, Botao Wang, Yingyong Qi, Yiran Chen, Weiyao Lin, Hongkai Xiong

The TRP-trained network inherently has a low-rank structure and can be approximated with negligible performance loss, eliminating the fine-tuning step after low-rank decomposition.

Fitting the Search Space of Weight-sharing NAS with Graph Convolutional Networks

no code implementations · 17 Apr 2020 · Xin Chen, Lingxi Xie, Jun Wu, Longhui Wei, Yuhui Xu, Qi Tian

We alleviate this issue by training a graph convolutional network to fit the performance of sampled sub-networks so that the impact of random errors becomes minimal.

Neural Architecture Search
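
As a minimal illustration of the general idea in this entry, a small graph convolutional network can be fit to the measured performance of sampled sub-networks. The PyTorch sketch below uses a toy graph encoding, predictor size, and synthetic accuracies; these are assumptions, not the paper's setup.

```python
import torch
import torch.nn as nn

# Tiny GCN regressor: architecture graph (adjacency + node/op features) -> accuracy.
class TinyGCN(nn.Module):
    def __init__(self, n_feat=5, hidden=16):
        super().__init__()
        self.w1 = nn.Linear(n_feat, hidden)
        self.w2 = nn.Linear(hidden, hidden)
        self.out = nn.Linear(hidden, 1)

    def forward(self, A, X):
        A_hat = A + torch.eye(A.size(0))            # add self-loops
        D_inv = torch.diag(1.0 / A_hat.sum(dim=1))  # row-normalization
        H = torch.tanh(D_inv @ A_hat @ self.w1(X))  # graph convolution layer 1
        H = torch.tanh(D_inv @ A_hat @ self.w2(H))  # graph convolution layer 2
        return self.out(H.mean(dim=0))              # pool nodes -> predicted accuracy

# Synthetic "sampled sub-networks": random adjacency + op features, fake accuracies.
torch.manual_seed(0)
data = [(torch.randint(0, 2, (7, 7)).float(), torch.randn(7, 5)) for _ in range(64)]
acc = [0.5 + 0.1 * X.mean() for _, X in data]

model = TinyGCN()
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
for epoch in range(100):
    loss = sum((model(A, X).squeeze() - y) ** 2 for (A, X), y in zip(data, acc)) / len(data)
    opt.zero_grad(); loss.backward(); opt.step()
print("predictor MSE:", loss.item())
```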

Latency-Aware Differentiable Neural Architecture Search

1 code implementation · 17 Jan 2020 · Yuhui Xu, Lingxi Xie, Xiaopeng Zhang, Xin Chen, Bowen Shi, Qi Tian, Hongkai Xiong

However, these methods have difficulty optimizing the network, so the searched architectures are often unfriendly to hardware.

Neural Architecture Search
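
A common way to make differentiable search latency-aware is to add a softmax-weighted latency penalty to the loss; the sketch below shows that generic construction. The latency lookup table, weighting, and coefficient are illustrative assumptions, not necessarily the formulation used in this paper.

```python
import torch
import torch.nn.functional as F

# Generic differentiable latency penalty for DARTS-style search:
# expected latency = softmax(alpha) weighted sum of per-op latencies.
torch.manual_seed(0)
n_edges, n_ops = 14, 8
alpha = torch.randn(n_edges, n_ops, requires_grad=True)    # architecture parameters
op_latency_ms = torch.rand(n_ops) * 2.0                    # hypothetical per-op latency table

def expected_latency(alpha):
    return (F.softmax(alpha, dim=-1) * op_latency_ms).sum()  # differentiable in alpha

task_loss = torch.tensor(1.0)   # stand-in for the validation loss
lam = 0.1
loss = task_loss + lam * expected_latency(alpha)
loss.backward()
print("expected latency (ms):", expected_latency(alpha).item())
print("mean |grad| on alpha:", alpha.grad.abs().mean().item())
```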

Trained Rank Pruning for Efficient Deep Neural Networks

1 code implementation · 9 Oct 2019 · Yuhui Xu, Yuxi Li, Shuai Zhang, Wei Wen, Botao Wang, Wenrui Dai, Yingyong Qi, Yiran Chen, Weiyao Lin, Hongkai Xiong

To accelerate DNN inference, low-rank approximation has been widely adopted because of its solid theoretical rationale and efficient implementations.
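
As a minimal sketch of this acceleration trick, the snippet below replaces one large matrix multiply with two smaller ones via truncated SVD. The synthetic low-rank weight, the rank, and the shapes are illustrative only.

```python
import numpy as np

# Low-rank approximation for inference: W (d x d) ~= A (d x r) @ B (r x d),
# so W @ x becomes two cheaper multiplies A @ (B @ x).
rng = np.random.default_rng(0)
W = rng.normal(size=(512, 64)) @ rng.normal(size=(64, 512)) / 8.0  # synthetic low-rank weight
x = rng.normal(size=512)

U, S, Vt = np.linalg.svd(W, full_matrices=False)
r = 64                           # target rank
A = U[:, :r] * S[:r]             # (512, r)
B = Vt[:r, :]                    # (r, 512)

y_full = W @ x                   # 512*512 multiply-adds
y_low = A @ (B @ x)              # 2 * 512*r multiply-adds (4x fewer here)
print("relative error:", np.linalg.norm(y_full - y_low) / np.linalg.norm(y_full))
```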

PC-DARTS: Partial Channel Connections for Memory-Efficient Architecture Search

8 code implementations · ICLR 2020 · Yuhui Xu, Lingxi Xie, Xiaopeng Zhang, Xin Chen, Guo-Jun Qi, Qi Tian, Hongkai Xiong

Differentiable architecture search (DARTS) provided a fast solution in finding effective network architectures, but suffered from large memory and computing overheads in jointly training a super-network and searching for an optimal architecture.

Neural Architecture Search
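
The title points to partial channel connections; the sketch below shows one simplified reading of that idea: apply the softmax-weighted mixture of candidate operations to only 1/K of the channels and let the remaining channels bypass it, which shrinks the memory of the mixed operation. The choice of K, the stand-in candidate ops, and the omission of channel shuffling and edge normalization are simplifications, not the paper's exact design.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PartialChannelMixedOp(nn.Module):
    def __init__(self, channels, k=4):
        super().__init__()
        self.k = k
        c = channels // k
        self.ops = nn.ModuleList([
            nn.Conv2d(c, c, 3, padding=1),  # stand-ins for the candidate operations
            nn.Conv2d(c, c, 5, padding=2),
            nn.Identity(),
        ])
        self.alpha = nn.Parameter(torch.zeros(len(self.ops)))  # architecture weights

    def forward(self, x):
        c = x.size(1) // self.k
        x_sel, x_rest = x[:, :c], x[:, c:]               # operate on a 1/k slice of channels
        w = F.softmax(self.alpha, dim=0)
        mixed = sum(wi * op(x_sel) for wi, op in zip(w, self.ops))
        return torch.cat([mixed, x_rest], dim=1)         # remaining channels bypass the mixture

x = torch.randn(2, 16, 8, 8)
print(PartialChannelMixedOp(16)(x).shape)                # torch.Size([2, 16, 8, 8])
```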

Trained Rank Pruning for Efficient Deep Neural Networks

1 code implementation · 6 Dec 2018 · Yuhui Xu, Yuxi Li, Shuai Zhang, Wei Wen, Botao Wang, Yingyong Qi, Yiran Chen, Weiyao Lin, Hongkai Xiong

We propose Trained Rank Pruning (TRP), which alternates between low-rank approximation and training.

Quantization
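
A rough sketch of the alternation this abstract describes follows, under simplifying assumptions (a toy objective, a fixed target rank, and a periodic SVD-truncation schedule) that are not the TRP recipe: periodically replace a layer's weight with its truncated-SVD approximation and keep training, so the network learns to tolerate the low-rank constraint.

```python
import torch
import torch.nn as nn

# Alternate between gradient training and a low-rank (truncated SVD) projection.
torch.manual_seed(0)
layer = nn.Linear(64, 64, bias=False)
opt = torch.optim.SGD(layer.parameters(), lr=0.1)
rank = 8

for step in range(200):
    x = torch.randn(32, 64)
    loss = ((layer(x) - x) ** 2).mean()        # toy objective: learn an identity-like map
    opt.zero_grad(); loss.backward(); opt.step()

    if step % 20 == 0:                         # periodic low-rank approximation step
        with torch.no_grad():
            U, S, Vh = torch.linalg.svd(layer.weight, full_matrices=False)
            layer.weight.copy_(U[:, :rank] @ torch.diag(S[:rank]) @ Vh[:rank])

print("final loss:", loss.item())
```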

DNQ: Dynamic Network Quantization

no code implementations · 6 Dec 2018 · Yuhui Xu, Shuai Zhang, Yingyong Qi, Jiaxian Guo, Weiyao Lin, Hongkai Xiong

Network quantization is an effective method for the deployment of neural networks on memory and energy constrained mobile devices.

Quantization

Deep Neural Network Compression with Single and Multiple Level Quantization

1 code implementation · 6 Mar 2018 · Yuhui Xu, Yongzhuang Wang, Aojun Zhou, Weiyao Lin, Hongkai Xiong

In this paper, we propose two novel network quantization approaches: single-level network quantization (SLQ) for high-bit quantization and multi-level network quantization (MLQ) for extremely low-bit (ternary) quantization. We are the first to consider network quantization from both the width and the depth levels.

Neural Network Compression · Quantization
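
To illustrate what "extremely low-bit (ternary) quantization" means in general, here is a generic threshold-based ternary quantizer: weights below a magnitude threshold become 0, the rest become plus or minus a single learned-free scale. The threshold and scale heuristics are common choices for illustration, not the SLQ/MLQ procedure from the paper.

```python
import numpy as np

# Generic ternary weight quantizer: values in {-alpha, 0, +alpha}.
def ternarize(w, t=0.7):
    delta = t * np.abs(w).mean()                            # magnitude threshold
    mask = np.abs(w) > delta
    alpha = np.abs(w[mask]).mean() if mask.any() else 0.0   # single positive scale
    return alpha * np.sign(w) * mask

rng = np.random.default_rng(0)
w = rng.normal(size=(256, 256))
wq = ternarize(w)
print("unique levels:", np.unique(wq).size,                 # 3 levels: -alpha, 0, +alpha
      "| quantization MSE:", float(np.mean((w - wq) ** 2)))
```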
