Search Results for author: Changsheng Li

Found 34 papers, 11 papers with code

ITPNet: Towards Instantaneous Trajectory Prediction for Autonomous Driving

no code implementations10 Dec 2024 Rongqing Li, Changsheng Li, Yuhang Li, Hanjie Li, Yi Chen, Dongchun Ren, Ye Yuan, Guoren Wang

Trajectory prediction of agents is crucial for the safety of autonomous vehicles, yet previous approaches usually rely on sufficiently long observed trajectories to predict agents' future trajectories.

Autonomous Driving Trajectory Prediction

DREAM: Domain-agnostic Reverse Engineering Attributes of Black-box Model

no code implementations8 Dec 2024 Rongqing Li, Jiaqi Yu, Changsheng Li, Wenhan Luo, Ye Yuan, Guoren Wang

However, the training dataset of the target black-box model is usually inaccessible in practice.

Attribute

Retrieval-Augmented Personalization for Multimodal Large Language Models

1 code implementation17 Oct 2024 Haoran Hao, Jiaming Han, Changsheng Li, Yu-Feng Li, Xiangyu Yue

To further improve generation quality and alignment with user-specific information, we design a pipeline for data collection and create a specialized dataset for personalized training of MLLMs.

Image Captioning Question Answering +1

Fira: Can We Achieve Full-rank Training of LLMs Under Low-rank Constraint?

1 code implementation2 Oct 2024 Xi Chen, Kaituo Feng, Changsheng Li, Xunhao Lai, Xiangyu Yue, Ye Yuan, Guoren Wang

In this way, we can preserve the low-rank constraint in the optimizer while achieving full-rank training for better performance.

Keypoint-based Progressive Chain-of-Thought Distillation for LLMs

no code implementations25 May 2024 Kaituo Feng, Changsheng Li, Xiaolu Zhang, Jun Zhou, Ye Yuan, Guoren Wang

Chain-of-thought distillation is a powerful technique for transferring reasoning abilities from large language models (LLMs) to smaller student models.

On the Road to Portability: Compressing End-to-End Motion Planner for Autonomous Driving

1 code implementation CVPR 2024 Kaituo Feng, Changsheng Li, Dongchun Ren, Ye Yuan, Guoren Wang

However, oversized neural networks render them impractical for deployment on resource-constrained systems, unavoidably demanding more computational time and resources during inference. To handle this, knowledge distillation offers a promising approach that compresses models by enabling a smaller student model to learn from a larger teacher model (a generic sketch of this idea follows this entry).

Autonomous Driving Knowledge Distillation +1
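
The entry above leans on standard knowledge distillation. Below is a minimal, generic sketch of the soft-label distillation loss in the style of Hinton et al. (2015), not the paper's specific planner-compression method; the temperature value and the toy logits are illustrative assumptions.

```python
import numpy as np

def softmax(z, t=1.0):
    # Temperature-scaled softmax; a higher t softens the distribution.
    z = z / t
    z = z - z.max(axis=-1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, t=4.0):
    # KL(teacher || student) on temperature-softened outputs,
    # scaled by t^2 as in the classic soft-label recipe.
    p = softmax(teacher_logits, t)
    q = softmax(student_logits, t)
    kl = np.sum(p * (np.log(p + 1e-12) - np.log(q + 1e-12)), axis=-1)
    return (t ** 2) * kl.mean()

# Toy example: a batch of 2 samples over 3 classes (values are made up).
teacher = np.array([[2.0, 0.5, -1.0], [0.1, 1.5, 0.3]])
student = np.array([[1.0, 0.2, -0.5], [0.0, 0.8, 0.2]])
print(distillation_loss(student, teacher))
```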

Learning to Generate Parameters of ConvNets for Unseen Image Data

no code implementations18 Oct 2023 Shiye Wang, Kaituo Feng, Changsheng Li, Ye Yuan, Guoren Wang

Typical Convolutional Neural Networks (ConvNets) depend heavily on large amounts of image data and resort to an iterative optimization algorithm (e.g., SGD or Adam) to learn network parameters, which makes training very time- and resource-intensive.

DREAM: Domain-free Reverse Engineering Attributes of Black-box Model

no code implementations20 Jul 2023 Rongqing Li, Jiaqi Yu, Changsheng Li, Wenhan Luo, Ye Yuan, Guoren Wang

There is a crucial limitation: these works assume that the dataset used to train the target model is known beforehand and leverage it for the model attribute attack.

Attribute

Shared Growth of Graph Neural Networks via Prompted Free-direction Knowledge Distillation

no code implementations2 Jul 2023 Kaituo Feng, Yikun Miao, Changsheng Li, Ye Yuan, Guoren Wang

Knowledge distillation (KD) has been shown to be effective in boosting the performance of graph neural networks (GNNs), where the typical objective is to distill knowledge from a deeper teacher GNN into a shallower student GNN.

Knowledge Distillation Transfer Learning

Detecting Adversarial Data by Probing Multiple Perturbations Using Expected Perturbation Score

1 code implementation25 May 2023 Shuhai Zhang, Feng Liu, Jiahao Yang, Yifan Yang, Changsheng Li, Bo Han, Mingkui Tan

Last, we propose an EPS-based adversarial detection (EPS-AD) method, in which we develop EPS-based maximum mean discrepancy (MMD) as a metric to measure the discrepancy between the test sample and natural samples.
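
The abstract names maximum mean discrepancy (MMD) as the detection metric. Below is a minimal sketch of the plain (biased) squared-MMD estimate with a Gaussian kernel between two sample sets, not the paper's EPS-specific variant; the bandwidth sigma and the toy data are assumptions.

```python
import numpy as np

def gaussian_kernel(x, y, sigma=1.0):
    # Pairwise Gaussian (RBF) kernel matrix between rows of x and y.
    d2 = ((x[:, None, :] - y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def mmd2(x, y, sigma=1.0):
    # Biased squared-MMD estimate: E[k(x,x')] + E[k(y,y')] - 2 E[k(x,y)].
    kxx = gaussian_kernel(x, x, sigma).mean()
    kyy = gaussian_kernel(y, y, sigma).mean()
    kxy = gaussian_kernel(x, y, sigma).mean()
    return kxx + kyy - 2 * kxy

rng = np.random.default_rng(0)
natural = rng.normal(0.0, 1.0, size=(200, 8))
shifted = rng.normal(0.5, 1.0, size=(200, 8))  # stands in for adversarial scores
# Near zero for matched distributions, clearly larger under a shift.
print(mmd2(natural, natural[:100]), mmd2(natural, shifted))
```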

Towards Open Temporal Graph Neural Networks

1 code implementation27 Mar 2023 Kaituo Feng, Changsheng Li, Xiaolu Zhang, Jun Zhou

This brings two major challenges to existing dynamic GNN methods: (i) how to dynamically propagate appropriate information in an open temporal graph, where new-class nodes are often linked to old-class nodes.

class-incremental learning Class Incremental Learning +1

Robust Knowledge Adaptation for Dynamic Graph Neural Networks

1 code implementation22 Jul 2022 Hanjie Li, Changsheng Li, Kaituo Feng, Ye Yuan, Guoren Wang, Hongyuan Zha

By this means, we can adaptively propagate knowledge to other nodes for learning robust node embedding representations.

reinforcement-learning Reinforcement Learning +1

Multi-Prior Learning via Neural Architecture Search for Blind Face Restoration

1 code implementation28 Jun 2022 Yanjiang Yu, Puyang Zhang, Kaihao Zhang, Wenhan Luo, Changsheng Li, Ye Yuan, Guoren Wang

To this end, we propose a Face Restoration Searching Network (FRSNet) to adaptively search the suitable feature extraction architecture within our specified search space, which can directly contribute to the restoration quality.

Blind Face Restoration Neural Architecture Search

FreeKD: Free-direction Knowledge Distillation for Graph Neural Networks

no code implementations14 Jun 2022 Kaituo Feng, Changsheng Li, Ye Yuan, Guoren Wang

Knowledge distillation (KD) has demonstrated its effectiveness in boosting the performance of graph neural networks (GNNs), where the goal is to distill knowledge from a deeper teacher GNN into a shallower student GNN.

Knowledge Distillation reinforcement-learning +2

Blind Face Restoration: Benchmark Datasets and a Baseline Model

2 code implementations8 Jun 2022 Puyang Zhang, Kaihao Zhang, Wenhan Luo, Changsheng Li, Guoren Wang

To address this problem, we first synthesize two blind face restoration benchmark datasets called EDFace-Celeb-1M (BFR128) and EDFace-Celeb-150K (BFR512).

Blind Face Restoration

Self-Supervised Information Bottleneck for Deep Multi-View Subspace Clustering

no code implementations26 Apr 2022 Shiye Wang, Changsheng Li, Yanming Li, Ye Yuan, Guoren Wang

Inheriting the advantages of the information bottleneck, SIB-MSC can learn a latent space for each view that captures the information shared across the latent representations of different views, removing superfluous information from the view itself while retaining sufficient information for the latent representations of the other views (a schematic objective follows this entry).

Clustering Multi-view Subspace Clustering
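
For orientation, here is a schematic information-bottleneck-style objective that matches one reading of the sentence above; it is not the paper's exact formulation, and the trade-off weight beta, the per-view latent Z_v, and the view input X_v are notational assumptions.

```latex
% Per-view IB-style objective (a reading of the abstract, not the paper's
% exact loss): each latent Z_v keeps what is predictive of the other views'
% latents while compressing away view-specific superfluous information.
\max_{Z_v} \; \sum_{w \neq v} I(Z_v; Z_w) \;-\; \beta \, I(Z_v; X_v)
```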

Deep Unsupervised Active Learning on Learnable Graphs

no code implementations8 Nov 2021 Handong Ma, Changsheng Li, Xinchu Shi, Ye Yuan, Guoren Wang

To make the learnt graph structure more stable and effective, we take the $k$-nearest neighbor graph as a prior and learn a relation propagation graph structure (a minimal kNN-graph sketch follows this entry).

Active Learning Graph structure learning +2
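
The abstract uses a $k$-nearest neighbor graph as a structural prior. Below is a minimal sketch of building the symmetric kNN adjacency matrix that such a prior typically starts from; the Euclidean distance and the value of k are assumptions, and the learned relation propagation graph from the paper is not reproduced.

```python
import numpy as np

def knn_graph(x, k=5):
    # Symmetric k-nearest-neighbor adjacency used as a structural prior.
    d2 = ((x[:, None, :] - x[None, :, :]) ** 2).sum(-1)
    np.fill_diagonal(d2, np.inf)          # exclude self-loops
    idx = np.argsort(d2, axis=1)[:, :k]   # k closest neighbors per node
    a = np.zeros_like(d2)
    rows = np.repeat(np.arange(len(x)), k)
    a[rows, idx.ravel()] = 1.0
    return np.maximum(a, a.T)             # symmetrize

x = np.random.default_rng(0).normal(size=(10, 4))
print(knn_graph(x, k=3).sum(axis=1))      # node degrees (>= k after symmetrization)
```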

Action Shuffling for Weakly Supervised Temporal Localization

no code implementations10 May 2021 Xiao-Yu Zhang, Haichao Shi, Changsheng Li, Xinchu Shi

Weakly supervised action localization is a challenging task with extensive applications, which aims to identify actions and the corresponding temporal intervals with only video-level annotations available.

Action Localization Temporal Localization +1

Beyond Monocular Deraining: Parallel Stereo Deraining Network Via Semantic Prior

no code implementations9 May 2021 Kaihao Zhang, Wenhan Luo, Yanjiang Yu, Wenqi Ren, Fang Zhao, Changsheng Li, Lin Ma, Wei Liu, Hongdong Li

We first use a coarse deraining network to reduce the rain streaks on the input images, and then adopt a pre-trained semantic segmentation network to extract semantic features from the coarse derained image.

Benchmarking Rain Removal +1

Deep Dense Multi-scale Network for Snow Removal Using Semantic and Geometric Priors

no code implementations21 Mar 2021 Kaihao Zhang, Rongqing Li, Yanjiang Yu, Wenhan Luo, Changsheng Li, Hongdong Li

Images captured on snowy days suffer from noticeable degradation of scene visibility, which degrades the performance of current vision-based intelligent systems.

Image Restoration Snow Removal

Semi-supervised Active Learning for Instance Segmentation via Scoring Predictions

no code implementations9 Dec 2020 Jun Wang, Shaoguo Wen, Kaixing Chen, Jianghua Yu, Xin Zhou, Peng Gao, Changsheng Li, Guotong Xie

Active learning generally involves querying the most representative samples for human labeling, which has been widely studied in many fields such as image classification and object detection.

Active Learning Image Classification +6

On Deep Unsupervised Active Learning

no code implementations28 Jul 2020 Changsheng Li, Handong Ma, Zhao Kang, Ye Yuan, Xiao-Yu Zhang, Guoren Wang

Unsupervised active learning has attracted increasing attention in recent years; its goal is to select representative samples for human annotation in an unsupervised setting.

Active Learning Decoder

Reconstruction Regularized Deep Metric Learning for Multi-label Image Classification

no code implementations27 Jul 2020 Changsheng Li, Chong Liu, Lixin Duan, Peng Gao, Kai Zheng

In this paper, we present a novel deep metric learning method to tackle the multi-label image classification problem.

General Classification Metric Learning +1

Characterizing Driving Styles with Deep Learning

2 code implementations13 Jul 2016 Weishan Dong, Jian Li, Renjie Yao, Changsheng Li, Ting Yuan, Lanjun Wang

Characterizing driving styles of human drivers using vehicle sensor data, e.g., GPS, is an interesting research problem and an important real-world requirement from the automotive industry.

Autonomous Driving Deep Learning +1

Self-Paced Multi-Task Learning

no code implementations6 Apr 2016 Changsheng Li, Junchi Yan, Fan Wei, Weishan Dong, Qingshan Liu, Hongyuan Zha

In this paper, we propose a novel multi-task learning (MTL) framework, called Self-Paced Multi-Task Learning (SPMTL).

Multi-Task Learning

A Self-Paced Regularization Framework for Multi-Label Learning

no code implementations22 Mar 2016 Changsheng Li, Fan Wei, Junchi Yan, Weishan Dong, Qingshan Liu, Xiao-Yu Zhang, Hongyuan Zha

In this paper, we propose a novel multi-label learning framework, called Multi-Label Self-Paced Learning (MLSPL), in an attempt to incorporate the self-paced learning strategy into the multi-label learning regime.

Multi-Label Learning

Joint Active Learning with Feature Selection via CUR Matrix Decomposition

no code implementations4 Mar 2015 Changsheng Li, Xiangfeng Wang, Weishan Dong, Junchi Yan, Qingshan Liu, Hongyuan Zha

In particular, our method runs in one shot, without iterative sample selection for progressive labeling (a leverage-score-based sketch follows this entry).

Active Learning feature selection
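
The classic way to realize one-shot joint sample and feature selection with CUR is via statistical leverage scores from a truncated SVD. The sketch below follows that generic recipe (Drineas and Mahoney style), not necessarily the paper's exact optimization; k, n_rows, and n_cols are illustrative parameters.

```python
import numpy as np

def leverage_scores(v, k):
    # Normalized statistical leverage scores from the top-k singular vectors.
    return (v[:, :k] ** 2).sum(axis=1) / k

def cur_select(x, k=5, n_rows=10, n_cols=4):
    # One-shot joint selection: rows ~ samples to label, cols ~ features to keep.
    u, s, vt = np.linalg.svd(x, full_matrices=False)
    row_scores = leverage_scores(u, k)      # sample importance
    col_scores = leverage_scores(vt.T, k)   # feature importance
    rows = np.argsort(row_scores)[::-1][:n_rows]
    cols = np.argsort(col_scores)[::-1][:n_cols]
    return rows, cols

x = np.random.default_rng(0).normal(size=(100, 20))
samples, features = cur_select(x, k=5)
print(sorted(samples), sorted(features))
```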

Dynamic Structure Embedded Online Multiple-Output Regression for Stream Data

no code implementations18 Dec 2014 Changsheng Li, Fan Wei, Weishan Dong, Qingshan Liu, Xiangfeng Wang, Xin Zhang

MORES can dynamically learn the structure of coefficient changes at each update step to facilitate the model's continuous refinement (a minimal online-update sketch follows this entry).

regression
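
MORES's distinguishing feature, learning the structure of how the coefficients change over time, is not reproduced here; the sketch below only shows the plain online multiple-output update that such a method builds on. The learning rate, the simulated stream, and all names are illustrative assumptions.

```python
import numpy as np

def online_step(w, x, y, lr=0.05):
    # One online update of a multiple-output linear model Y ~ X W:
    # a gradient step on the squared error for a single streaming sample.
    err = x @ w - y                 # (outputs,) residual for this sample
    w -= lr * np.outer(x, err)      # rank-1 coefficient update
    return w

rng = np.random.default_rng(0)
w_true = rng.normal(size=(6, 3))    # 6 features -> 3 outputs
w = np.zeros((6, 3))
for _ in range(2000):               # simulated data stream
    x = rng.normal(size=6)
    y = x @ w_true + 0.01 * rng.normal(size=3)
    w = online_step(w, x, y)
print(np.abs(w - w_true).max())     # should be small after the stream
```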
