no code implementations • 10 Dec 2024 • Rongqing Li, Changsheng Li, Yuhang Li, Hanjie Li, Yi Chen, Dongchun Ren, Ye Yuan, Guoren Wang
Trajectory prediction of agents is crucial for the safety of autonomous vehicles, yet previous approaches usually rely on a sufficiently long observed trajectory to predict an agent's future trajectory.
no code implementations • 8 Dec 2024 • Rongqing Li, Jiaqi Yu, Changsheng Li, Wenhan Luo, Ye Yuan, Guoren Wang
However, it is difficult to access the training dataset of the target black-box model in reality.
1 code implementation • 17 Oct 2024 • Haoran Hao, Jiaming Han, Changsheng Li, Yu-Feng Li, Xiangyu Yue
To further improve generation quality and alignment with user-specific information, we design a pipeline for data collection and create a specialized dataset for personalized training of MLLMs.
1 code implementation • 2 Oct 2024 • Xi Chen, Kaituo Feng, Changsheng Li, Xunhao Lai, Xiangyu Yue, Ye Yuan, Guoren Wang
In this way, we can preserve the low-rank constraint in the optimizer while achieving full-rank training for better performance.
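The abstract only sketches the idea; the general mechanism of keeping optimizer state in a low-rank subspace while still applying full-rank weight updates can be illustrated with a toy momentum step (a minimal sketch under stated assumptions, not the paper's algorithm; `low_rank_step` and its SVD-based projection are illustrative choices):

```python
import numpy as np

def low_rank_step(weight, grad, rank, lr=0.01, state=None):
    """One toy optimizer step: momentum lives in a rank-r subspace
    of the gradient, but the weight update is applied at full rank."""
    # Project the gradient onto its top-r left singular subspace.
    U, _, _ = np.linalg.svd(grad, full_matrices=False)
    P = U[:, :rank]                      # (m, r) projection basis
    g_low = P.T @ grad                   # low-rank gradient, shape (r, n)

    # The optimizer state (momentum here) is stored in the low-rank
    # space, so its memory cost is r*n instead of m*n.
    if state is None:
        state = np.zeros_like(g_low)
    state = 0.9 * state + g_low

    # Project back to the full space before updating the weights.
    weight = weight - lr * (P @ state)
    return weight, state

W = np.random.randn(64, 32)
G = np.random.randn(64, 32)
W, s = low_rank_step(W, G, rank=4)
```

The memory saving comes entirely from the state tensor: for a 64x32 layer with rank 4, the momentum buffer shrinks from 2048 to 128 entries, while the weight update itself remains 64x32.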
no code implementations • 25 May 2024 • Kaituo Feng, Changsheng Li, Xiaolu Zhang, Jun Zhou, Ye Yuan, Guoren Wang
Chain-of-thought distillation is a powerful technique for transferring reasoning abilities from large language models (LLMs) to smaller student models.
1 code implementation • CVPR 2024 • Kaituo Feng, Changsheng Li, Dongchun Ren, Ye Yuan, Guoren Wang
However, oversized neural networks are impractical to deploy on resource-constrained systems, as they unavoidably demand more computational time and resources during inference. To handle this, knowledge distillation offers a promising approach that compresses models by enabling a smaller student model to learn from a larger teacher model.
no code implementations • 18 Oct 2023 • Shiye Wang, Kaituo Feng, Changsheng Li, Ye Yuan, Guoren Wang
Typical Convolutional Neural Networks (ConvNets) depend heavily on large amounts of image data and resort to an iterative optimization algorithm (e.g., SGD or Adam) to learn network parameters, which makes training very time- and resource-intensive.
no code implementations • 20 Jul 2023 • Rongqing Li, Jiaqi Yu, Changsheng Li, Wenhan Luo, Ye Yuan, Guoren Wang
There is a crucial limitation: these works assume that the dataset used to train the target model is known beforehand and leverage this dataset for the model attribute attack.
no code implementations • 2 Jul 2023 • Kaituo Feng, Yikun Miao, Changsheng Li, Ye Yuan, Guoren Wang
Knowledge distillation (KD) has been shown to be effective in boosting the performance of graph neural networks (GNNs), where the typical objective is to distill knowledge from a deeper teacher GNN into a shallower student GNN.
1 code implementation • 25 May 2023 • Shuhai Zhang, Feng Liu, Jiahao Yang, Yifan Yang, Changsheng Li, Bo Han, Mingkui Tan
Last, we propose an EPS-based adversarial detection (EPS-AD) method, in which we develop EPS-based maximum mean discrepancy (MMD) as a metric to measure the discrepancy between the test sample and natural samples.
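The detection metric the abstract mentions builds on maximum mean discrepancy; a minimal sketch of plain Gaussian-kernel MMD (the generic statistic, not the paper's EPS-based variant; the synthetic "natural" and "shifted" batches below are illustrative stand-ins) looks like this:

```python
import numpy as np

def gaussian_kernel(x, y, sigma=1.0):
    # Pairwise RBF kernel matrix between the rows of x and y.
    d2 = ((x[:, None, :] - y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def mmd2(x, y, sigma=1.0):
    """Biased estimate of the squared maximum mean discrepancy."""
    return (gaussian_kernel(x, x, sigma).mean()
            + gaussian_kernel(y, y, sigma).mean()
            - 2 * gaussian_kernel(x, y, sigma).mean())

rng = np.random.default_rng(0)
natural = rng.normal(0.0, 1.0, size=(200, 8))  # stand-in for natural samples
same = rng.normal(0.0, 1.0, size=(200, 8))     # another natural batch
shifted = rng.normal(2.0, 1.0, size=(200, 8))  # stand-in for perturbed samples
```

A detector of this shape flags a test batch when its MMD to a reference set of natural samples exceeds a calibrated threshold; for the synthetic data above, `mmd2(natural, shifted)` is clearly larger than `mmd2(natural, same)`.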
1 code implementation • 27 Mar 2023 • Kaituo Feng, Changsheng Li, Xiaolu Zhang, Jun Zhou
This poses two major challenges to existing dynamic GNN methods: (i) how to dynamically propagate appropriate information in an open temporal graph, where new-class nodes are often linked to old-class nodes.
1 code implementation • 22 Jul 2022 • Hanjie Li, Changsheng Li, Kaituo Feng, Ye Yuan, Guoren Wang, Hongyuan Zha
In this way, we can adaptively propagate knowledge to other nodes to learn robust node embedding representations.
1 code implementation • 28 Jun 2022 • Yanjiang Yu, Puyang Zhang, Kaihao Zhang, Wenhan Luo, Changsheng Li, Ye Yuan, Guoren Wang
To this end, we propose a Face Restoration Searching Network (FRSNet) to adaptively search the suitable feature extraction architecture within our specified search space, which can directly contribute to the restoration quality.
no code implementations • 14 Jun 2022 • Kaituo Feng, Changsheng Li, Ye Yuan, Guoren Wang
Knowledge distillation (KD) has demonstrated its effectiveness in boosting the performance of graph neural networks (GNNs), where the goal is to distill knowledge from a deeper teacher GNN into a shallower student GNN.
2 code implementations • 8 Jun 2022 • Puyang Zhang, Kaihao Zhang, Wenhan Luo, Changsheng Li, Guoren Wang
To address this problem, we first synthesize two blind face restoration benchmark datasets called EDFace-Celeb-1M (BFR128) and EDFace-Celeb-150K (BFR512).
no code implementations • 26 Apr 2022 • Shiye Wang, Changsheng Li, Yanming Li, Ye Yuan, Guoren Wang
Inheriting the advantages of the information bottleneck, SIB-MSC learns a latent space for each view that captures information common to the latent representations of different views, removing superfluous information from the view itself while retaining information sufficient for the latent representations of the other views.
no code implementations • 8 Nov 2021 • Handong Ma, Changsheng Li, Xinchu Shi, Ye Yuan, Guoren Wang
To make the learnt graph structure more stable and effective, we take the $k$-nearest neighbor graph as a prior and learn a relation propagation graph structure.
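The $k$-nearest neighbor prior mentioned above is a standard construction; a minimal sketch of building a symmetric k-NN adjacency matrix as a structural prior (illustrative only; `knn_graph` is a hypothetical helper, not the paper's code):

```python
import numpy as np

def knn_graph(X, k=3):
    """Symmetric k-nearest-neighbor adjacency over the rows of X,
    usable as a structural prior for graph learning."""
    # Pairwise squared Euclidean distances.
    d = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    np.fill_diagonal(d, np.inf)          # exclude self-loops
    A = np.zeros_like(d)
    idx = np.argsort(d, axis=1)[:, :k]   # k closest neighbors per node
    rows = np.repeat(np.arange(len(X)), k)
    A[rows, idx.ravel()] = 1.0
    return np.maximum(A, A.T)            # symmetrize (mutual OR)

X = np.random.randn(10, 4)
A = knn_graph(X, k=3)
```

A learned propagation graph can then be regularized toward `A` (e.g., by penalizing edges absent from the k-NN prior), which is one common way such priors stabilize graph structure learning.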
no code implementations • 28 Oct 2021 • Yanming Li, Changsheng Li, Shiye Wang, Ye Yuan, Guoren Wang
In this paper, we propose a new deep subspace clustering framework, motivated by energy-based models.
no code implementations • 10 May 2021 • Xiao-Yu Zhang, Haichao Shi, Changsheng Li, Xinchu Shi
Weakly supervised action localization is a challenging task with extensive applications, which aims to identify actions and the corresponding temporal intervals with only video-level annotations available.
no code implementations • 9 May 2021 • Kaihao Zhang, Wenhan Luo, Yanjiang Yu, Wenqi Ren, Fang Zhao, Changsheng Li, Lin Ma, Wei Liu, Hongdong Li
We first use a coarse deraining network to reduce the rain streaks on the input images, and then adopt a pre-trained semantic segmentation network to extract semantic features from the coarse derained image.
no code implementations • 21 Mar 2021 • Kaihao Zhang, Rongqing Li, Yanjiang Yu, Wenhan Luo, Changsheng Li, Hongdong Li
Images captured on snowy days suffer from noticeably degraded scene visibility, which degrades the performance of current vision-based intelligent systems.
no code implementations • 9 Dec 2020 • Jun Wang, Shaoguo Wen, Kaixing Chen, Jianghua Yu, Xin Zhou, Peng Gao, Changsheng Li, Guotong Xie
Active learning generally involves querying the most representative samples for human labeling, which has been widely studied in many fields such as image classification and object detection.
no code implementations • 28 Jul 2020 • Changsheng Li, Handong Ma, Zhao Kang, Ye Yuan, Xiao-Yu Zhang, Guoren Wang
Unsupervised active learning has attracted increasing attention in recent years, where the goal is to select representative samples in an unsupervised setting for human annotation.
no code implementations • 27 Jul 2020 • Changsheng Li, Chong Liu, Lixin Duan, Peng Gao, Kai Zheng
In this paper, we present a novel deep metric learning method to tackle the multi-label image classification problem.
no code implementations • 27 Nov 2019 • Xiao-Yu Zhang, Changsheng Li, Haichao Shi, Xiaobin Zhu, Peng Li, Jing Dong
The point process is a solid framework for modeling sequential data, such as videos, by exploring the underlying relevance.
1 code implementation • 11 Mar 2019 • Zhao Kang, Yiwei Lu, Yuanzhang Su, Changsheng Li, Zenglin Xu
Data similarity is a key concept in many data-driven applications.
no code implementations • 20 Feb 2019 • Xiao-Yu Zhang, Haichao Shi, Changsheng Li, Kai Zheng, Xiaobin Zhu, Lixin Duan
Action recognition in videos has attracted a lot of attention in the past decade.
1 code implementation • 5 Jan 2017 • Weishan Dong, Ting Yuan, Kai Yang, Changsheng Li, Shilei Zhang
In this paper, we study learning generalized driving style representations from automobile GPS trip data.
2 code implementations • 13 Jul 2016 • Weishan Dong, Jian Li, Renjie Yao, Changsheng Li, Ting Yuan, Lanjun Wang
Characterizing the driving styles of human drivers using vehicle sensor data, e.g., GPS, is an interesting research problem and an important real-world requirement from the automotive industry.
no code implementations • 6 Apr 2016 • Changsheng Li, Junchi Yan, Fan Wei, Weishan Dong, Qingshan Liu, Hongyuan Zha
In this paper, we propose a novel multi-task learning (MTL) framework, called Self-Paced Multi-Task Learning (SPMTL).
no code implementations • 22 Mar 2016 • Changsheng Li, Fan Wei, Junchi Yan, Weishan Dong, Qingshan Liu, Xiao-Yu Zhang, Hongyuan Zha
In this paper, we propose a novel multi-label learning framework, called Multi-Label Self-Paced Learning (MLSPL), in an attempt to incorporate the self-paced learning strategy into the multi-label learning regime.
no code implementations • 4 Mar 2015 • Changsheng Li, Xiangfeng Wang, Weishan Dong, Junchi Yan, Qingshan Liu, Hongyuan Zha
In particular, our method runs in one shot, without the procedure of iterative sample selection for progressive labeling.
no code implementations • 18 Dec 2014 • Changsheng Li, Fan Wei, Weishan Dong, Qingshan Liu, Xiangfeng Wang, Xin Zhang
MORES can \emph{dynamically} learn the structure of the coefficient changes in each update step to facilitate the model's continuous refinement.
no code implementations • 16 Dec 2014 • Changsheng Li, Qingshan Liu, Weishan Dong, Xin Zhang, Lin Yang
In this paper, we propose a new max-margin based discriminative feature learning method.