Search Results for author: Kaituo Feng

Found 6 papers, 3 papers with code

On the Road to Portability: Compressing End-to-End Motion Planner for Autonomous Driving

1 code implementation • 2 Mar 2024 • Kaituo Feng, Changsheng Li, Dongchun Ren, Ye Yuan, Guoren Wang

However, the oversized neural networks render them impractical for deployment on resource-constrained systems, since they unavoidably require more computational time and resources during inference. To address this, knowledge distillation offers a promising approach that compresses models by enabling a smaller student model to learn from a larger teacher model. (An illustrative sketch of this teacher-student setup follows below.)

Autonomous Driving • Knowledge Distillation • +1
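
As a rough illustration of the teacher-student compression setup described above (not the paper's actual planner-distillation pipeline), a standard knowledge-distillation objective mixes a hard-label cross-entropy term with a soft-target KL term computed from the teacher's temperature-scaled logits. The function name, temperature, and mixing weight below are illustrative assumptions, a minimal sketch rather than the paper's method.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.5):
    """Generic KD objective: hard-label cross-entropy plus a soft-target
    KL term between temperature-scaled teacher and student distributions.
    T (temperature) and alpha (mixing weight) are illustrative defaults."""
    hard = F.cross_entropy(student_logits, labels)
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)  # rescale so soft-target gradients stay comparable to hard ones
    return alpha * hard + (1.0 - alpha) * soft

# Toy usage with random tensors standing in for the two models' outputs.
student_logits = torch.randn(8, 10, requires_grad=True)
teacher_logits = torch.randn(8, 10)
labels = torch.randint(0, 10, (8,))
loss = distillation_loss(student_logits, teacher_logits, labels)
loss.backward()
```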

Learning to Generate Parameters of ConvNets for Unseen Image Data

no code implementations • 18 Oct 2023 • Shiye Wang, Kaituo Feng, Changsheng Li, Ye Yuan, Guoren Wang

Typical Convolutional Neural Networks (ConvNets) depend heavily on large amounts of image data and resort to an iterative optimization algorithm (e.g., SGD or Adam) to learn network parameters, which makes training very time- and resource-intensive.
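
To make concrete the contrast with iterative SGD/Adam training, the sketch below shows a tiny hypernetwork that maps a dataset-level embedding to the weights of a single conv layer in one forward pass, so the generated parameters can be applied directly to unseen images. This is a minimal sketch of the general parameter-generation idea under assumed names and dimensions, not the architecture proposed in the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ConvParamGenerator(nn.Module):
    """Toy hypernetwork: maps a dataset embedding to the weights and bias
    of one 3x3 conv layer, instead of learning them by SGD/Adam."""
    def __init__(self, embed_dim=64, in_ch=3, out_ch=16, k=3):
        super().__init__()
        self.in_ch, self.out_ch, self.k = in_ch, out_ch, k
        self.weight_head = nn.Linear(embed_dim, out_ch * in_ch * k * k)
        self.bias_head = nn.Linear(embed_dim, out_ch)

    def forward(self, dataset_embedding, images):
        w = self.weight_head(dataset_embedding).view(self.out_ch, self.in_ch, self.k, self.k)
        b = self.bias_head(dataset_embedding)
        # Apply the generated parameters directly to the unseen images.
        return F.conv2d(images, w, b, padding=self.k // 2)

# Toy usage: one embedding summarising a dataset, a batch of unseen images.
gen = ConvParamGenerator()
embedding = torch.randn(64)
images = torch.randn(4, 3, 32, 32)
features = gen(embedding, images)  # shape (4, 16, 32, 32)
```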

Shared Growth of Graph Neural Networks via Prompted Free-direction Knowledge Distillation

no code implementations • 2 Jul 2023 • Kaituo Feng, Yikun Miao, Changsheng Li, Ye Yuan, Guoren Wang

Knowledge distillation (KD) has been shown to be effective in boosting the performance of graph neural networks (GNNs), where the typical objective is to distill knowledge from a deeper teacher GNN into a shallower student GNN.

Knowledge Distillation • Transfer Learning

Towards Open Temporal Graph Neural Networks

1 code implementation • 27 Mar 2023 • Kaituo Feng, Changsheng Li, Xiaolu Zhang, Jun Zhou

This brings two major challenges to existing dynamic GNN methods: (i) how to dynamically propagate appropriate information in an open temporal graph, where new class nodes are often linked to old class nodes.

Class Incremental Learning • Incremental Learning

Robust Knowledge Adaptation for Dynamic Graph Neural Networks

1 code implementation • 22 Jul 2022 • Hanjie Li, Changsheng Li, Kaituo Feng, Ye Yuan, Guoren Wang, Hongyuan Zha

In this way, we can adaptively propagate knowledge to other nodes to learn robust node embedding representations. (An illustrative sketch of such adaptive propagation follows below.)

reinforcement-learning • Reinforcement Learning (RL)
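
To give a concrete feel for "adaptively propagating knowledge to other nodes", the sketch below uses a learned per-edge gate to control how much of a neighbour's embedding flows into each node. It is a simplified stand-in, not the paper's RL-based knowledge-adaptation mechanism; the class name, dimensions, and toy graph are assumptions.

```python
import torch
import torch.nn as nn

class GatedPropagation(nn.Module):
    """Toy adaptive propagation: a learned gate on each edge decides how much
    of a neighbour's embedding flows into the target node. A stand-in for a
    learned knowledge-adaptation policy, kept deliberately simple."""
    def __init__(self, dim=32):
        super().__init__()
        self.gate = nn.Sequential(nn.Linear(2 * dim, 1), nn.Sigmoid())
        self.update = nn.Linear(2 * dim, dim)

    def forward(self, x, edge_index):
        src, dst = edge_index                                     # each of shape (E,)
        g = self.gate(torch.cat([x[src], x[dst]], dim=-1))        # per-edge gate, (E, 1)
        msgs = torch.zeros_like(x)
        msgs.index_add_(0, dst, g * x[src])                       # gated aggregation per node
        return torch.tanh(self.update(torch.cat([x, msgs], dim=-1)))

# Toy usage: 5 nodes with 32-dim embeddings, 4 directed edges.
x = torch.randn(5, 32)
edge_index = torch.tensor([[0, 1, 2, 3], [1, 2, 3, 4]])
prop = GatedPropagation()
out = prop(x, edge_index)  # updated embeddings, shape (5, 32)
```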

FreeKD: Free-direction Knowledge Distillation for Graph Neural Networks

no code implementations • 14 Jun 2022 • Kaituo Feng, Changsheng Li, Ye Yuan, Guoren Wang

Knowledge distillation (KD) has demonstrated its effectiveness in boosting the performance of graph neural networks (GNNs), where its goal is to distill knowledge from a deeper teacher GNN into a shallower student GNN. (An illustrative sketch of a free-direction exchange follows below.)

Knowledge Distillation • reinforcement-learning • +2
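
The "free-direction" idea, where knowledge can flow between two GNNs in either direction instead of from a fixed teacher to a fixed student, can be illustrated with the toy step below: for each node, whichever model currently has the lower supervised loss acts as the node-wise teacher for the other. The paper learns this direction with reinforcement learning, so the heuristic, function name, and tensor shapes here are illustrative assumptions rather than the actual FreeKD algorithm.

```python
import torch
import torch.nn.functional as F

def free_direction_kd_step(logits_a, logits_b, labels, T=2.0):
    """One illustrative exchange step between two models' per-node logits.
    Direction is chosen per node by comparing supervised losses; FreeKD
    instead learns this choice with RL, so this is only a stand-in."""
    loss_a = F.cross_entropy(logits_a, labels, reduction="none")
    loss_b = F.cross_entropy(logits_b, labels, reduction="none")
    a_teaches = (loss_a < loss_b).float()  # 1 where model A is currently more reliable

    # Per-node KL terms in each direction (teacher side detached).
    kl_b_from_a = F.kl_div(F.log_softmax(logits_b / T, dim=-1),
                           F.softmax(logits_a.detach() / T, dim=-1),
                           reduction="none").sum(-1)
    kl_a_from_b = F.kl_div(F.log_softmax(logits_a / T, dim=-1),
                           F.softmax(logits_b.detach() / T, dim=-1),
                           reduction="none").sum(-1)

    distill = (a_teaches * kl_b_from_a + (1 - a_teaches) * kl_a_from_b).mean()
    supervised = loss_a.mean() + loss_b.mean()
    return supervised + distill * (T * T)

# Toy usage: per-node logits from two GNNs over 100 nodes and 7 classes.
logits_a = torch.randn(100, 7, requires_grad=True)
logits_b = torch.randn(100, 7, requires_grad=True)
labels = torch.randint(0, 7, (100,))
loss = free_direction_kd_step(logits_a, logits_b, labels)
loss.backward()
```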
