Search Results for author: Zhiyuan Wu

Found 16 papers, 7 papers with code

Privacy-Preserving Training-as-a-Service for On-Device Intelligence: Concept, Architectural Scheme, and Open Problems

no code implementations · 16 Apr 2024 · Zhiyuan Wu, Sheng Sun, Yuwei Wang, Min Liu, Bo Gao, Tianliu He, Wen Wang

On-device intelligence (ODI) enables artificial intelligence (AI) applications to run on end devices, providing real-time and customized AI services without relying on remote servers.

Federated Learning · Privacy Preserving +1

LIX: Implicitly Infusing Spatial Geometric Prior Knowledge into Visual Semantic Segmentation for Autonomous Driving

no code implementations · 13 Mar 2024 · Sicen Guo, Zhiyuan Wu, Qijun Chen, Ioannis Pitas, Rui Fan

We introduce the Learning to Infuse "X" (LIX) framework, with novel contributions in both logit distillation and feature distillation aspects.

Autonomous Driving · Knowledge Distillation +1

S$^3$M-Net: Joint Learning of Semantic Segmentation and Stereo Matching for Autonomous Driving

no code implementations · 21 Jan 2024 · Zhiyuan Wu, Yi Feng, Chuang-Wei Liu, Fisher Yu, Qijun Chen, Rui Fan

Hence, in this article, we introduce S$^3$M-Net, a novel joint learning framework developed to perform semantic segmentation and stereo matching simultaneously.

Autonomous Driving · Scene Understanding +2
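The S$^3$M-Net excerpt above describes joint learning of semantic segmentation and stereo matching. A rough sketch of what such a joint objective can look like is given below; the loss terms, weights, and tensor shapes are assumptions for illustration, not the paper's actual formulation.

```python
import torch

def joint_loss(seg_logits, seg_labels, disp_pred, disp_gt, w_seg=1.0, w_disp=1.0):
    """Illustrative joint objective: a weighted sum of a segmentation
    cross-entropy term and a disparity regression term, optimized together
    so both tasks share one backbone."""
    seg_term = torch.nn.functional.cross_entropy(seg_logits, seg_labels)
    disp_term = torch.nn.functional.smooth_l1_loss(disp_pred, disp_gt)
    return w_seg * seg_term + w_disp * disp_term

# toy usage with random tensors standing in for network outputs
seg_logits = torch.randn(2, 19, 64, 128)            # 19 semantic classes
seg_labels = torch.randint(0, 19, (2, 64, 128))
disp_pred = torch.rand(2, 1, 64, 128) * 192
disp_gt = torch.rand(2, 1, 64, 128) * 192
print(joint_loss(seg_logits, seg_labels, disp_pred, disp_gt))
```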

Logits Poisoning Attack in Federated Distillation

no code implementations · 8 Jan 2024 · Yuhan Tang, Zhiyuan Wu, Bo Gao, Tian Wen, Yuwei Wang, Sheng Sun

Federated Distillation (FD) is a novel and promising distributed machine learning paradigm, in which knowledge distillation is leveraged to enable more efficient and flexible cross-device knowledge transfer in federated learning.

Federated Learning · Knowledge Distillation +1

Federated Class-Incremental Learning with New-Class Augmented Self-Distillation

2 code implementations · 1 Jan 2024 · Zhiyuan Wu, Tianliu He, Sheng Sun, Yuwei Wang, Min Liu, Bo Gao, Xuefeng Jiang

Federated Learning (FL) enables collaborative model training among participants while guaranteeing the privacy of raw data.

Class Incremental Learning · Federated Learning +2
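The excerpt above refers to the standard federated learning loop that this paper's self-distillation builds on: clients train locally on raw data they never share, and the server aggregates their updates. A minimal FedAvg-style sketch follows; the function names and the toy least-squares model are assumptions, not the paper's method.

```python
import numpy as np

def local_update(weights, data, lr=0.1, epochs=1):
    """Illustrative local step: a client nudges the global weights toward
    its own data without ever sharing the raw samples."""
    x, y = data
    w = weights.copy()
    for _ in range(epochs):
        grad = x.T @ (x @ w - y) / len(y)   # least-squares gradient as a stand-in
        w -= lr * grad
    return w

def fedavg_round(global_w, client_data):
    """One communication round: clients train locally, then the server
    averages the returned weights, weighted by client dataset size."""
    updates, sizes = [], []
    for data in client_data:
        updates.append(local_update(global_w, data))
        sizes.append(len(data[1]))
    return np.average(updates, axis=0, weights=np.asarray(sizes, dtype=float))

# toy usage: two clients sharing a linear model
rng = np.random.default_rng(0)
clients = [(rng.normal(size=(32, 4)), rng.normal(size=32)) for _ in range(2)]
w = np.zeros(4)
for _ in range(5):
    w = fedavg_round(w, clients)
```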

Improving Communication Efficiency of Federated Distillation via Accumulating Local Updates

1 code implementation · 7 Dec 2023 · Zhiyuan Wu, Sheng Sun, Yuwei Wang, Min Liu, Tian Wen, Wen Wang

ALU drastically decreases the frequency of communication in federated distillation, thereby significantly reducing the communication overhead during the training process.

Federated Learning
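The ALU excerpt above cuts communication by accumulating local updates across several rounds before uploading. A hypothetical sketch of that accumulate-then-send pattern, assuming the exchanged knowledge is a vector of soft logits; the class name, buffer logic, and accumulation schedule are illustrative only.

```python
import numpy as np

class ALUClient:
    """Hypothetical client that accumulates soft logits over several local
    rounds and only communicates the accumulated average, reducing how
    often it talks to the server."""
    def __init__(self, accumulation_steps=4, num_classes=10):
        self.accumulation_steps = accumulation_steps
        self.buffer = np.zeros(num_classes)
        self.steps = 0

    def local_round(self, local_logits):
        self.buffer += local_logits            # accumulate instead of sending
        self.steps += 1
        if self.steps == self.accumulation_steps:
            payload = self.buffer / self.steps  # one upload replaces several
            self.buffer[:] = 0.0
            self.steps = 0
            return payload                      # communicate only now
        return None                             # skip communication this round

# usage: uploads happen only every fourth round
client = ALUClient()
for r in range(8):
    msg = client.local_round(np.random.default_rng(r).normal(size=10))
    if msg is not None:
        print(f"round {r}: upload accumulated knowledge")
```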

Agglomerative Federated Learning: Empowering Larger Model Training via End-Edge-Cloud Collaboration

1 code implementation · 1 Dec 2023 · Zhiyuan Wu, Sheng Sun, Yuwei Wang, Min Liu, Bo Gao, Quyang Pan, Tianliu He, Xuefeng Jiang

Federated Learning (FL) enables training Artificial Intelligence (AI) models over end devices without compromising their privacy.

Federated Learning

Federated Skewed Label Learning with Logits Fusion

no code implementations · 14 Nov 2023 · Yuwei Wang, Runhan Li, Hao Tan, Xuefeng Jiang, Sheng Sun, Min Liu, Bo Gao, Zhiyuan Wu

By fusing the logits of the two models, the private weak learner can capture the variance of different data, regardless of their category.

Federated Learning
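The excerpt above describes fusing the logits of a private weak learner with those of a second model. A minimal sketch of such a fusion, assuming a simple convex combination of logits followed by a softmax; the mixing weight alpha is hypothetical, not the paper's rule.

```python
import numpy as np

def fuse_logits(local_logits, global_logits, alpha=0.5):
    """Illustrative logit fusion: blend the local (weak) learner's logits
    with the other model's logits so the fused prediction reflects both
    the client's skewed data and broader class coverage."""
    fused = alpha * local_logits + (1.0 - alpha) * global_logits
    # convert to probabilities with a numerically stable softmax
    z = fused - fused.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

# toy usage: each model is confident on a different class
print(fuse_logits(np.array([4.0, 0.0, 0.0]), np.array([0.0, 0.0, 4.0])))
```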

Knowledge Distillation in Federated Edge Learning: A Survey

1 code implementation · 14 Jan 2023 · Zhiyuan Wu, Sheng Sun, Yuwei Wang, Min Liu, Xuefeng Jiang, Runhan Li, Bo Gao

The increasing demand for intelligent services and privacy protection of mobile and Internet of Things (IoT) devices motivates the wide application of Federated Edge Learning (FEL), in which devices collaboratively train on-device Machine Learning (ML) models without sharing their private data.

Knowledge Distillation

FedICT: Federated Multi-task Distillation for Multi-access Edge Computing

1 code implementation · 1 Jan 2023 · Zhiyuan Wu, Sheng Sun, Yuwei Wang, Min Liu, Quyang Pan, Xuefeng Jiang, Bo Gao

Federated Multi-task Learning (FMTL) is proposed to train related but personalized ML models for different devices; however, previous works suffer from excessive communication overhead during training and neglect the model heterogeneity among devices in MEC.

Edge-computing · Federated Learning +2

Semi-supervised Training for Knowledge Base Graph Self-attention Networks on Link Prediction

no code implementations · 3 Sep 2022 · Shuanglong Yao, Dechang Pi, Junfu Chen, Yufei Liu, Zhiyuan Wu

The task of link prediction aims to solve the problem of incomplete knowledge caused by the difficulty of collecting facts from the real world.

Link Prediction

Exploring the Distributed Knowledge Congruence in Proxy-data-free Federated Distillation

2 code implementations · 14 Apr 2022 · Zhiyuan Wu, Sheng Sun, Yuwei Wang, Min Liu, Quyang Pan, Junbo Zhang, Zeju Li, Qingxiang Liu

Federated distillation (FD) is proposed to simultaneously address the above two problems, which exchanges knowledge between the server and clients, supporting heterogeneous local models while significantly reducing communication overhead.

Federated Learning · Privacy Preserving
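The excerpt above describes the core federated distillation exchange: clients and server trade per-class soft predictions ("knowledge") rather than model weights, which is why local models can be heterogeneous and uploads stay small. A minimal sketch under that assumption; this is not the authors' congruence-based algorithm, and all names here are illustrative.

```python
import numpy as np

def server_aggregate(client_soft_labels):
    """Average the clients' per-class soft predictions into global
    knowledge that is broadcast back, instead of averaging model weights."""
    return np.mean(np.stack(client_soft_labels), axis=0)

def distillation_loss(student_probs, teacher_probs, eps=1e-12):
    """KL divergence pulling each heterogeneous local model toward the
    aggregated global knowledge."""
    t = np.clip(teacher_probs, eps, 1.0)
    s = np.clip(student_probs, eps, 1.0)
    return float(np.sum(t * (np.log(t) - np.log(s))))

# toy usage: two clients with different predictions for three classes
knowledge = server_aggregate([np.array([0.7, 0.2, 0.1]), np.array([0.3, 0.4, 0.3])])
print(distillation_loss(np.array([0.4, 0.4, 0.2]), knowledge))
```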

Meta-Learning-Based Deep Reinforcement Learning for Multiobjective Optimization Problems

1 code implementation · 6 May 2021 · Zizhen Zhang, Zhiyuan Wu, Hang Zhang, Jiahai Wang

When these problems are extended to multiobjective ones, it becomes difficult for the existing DRL approaches to flexibly and efficiently deal with multiple subproblems determined by weight decomposition of objectives.

Combinatorial Optimization · Meta-Learning +3
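The excerpt above refers to decomposing a multiobjective problem into subproblems via weight vectors. A minimal weighted-sum scalarization sketch illustrating what "weight decomposition of objectives" means here; the objective values and weights are toy numbers, not the paper's setup.

```python
import numpy as np

def decompose(objective_values, weight_vectors):
    """Weighted-sum scalarization: each weight vector defines one
    subproblem whose scalar cost is the weighted sum of the objectives."""
    return weight_vectors @ objective_values

# toy usage: two objectives (e.g. tour length and makespan), three subproblems
objectives = np.array([3.2, 1.5])
weights = np.array([[1.0, 0.0],
                    [0.5, 0.5],
                    [0.0, 1.0]])
print(decompose(objectives, weights))   # one scalar cost per subproblem
```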

Spirit Distillation: A Model Compression Method with Multi-domain Knowledge Transfer

no code implementations · 29 Apr 2021 · Zhiyuan Wu, Yu Jiang, Minghao Zhao, Chupeng Cui, Zongmin Yang, Xinhui Xue, Hong Qi

To further improve the robustness of the student, we extend SD to Enhanced Spirit Distillation (ESD), which exploits more comprehensive knowledge by introducing a proximity domain, similar to the target domain, for feature extraction.

General Knowledge · Knowledge Distillation +2

Spirit Distillation: Precise Real-time Semantic Segmentation of Road Scenes with Insufficient Data

no code implementations · 25 Mar 2021 · Zhiyuan Wu, Yu Jiang, Chupeng Cui, Zongmin Yang, Xinhui Xue, Hong Qi

Inspired by the ideas of Fine-tuning-based Transfer Learning (FTT) and feature-based knowledge distillation, we propose a new knowledge distillation method for cross-domain knowledge transfer and efficient training of networks with insufficient data, named Spirit Distillation (SD). It allows the student network to mimic the teacher network in extracting general features, so that a compact and accurate student network can be trained for real-time semantic segmentation of road scenes.

Autonomous Driving · Few-Shot Learning +4
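The Spirit Distillation excerpt above describes feature-based distillation in which the student mimics the teacher's general features. A rough sketch of a feature-mimicking loss, assuming an MSE penalty between adapted student features and frozen teacher features; the 1x1 adapter, channel counts, and names are hypothetical, not the paper's architecture.

```python
import torch
import torch.nn.functional as F

def feature_mimic_loss(student_feat, teacher_feat, adapter):
    """Feature-based distillation: train the student so its intermediate
    features match the (frozen) teacher's features. `adapter` is a
    hypothetical 1x1 conv that matches the channel counts."""
    return F.mse_loss(adapter(student_feat), teacher_feat.detach())

# illustrative usage with random tensors standing in for backbone features
student_feat = torch.randn(2, 64, 32, 32, requires_grad=True)
teacher_feat = torch.randn(2, 128, 32, 32)
adapter = torch.nn.Conv2d(64, 128, kernel_size=1)
loss = feature_mimic_loss(student_feat, teacher_feat, adapter)
loss.backward()
```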

Activation Map Adaptation for Effective Knowledge Distillation

no code implementations · 26 Oct 2020 · Zhiyuan Wu, Hong Qi, Yu Jiang, Minghao Zhao, Chupeng Cui, Zongmin Yang, Xinhui Xue

Model compression has become a recent trend due to the need to deploy neural networks on embedded and mobile devices.

Knowledge Distillation · Model Compression +1
