Search Results for author: Zhiyuan Wu

Found 6 papers, 1 paper with code

Semi-supervised Training for Knowledge Base Graph Self-attention Networks on Link Prediction

no code implementations • 3 Sep 2022 • Shuanglong Yao, Dechang Pi, Junfu Chen, Yufei Liu, Zhiyuan Wu

Link prediction aims to address the incompleteness of knowledge bases that arises from the difficulty of collecting facts from the real world.

Link Prediction

Exploring the Distributed Knowledge Congruence in Proxy-data-free Federated Distillation

no code implementations • 14 Apr 2022 • Zhiyuan Wu, Sheng Sun, Min Liu, Junbo Zhang, Yuwei Wang, Qingxiang Liu

Federated learning (FL) is a privacy-preserving machine learning paradigm in which the server periodically aggregates local model parameters from clients without assembling their private data.

Federated Learning, Privacy Preserving
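The snippet above describes the standard federated learning loop in which the server periodically averages client model parameters. Below is a minimal, illustrative sketch of that FedAvg-style aggregation step; it is not the paper's proxy-data-free distillation method, and the names (aggregate, client_params, client_sizes) are assumptions made for the example.

```python
# Sketch of server-side weighted averaging of client model parameters,
# as in FedAvg. This only illustrates the aggregation the snippet refers to;
# the paper itself studies a distillation-based alternative.

def aggregate(client_params, client_sizes):
    """Weighted average of per-client parameter dicts.

    client_params: list of {param_name: list_of_floats}
    client_sizes:  list of local dataset sizes used as aggregation weights
    """
    total = sum(client_sizes)
    averaged = {}
    for name in client_params[0]:
        averaged[name] = [
            sum(p[name][i] * n for p, n in zip(client_params, client_sizes)) / total
            for i in range(len(client_params[0][name]))
        ]
    return averaged

# Example: two clients, each holding one 3-element parameter vector.
clients = [{"w": [1.0, 2.0, 3.0]}, {"w": [3.0, 4.0, 5.0]}]
print(aggregate(clients, [10, 30]))  # -> {'w': [2.5, 3.5, 4.5]}
```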

Meta-Learning-Based Deep Reinforcement Learning for Multiobjective Optimization Problems

1 code implementation • 6 May 2021 • Zizhen Zhang, Zhiyuan Wu, Hang Zhang, Jiahai Wang

When combinatorial optimization problems are extended to multiobjective ones, it becomes difficult for existing DRL approaches to flexibly and efficiently handle the multiple subproblems produced by weight decomposition of the objectives.

Combinatorial Optimization, Meta-Learning, +2
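The snippet above refers to decomposing a multiobjective problem into scalar subproblems via weight decomposition of the objectives. The sketch below illustrates that decomposition with simple weighted-sum scalarization on toy data; it is not the paper's meta-learning DRL solver, and all names and values are illustrative assumptions.

```python
# Weight decomposition: each weight vector defines one scalar subproblem,
# and its minimizer contributes one point toward the Pareto front.

def weighted_sum(objectives, weights):
    """Scalarize a vector of objective values with a weight vector."""
    return sum(w * f for w, f in zip(weights, objectives))

# Toy bi-objective values (f1, f2) of four candidate solutions.
candidates = {"a": (1.0, 4.0), "b": (2.0, 2.5), "c": (3.0, 1.5), "d": (4.0, 1.0)}

# Each weight vector is one subproblem; different weights select different solutions.
for weights in [(0.9, 0.1), (0.5, 0.5), (0.1, 0.9)]:
    best = min(candidates, key=lambda k: weighted_sum(candidates[k], weights))
    print(weights, "->", best)  # (0.9, 0.1) -> a, (0.5, 0.5) -> b, (0.1, 0.9) -> d
```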

Spirit Distillation: A Model Compression Method with Multi-domain Knowledge Transfer

no code implementations • 29 Apr 2021 • Zhiyuan Wu, Yu Jiang, Minghao Zhao, Chupeng Cui, Zongmin Yang, Xinhui Xue, Hong Qi

To further improve the robustness of the student, we extend SD to Enhanced Spirit Distillation (ESD), which exploits more comprehensive knowledge by introducing a proximity domain, similar to the target domain, for feature extraction.

General Knowledge, Knowledge Distillation, +2

Spirit Distillation: Precise Real-time Semantic Segmentation of Road Scenes with Insufficient Data

no code implementations • 25 Mar 2021 • Zhiyuan Wu, Yu Jiang, Chupeng Cui, Zongmin Yang, Xinhui Xue, Hong Qi

Inspired by the ideas of Fine-tuning-based Transfer Learning (FTT) and feature-based knowledge distillation, we propose a new knowledge distillation method for cross-domain knowledge transfer and efficient training with insufficient data, named Spirit Distillation (SD), which allows the student network to mimic the teacher network in extracting general features, so that a compact and accurate student network can be trained for real-time semantic segmentation of road scenes.

Autonomous Driving, Few-Shot Learning, +3
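The snippet above describes feature-based knowledge distillation, where a compact student is trained to mimic the intermediate features of a teacher network. The PyTorch sketch below shows that generic idea under assumed toy network definitions; it is not the exact Spirit Distillation procedure, and the layer shapes and names are placeholders.

```python
# Generic feature-based distillation step: freeze the teacher, then train the
# student so its feature maps match the teacher's (MSE on features).
import torch
import torch.nn as nn

teacher_backbone = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU())   # pretrained in practice
student_backbone = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
                                 nn.Conv2d(8, 16, 3, padding=1))               # compact student

for p in teacher_backbone.parameters():   # teacher is frozen during distillation
    p.requires_grad = False

feature_loss = nn.MSELoss()
optimizer = torch.optim.SGD(student_backbone.parameters(), lr=0.01)

images = torch.randn(4, 3, 32, 32)        # stand-in batch; real data would come from the chosen domains

with torch.no_grad():
    teacher_feats = teacher_backbone(images)

student_feats = student_backbone(images)
loss = feature_loss(student_feats, teacher_feats)  # student mimics teacher features
loss.backward()
optimizer.step()
```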

Activation Map Adaptation for Effective Knowledge Distillation

no code implementations • 26 Oct 2020 • Zhiyuan Wu, Hong Qi, Yu Jiang, Minghao Zhao, Chupeng Cui, Zongmin Yang, Xinhui Xue

Model compression has become a recent trend due to the requirement of deploying neural networks on embedded and mobile devices.

Knowledge Distillation, Model Compression, +1
