Search Results for author: Masahiro Morikura

Found 8 papers, 0 papers with code

Decentralized and Model-Free Federated Learning: Consensus-Based Distillation in Function Space

no code implementations · 1 Apr 2021 · Akihito Taya, Takayuki Nishio, Masahiro Morikura, Koji Yamamoto

Because FL algorithms can hardly guarantee convergence of the parameters of machine learning (ML) models, this paper focuses instead on the convergence of ML models in function spaces.

Federated Learning · Knowledge Distillation
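The consensus-in-function-space idea can be illustrated with a toy sketch (a simplified reading of the abstract, not the paper's algorithm; all sizes and the mixing matrix are hypothetical): each node repeatedly averages its model's outputs on a shared input set with its neighbours', so the nodes agree on a function rather than on parameters.

```python
import numpy as np

def consensus_step(outputs, weights):
    """One consensus round: each node mixes its function outputs
    (model predictions on a shared input set) with its neighbours'."""
    return weights @ outputs

# Hypothetical example: 3 nodes, predictions on 4 shared inputs.
rng = np.random.default_rng(0)
outputs = rng.normal(size=(3, 4))   # row i: node i's f_i(x) on the 4 inputs
W = np.array([[0.50, 0.25, 0.25],
              [0.25, 0.50, 0.25],
              [0.25, 0.25, 0.50]])  # doubly stochastic mixing matrix

for _ in range(50):
    outputs = consensus_step(outputs, W)
# All rows are now numerically equal to the average function values.
```

Because `W` is doubly stochastic with spectral gap below 1, the disagreement shrinks geometrically; after 50 rounds the three "functions" coincide on the shared inputs.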

Zero-Shot Adaptation for mmWave Beam-Tracking on Overhead Messenger Wires through Robust Adversarial Reinforcement Learning

no code implementations · 16 Feb 2021 · Masao Shinzaki, Yusuke Koda, Koji Yamamoto, Takayuki Nishio, Masahiro Morikura, Yushi Shirato, Daisei Uchida, Naoki Kita

Second, we demonstrate the feasibility of zero-shot adaptation as a solution, where a learning agent adapts to environmental parameters unseen during training.

MAB-based Client Selection for Federated Learning with Uncertain Resources in Mobile Networks

no code implementations · 29 Sep 2020 · Naoya Yoshida, Takayuki Nishio, Masahiro Morikura, Koji Yamamoto

This paper proposes a multi-armed bandit (MAB)-based client selection method to solve the exploration and exploitation trade-off and reduce the time consumption for FL in mobile networks.

Networking and Internet Architecture
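As a rough illustration of the multi-armed bandit view (not the paper's exact method), the sketch below uses a UCB score to pick the `k` seemingly fastest clients each round while still exploring clients whose resources are uncertain; all client counts, speeds, and noise levels are hypothetical.

```python
import math
import random

def ucb_select(counts, rewards, t, k):
    """Pick k clients by UCB score: estimated quality plus an
    exploration bonus that shrinks as a client is observed more."""
    def score(i):
        if counts[i] == 0:
            return float("inf")  # try every client at least once
        return rewards[i] / counts[i] + math.sqrt(2 * math.log(t) / counts[i])
    return sorted(range(len(counts)), key=score, reverse=True)[:k]

random.seed(0)
n_clients, k = 10, 3
true_speed = [random.random() for _ in range(n_clients)]  # unknown resource quality
counts = [0] * n_clients
rewards = [0.0] * n_clients

for t in range(1, 501):
    for i in ucb_select(counts, rewards, t, k):
        counts[i] += 1
        rewards[i] += true_speed[i] + random.gauss(0, 0.1)  # noisy observation
```

Over the rounds, selection concentrates on the genuinely fast clients (exploitation) without abandoning under-observed ones (exploration), which is the trade-off the abstract describes.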

Distillation-Based Semi-Supervised Federated Learning for Communication-Efficient Collaborative Training with Non-IID Private Data

no code implementations · 14 Aug 2020 · Sohei Itahara, Takayuki Nishio, Yusuke Koda, Masahiro Morikura, Koji Yamamoto

To this end, based on the idea of leveraging an unlabeled open dataset, we propose a distillation-based semi-supervised FL (DS-FL) algorithm that exchanges the outputs of local models among mobile devices, instead of model parameter exchange employed by the typical frameworks.

Data Augmentation · Federated Learning
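The output-exchange mechanism described above can be sketched in a few lines (a minimal illustration, with hypothetical device counts and shapes): each device computes logits on a shared unlabeled open dataset, and only those outputs are aggregated into global soft labels for distillation.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

# Hypothetical setup: 3 devices each compute logits on a shared
# unlabeled open dataset of 5 samples with 4 classes.
rng = np.random.default_rng(1)
local_logits = rng.normal(size=(3, 5, 4))

# Only these outputs are exchanged and averaged into global soft labels;
# each device then distills from them instead of receiving parameters.
global_soft_labels = softmax(local_logits).mean(axis=0)
```

The communication payload here is one probability vector per open-dataset sample, independent of model size, which is the source of the communication efficiency the abstract claims.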

Lottery Hypothesis based Unsupervised Pre-training for Model Compression in Federated Learning

no code implementations · 21 Apr 2020 · Sohei Itahara, Takayuki Nishio, Masahiro Morikura, Koji Yamamoto

The key idea of the proposed method is to obtain a "good" subnetwork from the original NN using the unlabeled data based on the lottery hypothesis.

Denoising · Federated Learning · +3
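Subnetwork extraction in the lottery-hypothesis setting is commonly approximated by magnitude pruning; the sketch below shows that proxy only (the paper's unsupervised pre-training step is not reproduced, and the keep ratio is hypothetical).

```python
import numpy as np

def lottery_mask(weights, keep_ratio):
    """Magnitude-based mask: keep the largest |w|, a common proxy for
    identifying a 'good' subnetwork under the lottery ticket hypothesis."""
    flat = np.abs(weights).ravel()
    k = max(1, int(len(flat) * keep_ratio))
    threshold = np.partition(flat, -k)[-k]   # k-th largest magnitude
    return (np.abs(weights) >= threshold).astype(weights.dtype)

rng = np.random.default_rng(2)
w = rng.normal(size=(8, 8))                  # one dense layer's weights
mask = lottery_mask(w, keep_ratio=0.2)
pruned = w * mask                            # compressed subnetwork for FL rounds
```

In an FL round, transmitting only the masked nonzero weights (plus the mask) is what yields the model-compression benefit the title refers to.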

Differentially Private AirComp Federated Learning with Power Adaptation Harnessing Receiver Noise

no code implementations · 14 Apr 2020 · Yusuke Koda, Koji Yamamoto, Takayuki Nishio, Masahiro Morikura

To this end, a differentially private AirComp-based FL is designed in this study, where the key idea is to harness the receiver noise that is inherently injected into the aggregated global models as the privacy-preserving perturbation, thereby preventing the inference of clients' private data.

Networking and Internet Architecture · Signal Processing
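A highly simplified sketch of the over-the-air idea (signal model, power values, and noise level are all hypothetical; the paper's power adaptation and privacy accounting are not reproduced): the updates superpose on the channel, and the receiver's additive noise doubles as the perturbation that a DP mechanism would otherwise add explicitly.

```python
import numpy as np

def aircomp_aggregate(updates, powers, noise_std, rng):
    """Over-the-air aggregation: transmitted updates superpose into a sum,
    and the receiver noise perturbs that sum before averaging."""
    signal = sum(p * u for p, u in zip(powers, updates))
    noise = rng.normal(0.0, noise_std, size=updates[0].shape)
    return (signal + noise) / sum(powers)

rng = np.random.default_rng(3)
updates = [rng.normal(size=16) for _ in range(5)]   # local model updates
powers = [1.0] * 5                                  # hypothetical transmit powers
noisy_avg = aircomp_aggregate(updates, powers, noise_std=0.5, rng=rng)
clean_avg = np.mean(updates, axis=0)
```

Scaling the transmit powers down (the paper's power adaptation) makes the fixed receiver noise relatively larger after averaging, which is how the same physical noise can be tuned to a privacy target.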

Hybrid-FL for Wireless Networks: Cooperative Learning Mechanism Using Non-IID Data

no code implementations · 17 May 2019 · Naoya Yoshida, Takayuki Nishio, Masahiro Morikura, Koji Yamamoto, Ryo Yonetani

Therefore, to mitigate the degradation induced by non-IID data, we assume that a limited number (e.g., less than 1%) of clients allow their data to be uploaded to a server, and we propose a hybrid learning mechanism referred to as Hybrid-FL, wherein the server updates the model using the data gathered from the clients and aggregates it with the models trained by the clients.

Federated Learning
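The hybrid aggregation step can be sketched as a sample-count-weighted average over the client-trained models plus the server's own model trained on the small uploaded dataset (a minimal illustration; the sample counts, model sizes, and weighting rule are hypothetical).

```python
import numpy as np

def hybrid_fl_round(client_models, server_model, n_client, n_server):
    """Aggregate client-trained models with the model the server trained
    on the small uploaded dataset, weighting by sample counts."""
    models = client_models + [server_model]
    counts = n_client + [n_server]
    total = sum(counts)
    return sum(n / total * m for n, m in zip(counts, models))

rng = np.random.default_rng(5)
client_models = [rng.normal(size=10) for _ in range(4)]  # flattened parameters
server_model = rng.normal(size=10)                       # trained on the ~1% uploaded data
global_model = hybrid_fl_round(client_models, server_model,
                               [100, 80, 120, 90], 4)
```

Even with a tiny weight, the server-side model was trained on data drawn from many clients, so it nudges the aggregate away from the skew each non-IID client introduces.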

Deep Reinforcement Learning-Based Channel Allocation for Wireless LANs with Graph Convolutional Networks

no code implementations · 17 May 2019 · Kota Nakashima, Shotaro Kamiya, Kazuki Ohtsu, Koji Yamamoto, Takayuki Nishio, Masahiro Morikura

In densely deployed WLANs, the number of available topologies of APs is extremely large, and thus we extract the features of the topological structures using GCNs.

Reinforcement Learning (RL)
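Feature extraction from an AP topology can be sketched as one symmetric-normalized GCN propagation step in the style of Kipf and Welling (a generic illustration, not the paper's architecture; the 4-AP interference graph and feature dimensions are hypothetical).

```python
import numpy as np

def gcn_layer(adj, features, weights):
    """One GCN step: the normalized adjacency mixes each AP's features
    with its interfering neighbours', followed by a ReLU."""
    a_hat = adj + np.eye(adj.shape[0])                     # add self-loops
    d_inv_sqrt = np.diag(1.0 / np.sqrt(a_hat.sum(axis=1)))
    return np.maximum(d_inv_sqrt @ a_hat @ d_inv_sqrt @ features @ weights, 0.0)

# Hypothetical 4-AP interference graph (edge = carrier-sense relation).
adj = np.array([[0, 1, 1, 0],
                [1, 0, 0, 1],
                [1, 0, 0, 0],
                [0, 1, 0, 0]], dtype=float)
rng = np.random.default_rng(6)
x = rng.normal(size=(4, 3))     # per-AP features (e.g., channel, traffic)
w = rng.normal(size=(3, 2))     # learnable layer weights
h = gcn_layer(adj, x, w)        # topology-aware embedding per AP
```

Stacking such layers lets an RL agent score channel allocations from the graph structure itself, instead of enumerating the enormous set of possible AP topologies.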
