Search Results for author: Koji Yamamoto

Found 14 papers, 1 paper with code

Communication-oriented Model Fine-tuning for Packet-loss Resilient Distributed Inference under Highly Lossy IoT Networks

no code implementations17 Dec 2021 Sohei Itahara, Takayuki Nishio, Yusuke Koda, Koji Yamamoto

However, there is generally a communication-system-level trade-off between communication latency and reliability; thus, to obtain accurate DI results, a reliable but high-latency communication system must be adopted, which results in non-negligible end-to-end latency for DI.
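
As a rough illustration of how such loss tolerance might be trained in, the sketch below fine-tunes a split model while randomly zeroing groups of intermediate-feature values to emulate lost packets; the packet size, drop probability, and toy layers are assumptions for illustration, not the paper's configuration.

```python
# Hedged sketch (not the paper's exact method): fine-tune a split model while
# randomly zeroing contiguous groups of intermediate features, emulating lost
# packets, so inference can tolerate drops without retransmission.
import torch
import torch.nn as nn

class PacketDrop(nn.Module):
    """Zero out contiguous feature 'packets' with probability p (train time only)."""
    def __init__(self, packet_size=64, p=0.3):
        super().__init__()
        self.packet_size, self.p = packet_size, p

    def forward(self, x):                      # x: (batch, features)
        if not self.training or self.p == 0.0:
            return x
        b, f = x.shape
        n_pkt = (f + self.packet_size - 1) // self.packet_size
        keep = (torch.rand(b, n_pkt, device=x.device) > self.p).float()
        mask = keep.repeat_interleave(self.packet_size, dim=1)[:, :f]
        return x * mask

head = nn.Sequential(nn.Linear(784, 256), nn.ReLU())      # runs on the device
tail = nn.Sequential(nn.Linear(256, 10))                   # runs on the server
model = nn.Sequential(head, PacketDrop(packet_size=32, p=0.3), tail)

opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x, y = torch.randn(16, 784), torch.randint(0, 10, (16,))   # toy batch
loss = nn.functional.cross_entropy(model(x), y)
opt.zero_grad(); loss.backward(); opt.step()
```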

Frame-Capture-Based CSI Recomposition Pertaining to Firmware-Agnostic WiFi Sensing

no code implementations29 Oct 2021 Ryosuke Hanahara, Sohei Itahara, Kota Yamashita, Yusuke Koda, Akihito Taya, Takayuki Nishio, Koji Yamamoto

This indicates that WiFi sensing that leverages the BFM is more practical to implement on pre-installed APs.

Beamforming Feedback-based Model-Driven Angle of Departure Estimation Toward Legacy Support in WiFi Sensing: An Experimental Study

no code implementations27 Oct 2021 Sohei Itahara, Sota Kondo, Kota Yamashita, Takayuki Nishio, Koji Yamamoto, Yusuke Koda

Moreover, the evaluations performed in this study revealed that BFF-based MUSIC achieves AoD estimation error comparable to that of CSI-based MUSIC, even though BFF is a highly compressed version of CSI in IEEE 802.11ac/ax.
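
For context, the sketch below implements the textbook MUSIC estimator on a toy uniform linear array; the array geometry, angle grid, and covariance construction are illustrative assumptions and not the paper's BFF-based pipeline.

```python
# Hedged sketch of the classical MUSIC angle estimator (not the paper's exact
# BFF-based variant): eigendecompose a spatial covariance matrix, scan steering
# vectors of a uniform linear array, and pick the pseudospectrum peak.
import numpy as np

def music_spectrum(R, n_sources, n_ant, d=0.5, angles=np.linspace(-90, 90, 361)):
    # Noise subspace = eigenvectors of the smallest (n_ant - n_sources) eigenvalues.
    _, eigvec = np.linalg.eigh(R)
    En = eigvec[:, : n_ant - n_sources]
    spectrum = []
    for theta in np.deg2rad(angles):
        a = np.exp(-2j * np.pi * d * np.arange(n_ant) * np.sin(theta))  # steering vector
        spectrum.append(1.0 / np.abs(a.conj() @ En @ En.conj().T @ a))
    return angles, np.array(spectrum)

# Toy example: one path at ~20 degrees, 4 antennas.
n_ant, theta0 = 4, np.deg2rad(20)
a0 = np.exp(-2j * np.pi * 0.5 * np.arange(n_ant) * np.sin(theta0))
R = np.outer(a0, a0.conj()) + 0.01 * np.eye(n_ant)   # signal + small noise floor
ang, spec = music_spectrum(R, n_sources=1, n_ant=n_ant)
print("estimated AoD:", ang[np.argmax(spec)], "deg")
```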

Packet-Loss-Tolerant Split Inference for Delay-Sensitive Deep Learning in Lossy Wireless Networks

no code implementations28 Apr 2021 Sohei Itahara, Takayuki Nishio, Koji Yamamoto

This study addresses the problem of incremental retransmission latency caused by packet loss in lossy IoT networks.
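
A minimal sketch of the receiver-side idea, assuming a simple packetization of the head model's features: lost packets are zero-filled and inference proceeds without retransmission. The function names and packet sizes below are hypothetical.

```python
# Hedged sketch (assumed interface, not the paper's code): serialize the head
# model's features into packets, drop some in transit, and let the receiver
# zero-fill the gaps instead of requesting retransmissions.
import numpy as np

def packetize(features, packet_size=32):
    pad = (-len(features)) % packet_size
    padded = np.concatenate([features, np.zeros(pad, dtype=features.dtype)])
    return padded.reshape(-1, packet_size)

def lossy_channel(packets, loss_prob=0.3, rng=np.random.default_rng(0)):
    received = packets.copy()
    lost = rng.random(len(packets)) < loss_prob
    received[lost] = 0.0                                  # receiver zero-fills lost packets
    return received, lost.sum()

features = np.random.randn(200).astype(np.float32)        # head-model output
packets = packetize(features)
received, n_lost = lossy_channel(packets)
tail_input = received.reshape(-1)[: len(features)]        # feed this to the tail model
print(f"{n_lost}/{len(packets)} packets lost; inference proceeds without retransmission")
```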

Decentralized and Model-Free Federated Learning: Consensus-Based Distillation in Function Space

no code implementations1 Apr 2021 Akihito Taya, Takayuki Nishio, Masahiro Morikura, Koji Yamamoto

Because it is difficult for FL algorithms to guarantee convergence of the parameters of machine learning (ML) models, this paper focuses instead on the convergence of ML models in function space.

Federated Learning · Knowledge Distillation
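
A minimal sketch of consensus-style distillation, assuming each node broadcasts soft predictions on a common unlabeled dataset and distills toward its neighbors' average output; the ring topology, toy models, and KL objective are illustrative choices, not the authors' algorithm.

```python
# Hedged sketch of consensus-based distillation (simplified reading of the
# abstract): nodes share predictions on a common unlabeled dataset and each
# node distills toward its neighbors' average output, with no central server.
import torch
import torch.nn as nn

torch.manual_seed(0)
shared_x = torch.randn(128, 20)                               # common unlabeled data
nodes = [nn.Linear(20, 5) for _ in range(4)]                  # one toy model per node
neighbors = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}      # ring topology
opts = [torch.optim.SGD(m.parameters(), lr=0.1) for m in nodes]

for step in range(10):
    with torch.no_grad():                                     # everyone broadcasts soft outputs
        outputs = [torch.softmax(m(shared_x), dim=1) for m in nodes]
    for i, model in enumerate(nodes):                         # distill toward neighbor consensus
        target = torch.stack([outputs[j] for j in neighbors[i]]).mean(dim=0)
        pred = torch.log_softmax(model(shared_x), dim=1)
        loss = nn.functional.kl_div(pred, target, reduction="batchmean")
        opts[i].zero_grad(); loss.backward(); opts[i].step()
```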

Zero-Shot Adaptation for mmWave Beam-Tracking on Overhead Messenger Wires through Robust Adversarial Reinforcement Learning

no code implementations16 Feb 2021 Masao Shinzaki, Yusuke Koda, Koji Yamamoto, Takayuki Nishio, Masahiro Morikura, Yushi Shirato, Daisei Uchida, Naoki Kita

Second, we demonstrate the feasibility of zero-shot adaptation as a solution, where a learning agent adapts to environmental parameters unseen during training.
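
A structural toy sketch of robust adversarial training, assuming a discretized beam-correction action set and a bounded adversarial perturbation of one environment parameter; the zero-sum value updates below are a stand-in for the paper's RL formulation rather than its actual agents.

```python
# Hedged toy sketch: a protagonist picks a beam correction while an adversary
# perturbs an environment parameter within a bounded set (zero-sum rewards),
# so the learned behavior is robust to parameter values unseen in training.
import numpy as np

rng = np.random.default_rng(0)
candidate_offsets = np.linspace(-1.0, 1.0, 21)       # protagonist's beam corrections
adv_actions = np.array([-0.5, 0.0, 0.5])             # adversary's bounded perturbations
q_pro = np.zeros(len(candidate_offsets))
q_adv = np.zeros(len(adv_actions))

for episode in range(5000):
    shift = rng.uniform(-0.8, 0.8)                    # nominal environment parameter
    a = np.argmax(q_pro) if rng.random() > 0.1 else rng.integers(len(q_pro))
    b = np.argmax(q_adv) if rng.random() > 0.1 else rng.integers(len(q_adv))
    err = abs(shift + adv_actions[b] - candidate_offsets[a])   # beam-tracking error
    q_pro[a] += 0.05 * (-err - q_pro[a])              # protagonist minimizes the error
    q_adv[b] += 0.05 * ( err - q_adv[b])              # adversary maximizes it (zero-sum)
```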

MAB-based Client Selection for Federated Learning with Uncertain Resources in Mobile Networks

no code implementations29 Sep 2020 Naoya Yoshida, Takayuki Nishio, Masahiro Morikura, Koji Yamamoto

This paper proposes a multi-armed bandit (MAB)-based client selection method to solve the exploration and exploitation trade-off and reduce the time consumption for FL in mobile networks.

Networking and Internet Architecture
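
A minimal sketch of bandit-based client selection, assuming a standard UCB1 rule with reward defined as the negative observed round time; the per-client time distribution below is synthetic.

```python
# Hedged sketch of MAB client selection (generic UCB1, not the paper's exact
# estimator): select the client whose upper confidence bound on "fast round
# completion" is highest, balancing exploration and exploitation.
import numpy as np

rng = np.random.default_rng(0)
true_mean_time = np.array([2.0, 5.0, 3.0, 8.0, 2.5])    # unknown per-client round time (s)
n_clients = len(true_mean_time)
counts = np.zeros(n_clients)
mean_reward = np.zeros(n_clients)                        # reward = -observed time

for t in range(1, 301):
    if t <= n_clients:                                   # play each arm once first
        k = t - 1
    else:
        ucb = mean_reward + np.sqrt(2 * np.log(t) / counts)
        k = int(np.argmax(ucb))
    observed = rng.normal(true_mean_time[k], 0.5)        # uncertain wireless resources
    counts[k] += 1
    mean_reward[k] += (-observed - mean_reward[k]) / counts[k]

print("selection counts per client:", counts.astype(int))
```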

Distillation-Based Semi-Supervised Federated Learning for Communication-Efficient Collaborative Training with Non-IID Private Data

no code implementations14 Aug 2020 Sohei Itahara, Takayuki Nishio, Yusuke Koda, Masahiro Morikura, Koji Yamamoto

To this end, based on the idea of leveraging an unlabeled open dataset, we propose a distillation-based semi-supervised FL (DS-FL) algorithm that exchanges the outputs of local models among mobile devices, instead of the model-parameter exchange employed by typical frameworks.

Data Augmentation · Federated Learning
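
A minimal sketch of this output-exchange scheme, assuming clients upload softmax outputs on the open dataset and the server aggregates them by simple averaging (DS-FL's actual aggregation differs); the models and data are toys.

```python
# Hedged sketch of output-exchange FL over an unlabeled open dataset (simplified
# reading of DS-FL, not the released implementation): clients upload predictions
# on shared unlabeled data, the server averages them into soft labels, and each
# client distills from those labels instead of exchanging model parameters.
import torch
import torch.nn as nn

torch.manual_seed(0)
open_x = torch.randn(256, 32)                            # shared unlabeled open dataset
clients = [nn.Linear(32, 10) for _ in range(5)]
opts = [torch.optim.SGD(c.parameters(), lr=0.1) for c in clients]

for rnd in range(5):
    with torch.no_grad():                                # upload only outputs (cheap)
        logits = torch.stack([c(open_x) for c in clients])
    soft_labels = torch.softmax(logits, dim=-1).mean(dim=0)   # server-side aggregation
    for c, opt in zip(clients, opts):                    # local distillation step
        loss = nn.functional.kl_div(
            torch.log_softmax(c(open_x), dim=-1), soft_labels, reduction="batchmean")
        opt.zero_grad(); loss.backward(); opt.step()
```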

Lottery Hypothesis based Unsupervised Pre-training for Model Compression in Federated Learning

no code implementations21 Apr 2020 Sohei Itahara, Takayuki Nishio, Masahiro Morikura, Koji Yamamoto

The key idea of the proposed method is to obtain a "good" subnetwork from the original NN using the unlabeled data, based on the lottery hypothesis.

Denoising · Federated Learning · +3
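
A minimal sketch of extracting a sparse subnetwork by magnitude pruning, used here as a generic stand-in for the lottery-hypothesis-based pre-training; the keep ratio and layer are illustrative.

```python
# Hedged sketch (generic magnitude pruning, not the authors' procedure): keep
# only the largest weights so that subsequent federated updates are smaller
# to transmit.
import torch
import torch.nn as nn

def magnitude_mask(layer: nn.Linear, keep_ratio: float) -> torch.Tensor:
    """Binary mask keeping the top `keep_ratio` fraction of weights by |w|."""
    w = layer.weight.detach().abs().flatten()
    k = max(1, int(keep_ratio * w.numel()))
    threshold = torch.topk(w, k).values.min()
    return (layer.weight.detach().abs() >= threshold).float()

layer = nn.Linear(784, 256)
mask = magnitude_mask(layer, keep_ratio=0.2)          # ~80% of weights pruned
with torch.no_grad():
    layer.weight.mul_(mask)                           # apply the subnetwork mask

# During FL, only the surviving weights would need to be exchanged.
print("kept weights:", int(mask.sum().item()), "of", mask.numel())
```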

Differentially Private AirComp Federated Learning with Power Adaptation Harnessing Receiver Noise

no code implementations14 Apr 2020 Yusuke Koda, Koji Yamamoto, Takayuki Nishio, Masahiro Morikura

To this end, a differentially private AirComp-based FL scheme is designed in this study, where the key idea is to harness the receiver noise that is inherently injected into the aggregated global model, thereby preventing inference of clients' private data.

Networking and Internet Architecture · Signal Processing
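
A toy numeric sketch of the AirComp intuition, assuming scalar channels, clipped updates, and Gaussian receiver noise; it shows how reducing transmit power trades aggregation accuracy for a larger effective noise level, which is what the privacy mechanism relies on.

```python
# Hedged toy sketch (scalar channel model, not the paper's system design):
# clients transmit scaled updates simultaneously, the channel sums them, and
# receiver noise plays the role of Gaussian-mechanism noise; lowering transmit
# power strengthens the effective noise at the cost of aggregation accuracy.
import numpy as np

rng = np.random.default_rng(0)
updates = rng.normal(size=(10, 1000))                      # 10 clients, model updates
clip = 1.0
updates = updates / np.maximum(1.0, np.linalg.norm(updates, axis=1, keepdims=True) / clip)

def aircomp_aggregate(updates, tx_power, noise_std=1.0):
    superposed = np.sqrt(tx_power) * updates.sum(axis=0)            # over-the-air summation
    received = superposed + rng.normal(0.0, noise_std, updates.shape[1])
    return received / (np.sqrt(tx_power) * len(updates))            # estimate of the mean update

for p in (10.0, 1.0, 0.1):                                 # power adaptation
    est = aircomp_aggregate(updates, tx_power=p)
    err = np.linalg.norm(est - updates.mean(axis=0))
    print(f"tx_power={p:>4}: aggregation error {err:.3f}, "
          f"effective noise std {1 / (np.sqrt(p) * 10):.3f}")
```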

Hybrid-FL for Wireless Networks: Cooperative Learning Mechanism Using Non-IID Data

no code implementations17 May 2019 Naoya Yoshida, Takayuki Nishio, Masahiro Morikura, Koji Yamamoto, Ryo Yonetani

Therefore, to mitigate the degradation induced by non-IID data, we assume that a limited number (e.g., less than 1%) of clients allow their data to be uploaded to a server, and we propose a hybrid learning mechanism referred to as Hybrid-FL, wherein the server updates the model using the data gathered from these clients and aggregates that model with the models trained by the clients.

Federated Learning
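
A minimal sketch of the hybrid mechanism, assuming one data-sharing client, plain parameter averaging, and freshly initialized toy models; it is not the authors' protocol.

```python
# Hedged sketch of the Hybrid-FL idea (simplified): a small fraction of clients
# upload raw data, the server trains on that pooled data, and its model is
# averaged together with the locally trained client models.
import torch
import torch.nn as nn

torch.manual_seed(0)
def local_train(model, x, y, steps=20):
    opt = torch.optim.SGD(model.parameters(), lr=0.1)
    for _ in range(steps):
        loss = nn.functional.cross_entropy(model(x), y)
        opt.zero_grad(); loss.backward(); opt.step()
    return model

global_model = nn.Linear(16, 4)
client_data = [(torch.randn(64, 16), torch.randint(0, 4, (64,))) for _ in range(10)]
sharing_clients = client_data[:1]                          # e.g., ~1% of clients share data

# 1) Clients train locally on their own (non-IID) data.
client_models = [local_train(nn.Linear(16, 4), x, y) for x, y in client_data]
# 2) The server trains on the small pooled dataset uploaded by consenting clients.
server_x = torch.cat([x for x, _ in sharing_clients])
server_y = torch.cat([y for _, y in sharing_clients])
server_model = local_train(nn.Linear(16, 4), server_x, server_y)
# 3) Aggregate the client models and the server model by parameter averaging.
all_models = client_models + [server_model]
with torch.no_grad():
    for name, param in global_model.named_parameters():
        param.copy_(torch.stack([dict(m.named_parameters())[name] for m in all_models]).mean(dim=0))
```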

Deep Reinforcement Learning-Based Channel Allocation for Wireless LANs with Graph Convolutional Networks

no code implementations17 May 2019 Kota Nakashima, Shotaro Kamiya, Kazuki Ohtsu, Koji Yamamoto, Takayuki Nishio, Masahiro Morikura

In densely deployed WLANs, the number of available AP topologies is extremely large, and thus we extract the features of the topological structures based on GCNs.

Reinforcement Learning (RL)
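
A minimal sketch of the GCN front end, assuming a single symmetrically normalized graph-convolution layer over the AP contention graph feeding a per-AP Q-head; the feature dimensions and graph below are synthetic.

```python
# Hedged sketch of a GCN front end for channel allocation (generic single GCN
# layer plus a Q-head, not the authors' network): extract features of the AP
# contention topology from its adjacency matrix, then score candidate channels.
import torch
import torch.nn as nn

class GCNLayer(nn.Module):
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.lin = nn.Linear(in_dim, out_dim)

    def forward(self, x, adj):                              # x: (n_aps, in_dim)
        a_hat = adj + torch.eye(adj.size(0))                # add self-loops
        d_inv_sqrt = torch.diag(a_hat.sum(dim=1).pow(-0.5)) # symmetric normalization
        return torch.relu(self.lin(d_inv_sqrt @ a_hat @ d_inv_sqrt @ x))

n_aps, n_channels = 6, 4
adj = (torch.rand(n_aps, n_aps) > 0.6).float()
adj = ((adj + adj.t()) > 0).float()                         # undirected contention graph
adj.fill_diagonal_(0)
features = torch.rand(n_aps, 8)                             # e.g., observed traffic per AP

gcn = GCNLayer(8, 16)
q_head = nn.Linear(16, n_channels)                          # Q-value per channel, per AP
q_values = q_head(gcn(features, adj))
allocation = q_values.argmax(dim=1)                         # greedy channel per AP
print("channel allocation:", allocation.tolist())
```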
