Search Results for author: Hyesung Kim

Found 7 papers, 1 paper with code

Mix2FLD: Downlink Federated Learning After Uplink Federated Distillation With Two-Way Mixup

no code implementations • 17 Jun 2020 • Seungeun Oh, Jihong Park, Eunjeong Jeong, Hyesung Kim, Mehdi Bennis, Seong-Lyun Kim

This letter proposes a novel communication-efficient and privacy-preserving distributed machine learning framework, coined Mix2FLD.

Federated Learning • Privacy Preserving
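The "two-way Mixup" in the title builds on the standard Mixup augmentation, which forms convex combinations of sample pairs and their labels. As a rough sketch of that one-way Mixup primitive only (not the paper's full Mix2FLD pipeline; the function name and defaults here are illustrative):

```python
import numpy as np

def mixup(x_i, x_j, y_i, y_j, alpha=1.0, rng=None):
    """Mix two samples and their one-hot labels with a Beta-sampled weight."""
    rng = rng or np.random.default_rng(0)
    lam = rng.beta(alpha, alpha)            # mixing ratio lambda in [0, 1]
    x = lam * x_i + (1.0 - lam) * x_j       # convex combination of the inputs
    y = lam * y_i + (1.0 - lam) * y_j       # same combination of the labels
    return x, y, lam
```

Mix2FLD's "two-way" variant additionally inverts the mixing at the server side; that step is specific to the paper and is not shown here.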

Proxy Experience Replay: Federated Distillation for Distributed Reinforcement Learning

no code implementations • 13 May 2020 • Han Cha, Jihong Park, Hyesung Kim, Mehdi Bennis, Seong-Lyun Kim

Traditional distributed deep reinforcement learning (RL) commonly relies on exchanging the experience replay memory (RM) of each agent.

Clustering • Data Augmentation • +3

Multi-hop Federated Private Data Augmentation with Sample Compression

no code implementations • 15 Jul 2019 • Eunjeong Jeong, Seungeun Oh, Jihong Park, Hyesung Kim, Mehdi Bennis, Seong-Lyun Kim

On-device machine learning (ML) makes a tremendous amount of user data accessible for training while keeping each user's local data private, instead of storing it in a central entity.

Data Augmentation

Federated Reinforcement Distillation with Proxy Experience Memory

no code implementations • 15 Jul 2019 • Han Cha, Jihong Park, Hyesung Kim, Seong-Lyun Kim, Mehdi Bennis

In distributed reinforcement learning, it is common to exchange the experience memory of each agent and thereby collectively train their local models.

Privacy Preserving • reinforcement-learning • +1

Blockchained On-Device Federated Learning

2 code implementations • 12 Aug 2018 • Hyesung Kim, Jihong Park, Mehdi Bennis, Seong-Lyun Kim

By leveraging blockchain, this letter proposes a blockchained federated learning (BlockFL) architecture where local learning model updates are exchanged and verified.

Information Theory • Networking and Internet Architecture
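The exchanged and verified local updates in BlockFL are ultimately aggregated much like standard federated averaging. A minimal, generic sketch of that aggregation step, with the blockchain exchange/verification layer omitted and the function name illustrative:

```python
import numpy as np

def federated_average(updates, num_samples):
    """Aggregate local model updates, weighted by each device's sample count."""
    total = sum(num_samples)
    weights = [n / total for n in num_samples]
    # weighted element-wise average of the flattened parameter vectors
    return sum(w * u for w, u in zip(weights, np.asarray(updates, dtype=float)))
```

In BlockFL itself, the updates would be recorded in blocks and verified by miners before this aggregation; that logic is specific to the paper and not shown here.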
