Search Results for author: Jihong Park

Found 58 papers, 6 papers with code

Interference-Aware Emergent Random Access Protocol for Downlink LEO Satellite Networks

no code implementations • 4 Feb 2024 • Chang-Yong Lim, Jihong Park, Jinho Choi, Ju-Hyung Lee, Daesub Oh, Heewook Kim

In this article, we propose a multi-agent deep reinforcement learning (MADRL) framework to train a multiple access protocol for downlink low earth orbit (LEO) satellite networks.


Knowledge Distillation from Language-Oriented to Emergent Communication for Multi-Agent Remote Control

no code implementations • 23 Jan 2024 • Yongjun Kim, Sejin Seo, Jihong Park, Mehdi Bennis, Seong-Lyun Kim, Junil Choi

In this work, we compare emergent communication (EC) built upon multi-agent deep reinforcement learning (MADRL) and language-oriented semantic communication (LSC) empowered by a pre-trained large language model (LLM) using human language.

Knowledge Distillation • Language Modelling +1

Graph Koopman Autoencoder for Predictive Covert Communication Against UAV Surveillance

no code implementations • 23 Jan 2024 • Sivaram Krishnan, Jihong Park, Gregory Sherman, Benjamin Campbell, Jinho Choi

Low Probability of Detection (LPD) communication aims to obscure the very presence of radio frequency (RF) signals, going beyond just hiding the content of the communication.

Generative AI Meets Semantic Communication: Evolution and Revolution of Communication Tasks

no code implementations • 10 Jan 2024 • Eleonora Grassucci, Jihong Park, Sergio Barbarossa, Seong-Lyun Kim, Jinho Choi, Danilo Comminiello

Disclosing generative models' capabilities in semantic communication paves the way for a paradigm shift with respect to conventional communication systems, which has great potential to reduce the amount of data traffic and offers revolutionary versatility for novel tasks and applications that were not even conceivable a few years ago.


Towards Semantic Communication Protocols for 6G: From Protocol Learning to Language-Oriented Approaches

no code implementations • 14 Oct 2023 • Jihong Park, Seung-Woo Ko, Jinho Choi, Seong-Lyun Kim, Mehdi Bennis

This article discusses semantic MAC protocols at multiple levels, including Level 2 MAC, i.e., neural network-oriented symbolic protocols developed by converting Level 1 MAC outputs into explicit symbols, and Level 3 MAC, i.e., language-oriented approaches.

Semantics Alignment via Split Learning for Resilient Multi-User Semantic Communication

no code implementations • 13 Oct 2023 • Jinhyuk Choi, Jihong Park, Seung-Woo Ko, Jinho Choi, Mehdi Bennis, Seong-Lyun Kim

In this method, referred to as SL with layer freezing (SLF), each encoder downloads a misaligned decoder, and locally fine-tunes a fraction of these encoder-decoder NN layers.
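The fine-tuning step described above can be sketched as freezing most of the downloaded layers and updating only a fraction of them; the toy layer names, weights, gradients, and learning rate below are illustrative assumptions, not the paper's actual architecture:

```python
# Hedged sketch of split learning with layer freezing (SLF): apply one
# gradient step while skipping the frozen layers. Layer names and values
# are hypothetical toy data.

def slf_finetune(layers, frozen_names, grads, lr=0.1):
    """Return updated weights; frozen layers keep their downloaded values."""
    updated = {}
    for name, w in layers.items():
        if name in frozen_names:
            updated[name] = w                 # frozen: not fine-tuned
        else:
            updated[name] = w - lr * grads[name]  # locally fine-tuned
    return updated

# Each encoder downloads a (misaligned) decoder, freezes most layers,
# and fine-tunes only a small fraction to re-align semantics.
layers = {"enc1": 1.0, "enc2": 2.0, "dec1": 3.0, "dec2": 4.0}
grads = {k: 1.0 for k in layers}
new = slf_finetune(layers, frozen_names={"enc1", "dec1", "dec2"}, grads=grads)
```

Only `enc2` moves here; the remaining layers are left exactly as downloaded, which is what keeps the per-user fine-tuning cost low.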

Language-Oriented Communication with Semantic Coding and Knowledge Distillation for Text-to-Image Generation

no code implementations • 20 Sep 2023 • Hyelin Nam, Jihong Park, Jinho Choi, Mehdi Bennis, Seong-Lyun Kim

By integrating recent advances in large language models (LLMs) and generative models into the emerging semantic communication (SC) paradigm, in this article we put forward a novel framework of language-oriented semantic communication (LSC).

In-Context Learning • Knowledge Distillation +1

Sequential Semantic Generative Communication for Progressive Text-to-Image Generation

no code implementations • 8 Sep 2023 • Hyelin Nam, Jihong Park, Jinho Choi, Seong-Lyun Kim

Our work is expected to pave a new road for applying state-of-the-art generative models to real communication systems.

Sentence • Text-to-Image Generation

Energy-Efficient Downlink Semantic Generative Communication with Text-to-Image Generators

no code implementations • 8 Jun 2023 • Hyein Lee, Jihong Park, Sooyoung Kim, Jinho Choi

In this paper, we introduce a novel semantic generative communication (SGC) framework, where generative users leverage text-to-image (T2I) generators to create images locally from downloaded text prompts, while non-generative users directly download images from a base station (BS).

Image Generation • Total Energy

Energy-Efficient UAV-Assisted IoT Data Collection via TSP-Based Solution Space Reduction

no code implementations • 2 Jun 2023 • Sivaram Krishnan, Mahyar Nemati, Seng W. Loke, Jihong Park, Jinho Choi

This paper presents a wireless data collection framework that employs an unmanned aerial vehicle (UAV) to efficiently gather data from distributed IoT sensors deployed in a large area.

Total Energy • Traveling Salesman Problem

A Subspace Projection Approach to Autoencoder-based Anomaly Detection

no code implementations • 15 Feb 2023 • Jinho Choi, Jihong Park, Abhinav Japesh, Adarsh

Autoencoder (AE) is a neural network (NN) architecture that is trained to reconstruct an input at its output.

Anomaly Detection
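The reconstruction principle above underlies AE-based anomaly detection: inputs far from the learned structure reconstruct poorly. A minimal sketch, using a fixed 1-D linear projection as a stand-in for a trained AE (the basis vector and sample values are illustrative, not from the paper):

```python
# Reconstruction-error anomaly scoring. The "autoencoder" here is just a
# projection onto a 1-D subspace: normal data lies near the subspace and
# reconstructs well; off-subspace inputs get a large error.

def reconstruct(x, basis):
    """Project x onto the subspace spanned by the unit vector `basis`."""
    coeff = sum(xi * bi for xi, bi in zip(x, basis))
    return [coeff * bi for bi in basis]

def anomaly_score(x, basis):
    """Euclidean reconstruction error of x."""
    r = reconstruct(x, basis)
    return sum((xi - ri) ** 2 for xi, ri in zip(x, r)) ** 0.5

basis = [1.0, 0.0]                            # normal data varies along axis 1
normal = anomaly_score([3.0, 0.1], basis)     # near the subspace: small error
anomalous = anomaly_score([0.5, 2.0], basis)  # off the subspace: large error
```

An input is then flagged as anomalous when its score exceeds a threshold calibrated on normal data.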

Enabling the Wireless Metaverse via Semantic Multiverse Communication

no code implementations • 13 Dec 2022 • Jihong Park, Jinho Choi, Seong-Lyun Kim, Mehdi Bennis

Metaverse over wireless networks is an emerging use case of the sixth generation (6G) wireless systems, posing unprecedented challenges in terms of its multi-modal data transmissions with stringent latency and reliability requirements.

Multi-agent Reinforcement Learning

Quantum Federated Learning with Entanglement Controlled Circuits and Superposition Coding

no code implementations • 4 Dec 2022 • Won Joon Yun, Jae Pyoung Kim, Hankyul Baek, Soyi Jung, Jihong Park, Mehdi Bennis, Joongheon Kim

While witnessing the noisy intermediate-scale quantum (NISQ) era and beyond, quantum federated learning (QFL) has recently become an emerging field of study.

Federated Learning • Image Classification

Differentially Private CutMix for Split Learning with Vision Transformer

no code implementations • 28 Oct 2022 • Seungeun Oh, Jihong Park, Sihun Baek, Hyelin Nam, Praneeth Vepakomma, Ramesh Raskar, Mehdi Bennis, Seong-Lyun Kim

Split learning (SL) detours this by communicating smashed data at a cut-layer, yet suffers from data privacy leakage and large communication costs caused by the high similarity between the ViT's smashed data and its input data.

Federated Learning • Privacy Preserving

Quantum Multi-Agent Meta Reinforcement Learning

no code implementations • 22 Aug 2022 • Won Joon Yun, Jihong Park, Joongheon Kim

Although quantum supremacy is yet to come, there has recently been an increasing interest in identifying the potential of quantum machine learning (QML) in the looming era of practical quantum computing.

Meta-Learning • Meta Reinforcement Learning +4

Slimmable Quantum Federated Learning

no code implementations • 20 Jul 2022 • Won Joon Yun, Jae Pyoung Kim, Soyi Jung, Jihong Park, Mehdi Bennis, Joongheon Kim

Quantum federated learning (QFL) has recently received increasing attention, where quantum neural networks (QNNs) are integrated into federated learning (FL).

Federated Learning

Towards Semantic Communication Protocols: A Probabilistic Logic Perspective

no code implementations • 8 Jul 2022 • Sejin Seo, Jihong Park, Seung-Woo Ko, Jinho Choi, Mehdi Bennis, Seong-Lyun Kim

Classical medium access control (MAC) protocols are interpretable, yet their task-agnostic control signaling messages (CMs) are ill-suited for emerging mission-critical applications.

Collision Avoidance

SlimFL: Federated Learning with Superposition Coding over Slimmable Neural Networks

no code implementations • 26 Mar 2022 • Won Joon Yun, Yunseok Kwak, Hankyul Baek, Soyi Jung, Mingyue Ji, Mehdi Bennis, Jihong Park, Joongheon Kim

However, applying FL in practice is challenging due to the local devices' heterogeneous energy, wireless channel conditions, and non-independently and identically distributed (non-IID) data distributions.

Distributed Computing • Federated Learning

Attention Based Communication and Control for Multi-UAV Path Planning

no code implementations • 20 Dec 2021 • Hamid Shiri, Hyowoon Seo, Jihong Park, Mehdi Bennis

Inspired by the multi-head attention (MHA) mechanism in natural language processing, this letter proposes an iterative single-head attention (ISHA) mechanism for multi-UAV path planning.

Decision Making

Communication and Energy Efficient Slimmable Federated Learning via Superposition Coding and Successive Decoding

no code implementations • 5 Dec 2021 • Hankyul Baek, Won Joon Yun, Soyi Jung, Jihong Park, Mingyue Ji, Joongheon Kim, Mehdi Bennis

To address the heterogeneous communication throughput problem, each full-width (1.0x) SNN model and its half-width (0.5x) model are superposition-coded before transmission, and successively decoded after reception as the 0.5x or 1.0x model depending on the channel quality.

Federated Learning
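The superposition-coding and successive-decoding idea above can be sketched for a single symbol pair; the power split, BPSK codebook, and boolean channel flag below are simplifying assumptions, not the paper's actual coding scheme:

```python
import math

def superpose(sym_half, sym_full, p=0.8):
    """Superposition-code two symbols: more power on the 0.5x (base)
    symbol, less on the 1.0x refinement symbol."""
    return math.sqrt(p) * sym_half + math.sqrt(1 - p) * sym_full

def successive_decode(rx, good_channel, p=0.8, codebook=(-1.0, 1.0)):
    """Decode the strongly coded 0.5x symbol first; if the channel is
    good, cancel it and decode the 1.0x refinement from the residual."""
    sym_half = min(codebook, key=lambda c: abs(rx - math.sqrt(p) * c))
    if not good_channel:
        return sym_half, None                     # poor channel: 0.5x only
    residual = rx - math.sqrt(p) * sym_half       # successive cancellation
    sym_full = min(codebook, key=lambda c: abs(residual - math.sqrt(1 - p) * c))
    return sym_half, sym_full                     # good channel: full 1.0x
```

A receiver with a good channel recovers both the 0.5x base and the 1.0x refinement; a poor channel still recovers the strongly coded 0.5x part, which is the graceful-degradation property the abstract describes.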

Joint Superposition Coding and Training for Federated Learning over Multi-Width Neural Networks

no code implementations • 5 Dec 2021 • Hankyul Baek, Won Joon Yun, Yunseok Kwak, Soyi Jung, Mingyue Ji, Mehdi Bennis, Jihong Park, Joongheon Kim

By applying SC, SlimFL exchanges a superposition of multiple width configurations, decoded into as many configurations as possible for a given communication throughput.

Federated Learning

Learning Emergent Random Access Protocol for LEO Satellite Networks

no code implementations • 3 Dec 2021 • Ju-Hyung Lee, Hyowoon Seo, Jihong Park, Mehdi Bennis, Young-Chai Ko

A mega-constellation of low earth orbit (LEO) satellites (SATs) is envisaged to provide a global-coverage SAT network in beyond fifth-generation (5G) cellular systems.


Semantics-Native Communication with Contextual Reasoning

no code implementations • 12 Aug 2021 • Hyowoon Seo, Jihong Park, Mehdi Bennis, Mérouane Debbah

Spurred by a huge interest in post-Shannon communication, it has recently been shown that leveraging semantics can significantly improve communication effectiveness across many tasks.

Attention-based Reinforcement Learning for Real-Time UAV Semantic Communication

no code implementations • 22 May 2021 • Won Joon Yun, Byungju Lim, Soyi Jung, Young-Chai Ko, Jihong Park, Joongheon Kim, Mehdi Bennis

In this article, we study the problem of air-to-ground ultra-reliable and low-latency communication (URLLC) for a moving ground user.

Graph Attention • reinforcement-learning +1

Robust Reconfigurable Intelligent Surfaces via Invariant Risk and Causal Representations

no code implementations • 4 May 2021 • Sumudu Samarakoon, Jihong Park, Mehdi Bennis

In this paper, the problem of robust reconfigurable intelligent surface (RIS) system design under changes in data distributions is investigated.

AirMixML: Over-the-Air Data Mixup for Inherently Privacy-Preserving Edge Machine Learning

no code implementations • 2 May 2021 • Yusuke Koda, Jihong Park, Mehdi Bennis, Praneeth Vepakomma, Ramesh Raskar

In AirMixML, multiple workers transmit analog-modulated signals of their private data samples to an edge server, which trains an ML model using the received noisy and superpositioned samples.

BIG-bench Machine Learning • Data Augmentation +1
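The over-the-air mixing described above can be sketched as a noisy, power-weighted superposition of analog transmissions; the sample values, power allocation, and Gaussian noise model below are illustrative assumptions, not the paper's exact signal model:

```python
import random

def air_mixup(samples, powers, noise_std=0.01, seed=0):
    """Analog transmissions superpose in the wireless channel: the received
    signal is a noisy power-weighted mixture of the workers' private
    samples, which acts like a mixup-augmented training sample."""
    rng = random.Random(seed)
    mixed = []
    for dims in zip(*samples):  # superpose dimension-by-dimension
        superposed = sum(p ** 0.5 * x for p, x in zip(powers, dims))
        mixed.append(superposed + rng.gauss(0, noise_std))
    return mixed

# Two workers' private samples mix in the air; the server only ever
# observes the mixture, never either raw sample.
x1, x2 = [1.0, 0.0], [0.0, 1.0]
mixed = air_mixup([x1, x2], powers=[0.7, 0.3])
```

The server trains directly on `mixed`, so privacy comes from the channel itself rather than from explicit encryption.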

Communication-Efficient and Personalized Federated Lottery Ticket Learning

no code implementations • 26 Apr 2021 • Sejin Seo, Seung-Woo Ko, Jihong Park, Seong-Lyun Kim, Mehdi Bennis

The lottery ticket hypothesis (LTH) claims that a deep neural network (i.e., a ground network) contains a number of subnetworks (i.e., winning tickets), each of which exhibits inference capability as accurate as that of the ground network.

Federated Learning • Multi-Task Learning
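The winning-ticket idea above is commonly instantiated by magnitude pruning; this hedged sketch derives a binary subnetwork mask over a toy weight vector (the keep ratio and weights are arbitrary examples, not the paper's method):

```python
# Magnitude pruning: keep the largest-magnitude weights as a candidate
# "winning ticket" mask, zeroing out the rest.

def winning_ticket_mask(weights, keep_ratio=0.5):
    """Return a 0/1 mask keeping the top `keep_ratio` fraction of
    weights ranked by absolute value."""
    k = max(1, int(len(weights) * keep_ratio))
    threshold = sorted((abs(w) for w in weights), reverse=True)[k - 1]
    return [1 if abs(w) >= threshold else 0 for w in weights]

# The two largest-|w| entries survive; the mask defines the subnetwork.
mask = winning_ticket_mask([0.9, -0.05, 0.4, 0.01], keep_ratio=0.5)
```

In LTH-style training, the surviving weights are then rewound to their initial values and the masked subnetwork is retrained.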

Split Learning Meets Koopman Theory for Wireless Remote Monitoring and Prediction

no code implementations • 16 Apr 2021 • Abanoub M. Girgis, Hyowoon Seo, Jihong Park, Mehdi Bennis, Jinho Choi

Numerical results under a non-linear cart-pole environment demonstrate that the proposed split learning of a Koopman autoencoder can locally predict future states, and the prediction accuracy increases with the representation dimension and transmission power.

Robust Blockchained Federated Learning with Model Validation and Proof-of-Stake Inspired Consensus

1 code implementation • 9 Jan 2021 • Hang Chen, Syed Ali Asif, Jihong Park, Chien-Chung Shen, Mehdi Bennis

Federated learning (FL) is a promising distributed learning solution that only exchanges model parameters without revealing raw data.

Federated Learning

Federated Knowledge Distillation

4 code implementations • 4 Nov 2020 • Hyowoon Seo, Jihong Park, Seungeun Oh, Mehdi Bennis, Seong-Lyun Kim

The goal of this chapter is to provide a deep understanding of FD while demonstrating its communication efficiency and applicability to a variety of tasks.

Federated Learning • Knowledge Distillation

Integrating LEO Satellites and Multi-UAV Reinforcement Learning for Hybrid FSO/RF Non-Terrestrial Networks

no code implementations • 20 Oct 2020 • Ju-Hyung Lee, Jihong Park, Mehdi Bennis, Young-Chai Ko

Lastly, thanks to utilizing hybrid FSO/RF links, the proposed scheme achieves up to 62.56x higher peak throughput and 21.09x higher worst-case throughput than the cases utilizing either RF or FSO links, highlighting the importance of co-designing SAT-UAV associations, UAV trajectories, and hybrid FSO/RF links in beyond-5G NTNs.

Dimensionality Reduction • Reinforcement Learning (RL)

When Wireless Communications Meet Computer Vision in Beyond 5G

no code implementations • 13 Oct 2020 • Takayuki Nishio, Yusuke Koda, Jihong Park, Mehdi Bennis, Klaus Doppler

This article articulates the emerging paradigm, sitting at the confluence of computer vision and wireless communication, to enable beyond-5G/6G mission-critical applications (autonomous/remote-controlled vehicles, visuo-haptic VR, and other cyber-physical applications).

Image Reconstruction

Communication Efficient Distributed Learning with Censored, Quantized, and Generalized Group ADMM

no code implementations • 14 Sep 2020 • Chaouki Ben Issaid, Anis Elgabli, Jihong Park, Mehdi Bennis, Mérouane Debbah

In this paper, we propose a communication-efficient decentralized machine learning framework that solves a consensus optimization problem defined over a network of inter-connected workers.


RIS-Assisted Coverage Enhancement in Millimeter-Wave Cellular Networks

no code implementations • 16 Jul 2020 • Mahyar Nemati, Jihong Park, Jinho Choi

The use of millimeter-wave (mmWave) bandwidth is one key enabler to achieve the high data rates in the fifth-generation (5G) cellular systems.

Harnessing Wireless Channels for Scalable and Privacy-Preserving Federated Learning

no code implementations • 3 Jul 2020 • Anis Elgabli, Jihong Park, Chaouki Ben Issaid, Mehdi Bennis

Wireless connectivity is instrumental in enabling scalable federated learning (FL), yet wireless channels bring challenges for model training, in which channel randomness perturbs each worker's model update while multiple workers' updates incur significant interference under limited bandwidth.

Federated Learning • Privacy Preserving

Mix2FLD: Downlink Federated Learning After Uplink Federated Distillation With Two-Way Mixup

no code implementations • 17 Jun 2020 • Seungeun Oh, Jihong Park, Eunjeong Jeong, Hyesung Kim, Mehdi Bennis, Seong-Lyun Kim

This letter proposes a novel communication-efficient and privacy-preserving distributed machine learning framework, coined Mix2FLD.

Federated Learning • Privacy Preserving

XOR Mixup: Privacy-Preserving Data Augmentation for One-Shot Federated Learning

no code implementations • 9 Jun 2020 • MyungJae Shin, Chihoon Hwang, Joongheon Kim, Jihong Park, Mehdi Bennis, Seong-Lyun Kim

User-generated data distributions are often imbalanced across devices and labels, hampering the performance of federated learning (FL).

Data Augmentation • Federated Learning +1
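A minimal sketch of the XOR-based encoding idea behind the paper above: because XOR is its own inverse, a sample mixed with a second sample reveals neither on its own, yet the original is exactly recoverable with the second sample in hand. The byte values below are toy data, and the real scheme operates on encoded sample bits paired with the decoder's own data:

```python
# XOR mixing of two samples, element-wise over integer-coded features.

def xor_mix(a, b):
    """Element-wise XOR of two equal-length integer sequences."""
    return [x ^ y for x, y in zip(a, b)]

x1 = [0b1010, 0b0110]            # private sample to protect
x2 = [0b0011, 0b0101]            # mixing sample
encoded = xor_mix(x1, x2)        # uploaded: exposes neither raw sample
decoded = xor_mix(encoded, x2)   # XOR is involutive: recovers x1 exactly
```

The one-shot FL setting then trains on samples decoded at the server side, avoiding repeated raw-data uploads.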

Proxy Experience Replay: Federated Distillation for Distributed Reinforcement Learning

no code implementations • 13 May 2020 • Han Cha, Jihong Park, Hyesung Kim, Mehdi Bennis, Seong-Lyun Kim

Traditional distributed deep reinforcement learning (RL) commonly relies on exchanging the experience replay memory (RM) of each agent.

Clustering • Data Augmentation +3

Communication-Efficient Massive UAV Online Path Control: Federated Learning Meets Mean-Field Game Theory

no code implementations • 9 Mar 2020 • Hamid Shiri, Jihong Park, Mehdi Bennis

Therefore, the federated learning (FL) approach, which shares the model parameters of the NNs across drones, is proposed together with the NN-based MFG to satisfy the required conditions.

Federated Learning

L-FGADMM: Layer-Wise Federated Group ADMM for Communication Efficient Decentralized Deep Learning

no code implementations • 9 Nov 2019 • Anis Elgabli, Jihong Park, Sabbir Ahmed, Mehdi Bennis

This article proposes a communication-efficient decentralized deep learning algorithm, coined layer-wise federated group ADMM (L-FGADMM).

Federated Learning • Test

Remote UAV Online Path Planning via Neural Network Based Opportunistic Control

no code implementations • 11 Oct 2019 • Hamid Shiri, Jihong Park, Mehdi Bennis

This letter proposes a neural network (NN) aided remote unmanned aerial vehicle (UAV) online control algorithm, coined oHJB.

GADMM: Fast and Communication Efficient Framework for Distributed Machine Learning

no code implementations • 30 Aug 2019 • Anis Elgabli, Jihong Park, Amrit S. Bedi, Mehdi Bennis, Vaneet Aggarwal

When the data is distributed across multiple servers, lowering the communication cost between the servers (or workers) while solving the distributed learning problem is important and is the focus of this paper.

BIG-bench Machine Learning

Federated Reinforcement Distillation with Proxy Experience Memory

no code implementations • 15 Jul 2019 • Han Cha, Jihong Park, Hyesung Kim, Seong-Lyun Kim, Mehdi Bennis

In distributed reinforcement learning, it is common to exchange the experience memory of each agent and thereby collectively train their local models.

Privacy Preserving • reinforcement-learning +1

Multi-hop Federated Private Data Augmentation with Sample Compression

no code implementations • 15 Jul 2019 • Eunjeong Jeong, Seungeun Oh, Jihong Park, Hyesung Kim, Mehdi Bennis, Seong-Lyun Kim

On-device machine learning (ML) has brought access to a tremendous amount of user data while keeping each user's local data private rather than storing it in a central entity.

Data Augmentation

Massive Autonomous UAV Path Planning: A Neural Network Based Mean-Field Game Theoretic Approach

no code implementations • 10 May 2019 • Hamid Shiri, Jihong Park, Mehdi Bennis

Afterwards, each UAV can control its acceleration by locally solving two partial differential equations (PDEs), known as the Hamilton-Jacobi-Bellman (HJB) and Fokker-Planck-Kolmogorov (FPK) equations.

Collision Avoidance
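As context for the abstract above, the HJB and FPK equations it references take the following generic forms in mean-field control (a sketch of the standard textbook forms with value function $v$, density $m$, drift $f$, control $u$, running cost $c$, and noise level $\sigma$; not the paper's exact system):

```latex
% HJB: solved backward in time for the value function v(x,t)
-\frac{\partial v}{\partial t}
  = \min_{u}\Big[ c(x,u) + f(x,u)\cdot\nabla_x v
  + \tfrac{\sigma^2}{2}\,\Delta_x v \Big]

% FPK: evolves the population density m(x,t) forward in time
% under the optimal control u^* obtained from the HJB equation
\frac{\partial m}{\partial t}
  = -\nabla_x\cdot\big( f(x,u^*)\, m \big)
  + \tfrac{\sigma^2}{2}\,\Delta_x m
```

Each UAV solves the backward HJB for its own control and the forward FPK for the population's density, which is what lets control remain local.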

Wireless Network Intelligence at the Edge

no code implementations • 7 Dec 2018 • Jihong Park, Sumudu Samarakoon, Mehdi Bennis, Mérouane Debbah

This requires a novel paradigm change calling for distributed, low-latency and reliable ML at the wireless network edge (referred to as edge ML).

Face Recognition • Medical Diagnosis

Blockchained On-Device Federated Learning

2 code implementations • 12 Aug 2018 • Hyesung Kim, Jihong Park, Mehdi Bennis, Seong-Lyun Kim

By leveraging blockchain, this letter proposes a blockchained federated learning (BlockFL) architecture where local learning model updates are exchanged and verified.

Information Theory • Networking and Internet Architecture
