Search Results for author: Seong-Lyun Kim

Found 27 papers, 2 papers with code

Energy-Efficient Edge Learning via Joint Data Deepening-and-Prefetching

no code implementations · 19 Feb 2024 · Sujin Kook, Won-Yong Shin, Seong-Lyun Kim, Seung-Woo Ko

The vision of pervasive artificial intelligence (AI) services can be realized by training an AI model in a timely manner using real-time data collected by Internet of Things (IoT) devices.

Knowledge Distillation from Language-Oriented to Emergent Communication for Multi-Agent Remote Control

no code implementations · 23 Jan 2024 · Yongjun Kim, Sejin Seo, Jihong Park, Mehdi Bennis, Seong-Lyun Kim, Junil Choi

In this work, we compare emergent communication (EC) built upon multi-agent deep reinforcement learning (MADRL) and language-oriented semantic communication (LSC) empowered by a pre-trained large language model (LLM) using human language.

Knowledge Distillation Language Modelling +1

Generative AI Meets Semantic Communication: Evolution and Revolution of Communication Tasks

no code implementations · 10 Jan 2024 · Eleonora Grassucci, Jihong Park, Sergio Barbarossa, Seong-Lyun Kim, Jinho Choi, Danilo Comminiello

Disclosing the capabilities of generative models in semantic communication paves the way for a paradigm shift with respect to conventional communication systems, with great potential to reduce the amount of data traffic and to offer revolutionary versatility for novel tasks and applications that were not even conceivable a few years ago.

Denoising

Mobility-Induced Graph Learning for WiFi Positioning

no code implementations · 14 Nov 2023 · Kyuwon Han, Seung Min Yu, Seong-Lyun Kim, Seung-Woo Ko

Smartphone-based user mobility tracking can be effective for locating a user, but the unpredictable error caused by the low specification of built-in inertial measurement units (IMUs) precludes its standalone use and calls for integration with another positioning technique such as WiFi positioning.

Graph Learning Self-Supervised Learning

Towards Semantic Communication Protocols for 6G: From Protocol Learning to Language-Oriented Approaches

no code implementations · 14 Oct 2023 · Jihong Park, Seung-Woo Ko, Jinho Choi, Seong-Lyun Kim, Mehdi Bennis

The proposed framework spans three levels: Level 1 MAC, with protocols learned by neural networks; Level 2 MAC, with neural network-oriented symbolic protocols developed by converting Level 1 MAC outputs into explicit symbols; and Level 3 MAC, with language-oriented semantic protocols.

Semantics Alignment via Split Learning for Resilient Multi-User Semantic Communication

no code implementations · 13 Oct 2023 · Jinhyuk Choi, Jihong Park, Seung-Woo Ko, Jinho Choi, Mehdi Bennis, Seong-Lyun Kim

In this method, referred to as SL with layer freezing (SLF), each encoder downloads a misaligned decoder, and locally fine-tunes a fraction of these encoder-decoder NN layers.
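The layer-freezing idea above can be illustrated with a toy sketch. This is not the paper's implementation; the function name, the list-of-lists weight representation, and the single-gradient-step update are all illustrative assumptions showing only the core mechanism: earlier layers stay frozen while the last few layers are fine-tuned.

```python
def fine_tune_with_freezing(layers, grads, lr=0.1, n_trainable=1):
    """Update only the last `n_trainable` layers; earlier layers stay frozen.

    `layers` and `grads` are lists of per-layer weight vectors (plain lists),
    ordered from the input side to the output side.
    """
    cutoff = len(layers) - n_trainable
    updated = []
    for i, (w, g) in enumerate(zip(layers, grads)):
        if i < cutoff:
            updated.append(list(w))  # frozen layer: copied unchanged
        else:
            # trainable layer: one SGD step on this layer's weights
            updated.append([wi - lr * gi for wi, gi in zip(w, g)])
    return updated

# toy 3-layer encoder: fine-tune only the last layer
layers = [[1.0, 1.0], [2.0, 2.0], [3.0, 3.0]]
grads = [[1.0, 1.0]] * 3
new = fine_tune_with_freezing(layers, grads, lr=0.5, n_trainable=1)
print(new)  # [[1.0, 1.0], [2.0, 2.0], [2.5, 2.5]]
```

Freezing most layers keeps the locally fine-tuned fraction small, which is what limits computation and communication in the SLF setting described above.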

Language-Oriented Communication with Semantic Coding and Knowledge Distillation for Text-to-Image Generation

no code implementations · 20 Sep 2023 · Hyelin Nam, Jihong Park, Jinho Choi, Mehdi Bennis, Seong-Lyun Kim

By integrating recent advances in large language models (LLMs) and generative models into the emerging semantic communication (SC) paradigm, in this article we put forward a novel framework of language-oriented semantic communication (LSC).

In-Context Learning Knowledge Distillation +1

Sequential Semantic Generative Communication for Progressive Text-to-Image Generation

no code implementations · 8 Sep 2023 · Hyelin Nam, Jihong Park, Jinho Choi, Seong-Lyun Kim

Our work is expected to pave a new road toward utilizing state-of-the-art generative models in real communication systems.

Sentence Text-to-Image Generation

SplitAMC: Split Learning for Robust Automatic Modulation Classification

no code implementations · 17 Apr 2023 · Jihoon Park, Seungeun Oh, Seong-Lyun Kim

Automatic modulation classification (AMC) is a technology that identifies a modulation scheme without prior signal information and plays a vital role in various applications, including cognitive radio and link adaptation.

Classification Federated Learning

Enabling the Wireless Metaverse via Semantic Multiverse Communication

no code implementations · 13 Dec 2022 · Jihong Park, Jinho Choi, Seong-Lyun Kim, Mehdi Bennis

Metaverse over wireless networks is an emerging use case of the sixth generation (6G) wireless systems, posing unprecedented challenges in terms of its multi-modal data transmissions with stringent latency and reliability requirements.

Multi-agent Reinforcement Learning

Enabling AI Quality Control via Feature Hierarchical Edge Inference

no code implementations · 15 Nov 2022 · Jinhyuk Choi, Seong-Lyun Kim, Seung-Woo Ko

Specifically, the feature network is designed based on a feature hierarchy, i.e., a one-directional feature dependency across different scales.

Edge-computing

Differentially Private CutMix for Split Learning with Vision Transformer

no code implementations · 28 Oct 2022 · Seungeun Oh, Jihong Park, Sihun Baek, Hyelin Nam, Praneeth Vepakomma, Ramesh Raskar, Mehdi Bennis, Seong-Lyun Kim

Split learning (SL) circumvents this by communicating smashed data at a cut layer, yet it suffers from data privacy leakage and large communication costs caused by the high similarity between a ViT's smashed data and its input data.

Federated Learning Privacy Preserving
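The cut-layer and CutMix ideas above can be sketched in a few lines. This is a deliberately simplified toy, not the paper's DP-CutMix: the one-weight "lower layers", the patch-swap point, and the Gaussian perturbation scale are all illustrative assumptions, meant only to show smashed data being mixed across clients before upload.

```python
import random

def cut_layer_forward(x, w):
    """Client-side lower layers up to the cut layer (here: one linear map)."""
    return [w * xi for xi in x]

def cutmix_smashed(smashed_a, smashed_b, cut_point, noise_scale, rng):
    """Patch-swap two clients' smashed data and perturb it before upload."""
    mixed = smashed_a[:cut_point] + smashed_b[cut_point:]
    return [m + rng.gauss(0.0, noise_scale) for m in mixed]

rng = random.Random(0)
sa = cut_layer_forward([1.0, 2.0, 3.0, 4.0], w=0.5)  # client A's smashed data
sb = cut_layer_forward([4.0, 3.0, 2.0, 1.0], w=0.5)  # client B's smashed data
mixed = cutmix_smashed(sa, sb, cut_point=2, noise_scale=0.01, rng=rng)
print(len(mixed))  # 4
```

The server only ever sees the mixed, perturbed activations, which is the intuition behind reducing the similarity between uploaded smashed data and any single client's input.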

Towards Semantic Communication Protocols: A Probabilistic Logic Perspective

no code implementations · 8 Jul 2022 · Sejin Seo, Jihong Park, Seung-Woo Ko, Jinho Choi, Mehdi Bennis, Seong-Lyun Kim

Classical medium access control (MAC) protocols are interpretable, yet their task-agnostic control signaling messages (CMs) are ill-suited for emerging mission-critical applications.

Collision Avoidance

Two-Stage Deep Anomaly Detection with Heterogeneous Time Series Data

no code implementations · 10 Feb 2022 · Kyeong-Joong Jeong, Jin-Duk Park, Kyusoon Hwang, Seong-Lyun Kim, Won-Yong Shin

We introduce a data-driven anomaly detection framework using a manufacturing dataset collected from a factory assembly line.

Anomaly Detection Time Series +2

Communication-Efficient and Personalized Federated Lottery Ticket Learning

no code implementations · 26 Apr 2021 · Sejin Seo, Seung-Woo Ko, Jihong Park, Seong-Lyun Kim, Mehdi Bennis

The lottery ticket hypothesis (LTH) claims that a deep neural network (i.e., a ground network) contains a number of subnetworks (i.e., winning tickets), each of which exhibits inference accuracy identical to that of the ground network.

Federated Learning Multi-Task Learning
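A winning ticket is typically identified by magnitude pruning: keep the largest-magnitude weights and zero out the rest. The sketch below is a minimal illustration of that mask-building step, not the paper's method; the function name, the flat weight list, and the keep ratio are assumptions.

```python
def winning_ticket_mask(weights, keep_ratio):
    """Keep the largest-magnitude weights; prune the rest.

    Returns a 0/1 mask of the same length as `weights`.
    """
    k = max(1, int(len(weights) * keep_ratio))
    # magnitude of the k-th largest weight serves as the pruning threshold
    threshold = sorted((abs(w) for w in weights), reverse=True)[k - 1]
    return [1 if abs(w) >= threshold else 0 for w in weights]

weights = [0.9, -0.05, 0.4, -0.8, 0.01, 0.3]
mask = winning_ticket_mask(weights, keep_ratio=0.5)
print(mask)  # [1, 0, 1, 1, 0, 0]
```

In the LTH procedure, the surviving weights are then rewound to their initial values and the sparse subnetwork is retrained from there.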

Federated Knowledge Distillation

4 code implementations · 4 Nov 2020 · Hyowoon Seo, Jihong Park, Seungeun Oh, Mehdi Bennis, Seong-Lyun Kim

The goal of this chapter is to provide a deep understanding of FD while demonstrating its communication efficiency and applicability to a variety of tasks.

Federated Learning Knowledge Distillation
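Federated distillation's communication efficiency comes from exchanging model outputs rather than model parameters: each device uploads its average logits per label. The sketch below is an illustrative toy of that aggregation step under assumed data structures (label/logit tuples), not the chapter's reference implementation.

```python
def local_logit_averages(samples):
    """Average a device's logits per label (the small payload FD uploads).

    `samples` is a list of (label, logits) pairs from local training data.
    """
    sums, counts = {}, {}
    for label, logits in samples:
        acc = sums.setdefault(label, [0.0] * len(logits))
        for j, v in enumerate(logits):
            acc[j] += v
        counts[label] = counts.get(label, 0) + 1
    # per-label mean logits: a few floats per label, independent of model size
    return {lbl: [v / counts[lbl] for v in acc] for lbl, acc in sums.items()}

device_samples = [(0, [1.0, 0.0]), (0, [0.5, 0.5]), (1, [0.0, 1.0])]
print(local_logit_averages(device_samples))  # {0: [0.75, 0.25], 1: [0.0, 1.0]}
```

Because the upload size scales with the number of labels rather than the number of model parameters, this is far cheaper than exchanging full weight vectors as in federated learning.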

Mix2FLD: Downlink Federated Learning After Uplink Federated Distillation With Two-Way Mixup

no code implementations · 17 Jun 2020 · Seungeun Oh, Jihong Park, Eunjeong Jeong, Hyesung Kim, Mehdi Bennis, Seong-Lyun Kim

This letter proposes a novel communication-efficient and privacy-preserving distributed machine learning framework, coined Mix2FLD.

Federated Learning Privacy Preserving

XOR Mixup: Privacy-Preserving Data Augmentation for One-Shot Federated Learning

no code implementations · 9 Jun 2020 · MyungJae Shin, Chihoon Hwang, Joongheon Kim, Jihong Park, Mehdi Bennis, Seong-Lyun Kim

User-generated data distributions are often imbalanced across devices and labels, hampering the performance of federated learning (FL).

Data Augmentation Federated Learning +1
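The core XOR property behind this kind of encoding can be shown in a couple of lines. This is only a minimal sketch of the bitwise mechanism on binarized samples, with made-up data; in the actual XOR Mixup scheme the decoding side uses its own samples, so reconstruction is approximate rather than exact as in this toy.

```python
def xor_encode(sample, key_sample):
    """Encode a binary sample by bitwise XOR with another sample."""
    return [a ^ b for a, b in zip(sample, key_sample)]

# XOR-ing twice with the same key recovers the original sample exactly
data = [1, 0, 1, 1, 0]
key = [0, 1, 1, 0, 0]
encoded = xor_encode(data, key)   # what leaves the device
decoded = xor_encode(encoded, key)
print(encoded, decoded)  # [1, 1, 0, 1, 0] [1, 0, 1, 1, 0]
```

The encoded sample alone reveals neither `data` nor `key`, which is the privacy intuition behind uploading XOR-mixed samples instead of raw ones.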

Understanding Uncertainty of Edge Computing: New Principle and Design Approach

no code implementations · 1 Jun 2020 · Sejin Seo, Sang Won Choi, Sujin Kook, Seong-Lyun Kim, Seung-Woo Ko

Due to the edge's position between the cloud and the users, and the recent surge of deep neural network (DNN) applications, edge computing brings about uncertainties that must be understood separately.

Information Theory Networking and Internet Architecture

Proxy Experience Replay: Federated Distillation for Distributed Reinforcement Learning

no code implementations · 13 May 2020 · Han Cha, Jihong Park, Hyesung Kim, Mehdi Bennis, Seong-Lyun Kim

Traditional distributed deep reinforcement learning (RL) commonly relies on exchanging the experience replay memory (RM) of each agent.

Clustering Data Augmentation +3
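The gain of a proxy over a raw replay memory can be sketched numerically. This toy compresses scalar "experiences" by simple group averaging; the actual method clusters state-action experiences rather than averaging scalars, so treat the function name, the grouping rule, and the data as illustrative assumptions only.

```python
def make_proxy_memory(replay_memory, group_size):
    """Average raw experiences in fixed-size groups to form a proxy memory."""
    proxy = []
    for i in range(0, len(replay_memory), group_size):
        group = replay_memory[i:i + group_size]
        proxy.append(sum(group) / len(group))  # one proxy sample per group
    return proxy

# each agent shares 2 proxy samples instead of 6 raw experiences
rm = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
print(make_proxy_memory(rm, group_size=3))  # [2.0, 5.0]
```

Exchanging the small proxy memory instead of the full replay memory is what reduces both communication cost and the leakage of raw experiences.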

Multi-hop Federated Private Data Augmentation with Sample Compression

no code implementations · 15 Jul 2019 · Eunjeong Jeong, Seungeun Oh, Jihong Park, Hyesung Kim, Mehdi Bennis, Seong-Lyun Kim

On-device machine learning (ML) has made a tremendous amount of user data accessible while keeping local data private, rather than storing it in a central entity.

Data Augmentation

Federated Reinforcement Distillation with Proxy Experience Memory

no code implementations · 15 Jul 2019 · Han Cha, Jihong Park, Hyesung Kim, Seong-Lyun Kim, Mehdi Bennis

In distributed reinforcement learning, it is common to exchange the experience memory of each agent and thereby collectively train their local models.

Privacy Preserving reinforcement-learning +1

Blockchained On-Device Federated Learning

2 code implementations · 12 Aug 2018 · Hyesung Kim, Jihong Park, Mehdi Bennis, Seong-Lyun Kim

By leveraging blockchain, this letter proposes a blockchained federated learning (BlockFL) architecture where local learning model updates are exchanged and verified.

Information Theory Networking and Internet Architecture
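The exchange-and-verify idea in BlockFL can be illustrated with a minimal hash chain of model updates. This is a generic sketch under assumed names and data structures (JSON-serialized blocks, SHA-256 links), not the paper's consensus protocol: it shows only that a tampered update breaks the chain and is therefore detectable.

```python
import hashlib
import json

def make_block(updates, prev_hash):
    """Bundle local model updates into a block chained by hash."""
    body = json.dumps({"updates": updates, "prev": prev_hash}, sort_keys=True)
    return {"updates": updates, "prev": prev_hash,
            "hash": hashlib.sha256(body.encode()).hexdigest()}

def verify_chain(chain):
    """Recompute each block's hash and check the links between blocks."""
    for prev, blk in zip(chain, chain[1:]):
        if blk["prev"] != prev["hash"]:
            return False  # broken link between consecutive blocks
        body = json.dumps({"updates": blk["updates"], "prev": blk["prev"]},
                          sort_keys=True)
        if hashlib.sha256(body.encode()).hexdigest() != blk["hash"]:
            return False  # block contents no longer match its stored hash
    return True

genesis = make_block([], "0" * 64)
b1 = make_block([[0.1, -0.2]], genesis["hash"])  # one device's local update
chain = [genesis, b1]
print(verify_chain(chain))  # True
b1["updates"][0][0] = 9.9   # tamper with an exchanged update
print(verify_chain(chain))  # False
```

Because every block commits to the previous block's hash, any participant can re-verify the exchanged model updates without trusting a central server.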
