Search Results for author: Hyeji Kim

Found 30 papers, 19 papers with code

Importance Sampling via Score-based Generative Models

no code implementations • 7 Feb 2025 • Heasung Kim, Taekyun Lee, Hyeji Kim, Gustavo de Veciana

Importance sampling, which involves sampling from a probability density function (PDF) proportional to the product of an importance weight function and a base PDF, is a powerful technique with applications in variance reduction, biased or customized sampling, data augmentation, and beyond.

Data Augmentation
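The definition above — drawing samples from a target PDF proportional to an importance weight function times a base PDF — can be illustrated with a minimal self-normalized importance sampling sketch in plain Python. This is a generic illustration of the classical technique under assumed toy distributions, not the paper's score-based generative method.

```python
import math
import random

def self_normalized_is(base_sampler, weight_fn, f, n=100_000, seed=0):
    """Estimate E_target[f(X)], where target(x) ∝ weight_fn(x) * base(x),
    using only samples drawn from the base distribution."""
    rng = random.Random(seed)
    xs = [base_sampler(rng) for _ in range(n)]
    ws = [weight_fn(x) for x in xs]
    # Self-normalization: dividing by the sum of weights cancels the
    # unknown normalizing constant of the target density.
    return sum(w * f(x) for w, x in zip(ws, xs)) / sum(ws)

# Base PDF: standard normal.  Importance weight w(x) = exp(x) tilts it;
# the tilted target is N(1, 1), so the estimated mean should be near 1.
est = self_normalized_is(lambda rng: rng.gauss(0.0, 1.0),
                         lambda x: math.exp(x),
                         lambda x: x)
```

The same estimator form underlies variance reduction and the biased/customized sampling applications the abstract mentions; only the weight function changes.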

Neural Cover Selection for Image Steganography

1 code implementation • 23 Oct 2024 • Karl Chahine, Hyeji Kim

In steganography, selecting an optimal cover image, referred to as cover selection, is pivotal for effective message concealment.

Image Steganography

Generating High Dimensional User-Specific Wireless Channels using Diffusion Models

no code implementations • 5 Sep 2024 • Taekyun Lee, Juseong Park, Hyeji Kim, Jeffrey G. Andrews

Deep neural network (DNN)-based algorithms are emerging as an important tool for many physical and MAC layer functions in future wireless communication systems, including for large multi-antenna channels.

Denoising

Enhancing K-user Interference Alignment for Discrete Constellations via Learning

no code implementations • 21 Jul 2024 • Rajesh Mishra, Syed Jafar, Sriram Vishwanath, Hyeji Kim

We propose a novel deep learning-based approach to design the encoder and decoder functions that aim to maximize the sum-rate of the interference channel for discrete constellations.

Decoder

Local to Global: Learning Dynamics and Effect of Initialization for Transformers

1 code implementation • 5 Jun 2024 • Ashok Vardhan Makkuva, Marco Bondaschi, Chanakya Ekbote, Adway Girish, Alliot Nagle, Hyeji Kim, Michael Gastpar

In this paper, we address this by focusing on first-order Markov chains and single-layer transformers, providing a comprehensive characterization of the learning dynamics in this context.

LightCode: Light Analytical and Neural Codes for Channels with Feedback

1 code implementation • 16 Mar 2024 • Sravan Kumar Ankireddy, Krishna Narayanan, Hyeji Kim

The design of reliable and efficient codes for channels with feedback remains a longstanding challenge in communication theory.

Attention with Markov: A Framework for Principled Analysis of Transformers via Markov Chains

1 code implementation • 6 Feb 2024 • Ashok Vardhan Makkuva, Marco Bondaschi, Adway Girish, Alliot Nagle, Martin Jaggi, Hyeji Kim, Michael Gastpar

Inspired by the Markovianity of natural languages, we model the data as a Markovian source and utilize this framework to systematically study the interplay between the data-distributional properties, the transformer architecture, the learnt distribution, and the final model performance.
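To make the Markovian modeling assumption concrete, here is a small sketch (my own illustration, not the paper's code) that draws a binary first-order Markov sequence and recovers its transition probabilities from bigram counts — exactly the conditional distribution a next-token predictor must learn for such a source.

```python
import random

def sample_markov(p, q, n, seed=0):
    """Binary first-order Markov chain with P(1|0) = p and P(0|1) = q."""
    rng = random.Random(seed)
    x = [0]
    for _ in range(n - 1):
        if x[-1] == 0:
            x.append(1 if rng.random() < p else 0)
        else:
            x.append(0 if rng.random() < q else 1)
    return x

def bigram_mle(x):
    """Empirical transition probabilities P(next=1 | current=a).  For a
    first-order source this is the optimal next-token predictor that a
    trained model should approximate."""
    counts = {(a, b): 0 for a in (0, 1) for b in (0, 1)}
    for a, b in zip(x, x[1:]):
        counts[(a, b)] += 1
    return {a: counts[(a, 1)] / (counts[(a, 0)] + counts[(a, 1)])
            for a in (0, 1)}

probs = bigram_mle(sample_markov(0.3, 0.6, 200_000))
# probs[0] should be near 0.3 (= P(1|0)); probs[1] near 0.4 (= 1 - 0.6).
```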

TinyTurbo: Efficient Turbo Decoders on Edge

1 code implementation • 30 Sep 2022 • S Ashwin Hebbar, Rajesh K Mishra, Sravan Kumar Ankireddy, Ashok V Makkuva, Hyeji Kim, Pramod Viswanath

In this paper, we introduce a neural-augmented decoder for Turbo codes called TinyTurbo.

Decoder

Interpreting Neural Min-Sum Decoders

1 code implementation • 21 May 2022 • Sravan Kumar Ankireddy, Hyeji Kim

In decoding linear block codes, it was shown that noticeable reliability gains can be achieved by introducing learnable parameters to the Belief Propagation (BP) decoder.

Decoder
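As an illustration of where such learnable parameters enter, here is a sketch of a single normalized min-sum check-node update, with a scalar weight `alpha` standing in for the learned parameter. This is the generic textbook update under an assumed message layout, not the paper's implementation, where weights may be learned per edge or per iteration.

```python
def check_node_update(incoming, alpha=0.8):
    """Normalized min-sum check-to-variable message update.  The message
    on each edge uses the sign product and minimum magnitude of all
    *other* incoming variable-to-check LLRs, scaled by alpha.  In neural
    min-sum decoders, alpha is learned rather than hand-tuned."""
    out = []
    for i in range(len(incoming)):
        others = incoming[:i] + incoming[i + 1:]
        sign = 1.0
        for m in others:
            sign *= 1.0 if m >= 0 else -1.0
        out.append(alpha * sign * min(abs(m) for m in others))
    return out

# For incoming LLRs [2.0, -1.5, 0.5] the outgoing messages are
# approximately [-0.4, 0.4, -1.2] with alpha = 0.8.
msgs = check_node_update([2.0, -1.5, 0.5])
```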

DeepIC: Coding for Interference Channels via Deep Learning

no code implementations • 13 Aug 2021 • Karl Chahine, Nanyang Ye, Hyeji Kim

Interestingly, it is shown that there exists an asymptotic scheme, the Han-Kobayashi scheme, that performs better than time division (TD) and treating interference as noise (TIN).

Decoder • Deep Learning

A Channel Coding Benchmark for Meta-Learning

1 code implementation • 15 Jul 2021 • Rui Li, Ondrej Bohdal, Rajesh Mishra, Hyeji Kim, Da Li, Nicholas Lane, Timothy Hospedales

We use our MetaCC benchmark to study several aspects of meta-learning, including the impact of task distribution breadth and shift, which can be controlled in the coding problem.

Meta-Learning

Neural Distributed Source Coding

1 code implementation • 5 Jun 2021 • Jay Whang, Alliot Nagle, Anish Acharya, Hyeji Kim, Alexandros G. Dimakis

Distributed source coding (DSC) is the task of encoding an input in the absence of correlated side information that is only available to the decoder.

Decoder

Deepcode and Modulo-SK are Designed for Different Settings

no code implementations • 18 Aug 2020 • Hyeji Kim, Yihan Jiang, Sreeram Kannan, Sewoong Oh, Pramod Viswanath

DeepCode is designed and evaluated for the AWGN channel with (potentially delayed) uncoded output feedback.

HAPI: Hardware-Aware Progressive Inference

no code implementations • 10 Aug 2020 • Stefanos Laskaridis, Stylianos I. Venieris, Hyeji Kim, Nicholas D. Lane

Convolutional neural networks (CNNs) have recently become the state-of-the-art across a wide range of AI tasks.

BRP-NAS: Prediction-based NAS using GCNs

2 code implementations • NeurIPS 2020 • Łukasz Dudziak, Thomas Chau, Mohamed S. Abdelfattah, Royson Lee, Hyeji Kim, Nicholas D. Lane

What is more, we investigate prediction quality on different metrics and show that sample efficiency of the predictor-based NAS can be improved by considering binary relations of models and an iterative data selection strategy.

Neural Architecture Search • Prediction

Journey Towards Tiny Perceptual Super-Resolution

2 code implementations • ECCV 2020 • Royson Lee, Łukasz Dudziak, Mohamed Abdelfattah, Stylianos I. Venieris, Hyeji Kim, Hongkai Wen, Nicholas D. Lane

Recent works in single-image perceptual super-resolution (SR) have demonstrated unprecedented performance in generating realistic textures by means of deep convolutional networks.

Neural Architecture Search • Super-Resolution

Best of Both Worlds: AutoML Codesign of a CNN and its Hardware Accelerator

no code implementations • 11 Feb 2020 • Mohamed S. Abdelfattah, Łukasz Dudziak, Thomas Chau, Royson Lee, Hyeji Kim, Nicholas D. Lane

We automate HW-CNN codesign using NAS by including parameters from both the CNN model and the HW accelerator, and we jointly search for the best model-accelerator pair that boosts accuracy and efficiency.

General Classification • Image Classification • +3

Turbo Autoencoder: Deep learning based channel codes for point-to-point communication channels

1 code implementation • NeurIPS 2019 • Yihan Jiang, Hyeji Kim, Himanshu Asnani, Sreeram Kannan, Sewoong Oh, Pramod Viswanath

Designing codes that combat the noise in a communication medium has remained a significant area of research in information theory as well as wireless communications.

Decoder

DeepTurbo: Deep Turbo Decoder

1 code implementation • 6 Mar 2019 • Yihan Jiang, Hyeji Kim, Himanshu Asnani, Sreeram Kannan, Sewoong Oh, Pramod Viswanath

We focus on Turbo codes and propose DeepTurbo, a novel deep learning based architecture for Turbo decoding.

Decoder

LEARN Codes: Inventing Low-latency Codes via Recurrent Neural Networks

1 code implementation • 30 Nov 2018 • Yihan Jiang, Hyeji Kim, Himanshu Asnani, Sreeram Kannan, Sewoong Oh, Pramod Viswanath

Designing channel codes under low-latency constraints is one of the most demanding requirements in 5G standards.

Decoder

Efficient Neural Network Compression

1 code implementation • CVPR 2019 • Hyeji Kim, Muhammad Umar Karim Khan, Chong-Min Kyung

The better accuracy and complexity compromise, as well as the extremely fast speed of our method makes it suitable for neural network compression.

Efficient Neural Network • Neural Network Compression

Deepcode: Feedback Codes via Deep Learning

1 code implementation • NeurIPS 2018 • Hyeji Kim, Yihan Jiang, Sreeram Kannan, Sewoong Oh, Pramod Viswanath

The design of codes for communicating reliably over a statistically well defined channel is an important endeavor involving deep mathematical research and wide-ranging practical applications.

Deep Learning

Automatic Rank Selection for High-Speed Convolutional Neural Network

no code implementations • 28 Jun 2018 • Hyeji Kim, Chong-Min Kyung

In this paper, we define rank selection as a combinatorial optimization problem and propose a methodology to minimize network complexity while maintaining the desired accuracy.

Combinatorial Optimization • Vocal Bursts Intensity Prediction

Communication Algorithms via Deep Learning

3 code implementations • ICLR 2018 • Hyeji Kim, Yihan Jiang, Ranvir Rana, Sreeram Kannan, Sewoong Oh, Pramod Viswanath

We show that creatively designed and trained RNN architectures can decode well-known sequential codes, such as convolutional and turbo codes, with close to optimal performance on the additive white Gaussian noise (AWGN) channel; that optimal performance is itself achieved by breakthrough algorithms of our times (the Viterbi and BCJR decoders, representing dynamic programming and forward-backward algorithms).

Deep Learning • Ingenuity
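For reference, the classical baseline mentioned above can be sketched compactly: a hard-decision Viterbi decoder for the rate-1/2, constraint-length-3 convolutional code with octal generators (7, 5). This is a textbook dynamic-programming illustration under an assumed register convention, not the paper's RNN decoder.

```python
G = (0b111, 0b101)   # generator polynomials (7, 5) in octal
N_STATES = 4         # two memory bits

def branch_out(state, bit):
    """Encoder output (2 bits) and next state for one input bit."""
    reg = (bit << 2) | state                      # [newest, s1, s0]
    out = tuple(bin(reg & g).count("1") & 1 for g in G)
    return out, reg >> 1

def conv_encode(bits):
    state, out = 0, []
    for b in bits:
        o, state = branch_out(state, b)
        out.extend(o)
    return out

def viterbi_decode(received, n_bits):
    """Dynamic programming over the code trellis with Hamming metric."""
    INF = float("inf")
    metric = [0.0] + [INF] * (N_STATES - 1)       # start in all-zero state
    paths = [[] for _ in range(N_STATES)]
    for t in range(n_bits):
        r = received[2 * t:2 * t + 2]
        new_metric = [INF] * N_STATES
        new_paths = [None] * N_STATES
        for s in range(N_STATES):
            if metric[s] == INF:
                continue
            for b in (0, 1):
                o, ns = branch_out(s, b)
                m = metric[s] + (o[0] != r[0]) + (o[1] != r[1])
                if m < new_metric[ns]:
                    new_metric[ns] = m
                    new_paths[ns] = paths[s] + [b]
        metric, paths = new_metric, new_paths
    return paths[min(range(N_STATES), key=lambda s: metric[s])]

msg = [1, 0, 1, 1, 0, 0, 1, 0]
decoded = viterbi_decode(conv_encode(msg), len(msg))   # recovers msg noiselessly
```

On a noiseless channel the survivor path with metric zero is the transmitted message; the learned RNN decoders in the paper are benchmarked against exactly this kind of dynamic-programming decoder under noise.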
