Search Results for author: Hyeji Kim

Found 19 papers, 11 papers with code

TinyTurbo: Efficient Turbo Decoders on Edge

1 code implementation • 30 Sep 2022 • S Ashwin Hebbar, Rajesh K Mishra, Sravan Kumar Ankireddy, Ashok V Makkuva, Hyeji Kim, Pramod Viswanath

In this paper, we introduce a neural-augmented decoder for Turbo codes called TinyTurbo.

Neural Augmented Min-Sum Decoding of Short Block Codes for Fading Channels

no code implementations • 21 May 2022 • Sravan Kumar Ankireddy, Hyeji Kim

The second is the interpretation of the weights learned and their effect on the reliability of the BP decoder.
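For context, the decoder family studied here is weighted (neural) min-sum belief propagation, where the standard min-sum check-node update is scaled by learned weights. A minimal sketch of one such update, assuming a single scalar weight for brevity (trained decoders typically learn per-edge or per-iteration weights):

```python
import numpy as np

def weighted_min_sum_check(msgs, weight):
    """One weighted min-sum check-node update.

    msgs: LLR messages arriving at a check node from its variable nodes.
    weight: learned scalar in (0, 1] that damps the min-sum overestimate.
    Each outgoing message excludes the corresponding incoming one
    (the extrinsic rule).
    """
    msgs = np.asarray(msgs, dtype=float)
    out = np.empty(len(msgs))
    for i in range(len(msgs)):
        others = np.delete(msgs, i)
        sign = np.prod(np.sign(others))
        out[i] = weight * sign * np.min(np.abs(others))
    return out
```

For example, `weighted_min_sum_check([2.0, -1.5, 0.5], 0.8)` returns `[-0.4, 0.4, -1.2]`.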

DeepIC: Coding for Interference Channels via Deep Learning

no code implementations • 13 Aug 2021 • Karl Chahine, Nanyang Ye, Hyeji Kim

Interestingly, it is shown that there exists an asymptotic scheme, called the Han-Kobayashi scheme, that performs better than time division (TD) and treating interference as noise (TIN).

A Channel Coding Benchmark for Meta-Learning

1 code implementation • 15 Jul 2021 • Rui Li, Ondrej Bohdal, Rajesh Mishra, Hyeji Kim, Da Li, Nicholas Lane, Timothy Hospedales

We use our MetaCC benchmark to study several aspects of meta-learning, including the impact of task distribution breadth and shift, which can be controlled in the coding problem.


Neural Distributed Source Coding

no code implementations • 5 Jun 2021 • Jay Whang, Anish Acharya, Hyeji Kim, Alexandros G. Dimakis

Distributed source coding (DSC) is the task of encoding an input without access to correlated side information that is available only to the decoder.
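A classic toy instance of DSC (Slepian-Wolf coding via syndromes, not the paper's neural approach): if the decoder's side information y differs from the source x in at most one bit, the encoder can send just the 3-bit Hamming(7,4) syndrome of x instead of all 7 bits:

```python
import numpy as np

# Parity-check matrix of the (7,4) Hamming code; column j is j in binary.
H = np.array([[0, 0, 0, 1, 1, 1, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [1, 0, 1, 0, 1, 0, 1]])

def encode(x):
    """Encoder: transmit only the 3-bit syndrome of x (vs. 7 raw bits)."""
    return H @ x % 2

def decode(syndrome, y):
    """Decoder: recover x from its syndrome plus side information y,
    assuming x and y differ in at most one position."""
    e_syn = (H @ y + syndrome) % 2               # syndrome of the error y ^ x
    x_hat = y.copy()
    if e_syn.any():                              # nonzero -> exactly one flip
        pos = int("".join(map(str, e_syn)), 2) - 1   # column index in H
        x_hat[pos] ^= 1
    return x_hat

x = np.array([1, 0, 1, 1, 0, 0, 1])
y = x.copy()
y[4] ^= 1                                        # side info differs in one bit
assert np.array_equal(decode(encode(x), y), x)
```

The encoder never sees y; the correlation structure alone lets the decoder fill in the rest, which is the effect the paper's learned codes aim for on general sources.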

Deepcode and Modulo-SK are Designed for Different Settings

no code implementations • 18 Aug 2020 • Hyeji Kim, Yihan Jiang, Sreeram Kannan, Sewoong Oh, Pramod Viswanath

Deepcode is designed and evaluated for the AWGN channel with (potentially delayed) uncoded output feedback.

HAPI: Hardware-Aware Progressive Inference

no code implementations • 10 Aug 2020 • Stefanos Laskaridis, Stylianos I. Venieris, Hyeji Kim, Nicholas D. Lane

Convolutional neural networks (CNNs) have recently become the state of the art across a wide range of AI tasks.

BRP-NAS: Prediction-based NAS using GCNs

2 code implementations • NeurIPS 2020 • Łukasz Dudziak, Thomas Chau, Mohamed S. Abdelfattah, Royson Lee, Hyeji Kim, Nicholas D. Lane

Moreover, we investigate prediction quality on different metrics and show that the sample efficiency of predictor-based NAS can be improved by considering binary relations between models and an iterative data selection strategy.
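The binary-relation idea can be illustrated independently of the paper's GCN predictor: train a ranker only on pairwise "which model is better" labels, then use it to order candidates. A toy sketch with a linear ranker on random features (the entire setup is hypothetical, not BRP-NAS itself):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: each architecture is a feature vector whose true
# quality is a hidden linear function of those features.
d, n_models = 8, 64
w_true = rng.normal(size=d)
feats = rng.normal(size=(n_models, d))
scores = feats @ w_true

# Train on binary relations only: for a random pair, the label says
# which model is better. Logistic (Bradley-Terry style) updates.
w, lr = np.zeros(d), 0.1
for _ in range(2000):
    i, j = rng.integers(n_models, size=2)
    if i == j:
        continue
    y = 1.0 if scores[i] > scores[j] else 0.0
    diff = feats[i] - feats[j]
    p = 1.0 / (1.0 + np.exp(-(w @ diff)))    # P(model i beats model j)
    w += lr * (y - p) * diff                 # logistic-loss gradient step

# Fraction of all pairs the learned ranker orders correctly.
pairs = [(i, j) for i in range(n_models) for j in range(i + 1, n_models)]
acc = np.mean([((feats[i] - feats[j]) @ w > 0) == (scores[i] > scores[j])
               for i, j in pairs])
```

Pairwise labels are cheaper to exploit than exact accuracy regression because ranking, not absolute value, is what NAS selection needs.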

Neural Architecture Search

Journey Towards Tiny Perceptual Super-Resolution

2 code implementations • ECCV 2020 • Royson Lee, Łukasz Dudziak, Mohamed Abdelfattah, Stylianos I. Venieris, Hyeji Kim, Hongkai Wen, Nicholas D. Lane

Recent works in single-image perceptual super-resolution (SR) have demonstrated unprecedented performance in generating realistic textures by means of deep convolutional networks.

Neural Architecture Search, Super-Resolution

Best of Both Worlds: AutoML Codesign of a CNN and its Hardware Accelerator

no code implementations • 11 Feb 2020 • Mohamed S. Abdelfattah, Łukasz Dudziak, Thomas Chau, Royson Lee, Hyeji Kim, Nicholas D. Lane

We automate HW-CNN codesign using NAS by including parameters from both the CNN model and the HW accelerator, and we jointly search for the best model-accelerator pair that boosts accuracy and efficiency.
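As a sketch of what a joint search space looks like, the following toy random search samples CNN and accelerator parameters together and scores each pair with a latency-penalized accuracy proxy. Every name, value, and proxy function here is hypothetical; the paper runs NAS with real accuracy and hardware measurements:

```python
import random

random.seed(0)

# Toy joint search space: model and accelerator knobs searched together.
cnn_space = {"width_mult": [0.5, 0.75, 1.0], "depth": [8, 12, 16]}
hw_space = {"pe_array": [16, 32, 64], "buffer_kb": [128, 256, 512]}

def accuracy_proxy(c):
    # Stand-in for a trained accuracy predictor: bigger nets score higher.
    return 0.6 + 0.2 * c["width_mult"] + 0.01 * c["depth"]

def latency_proxy(c, h):
    # Stand-in for a hardware model: more compute, fewer processing
    # elements, or a smaller buffer all raise latency.
    work = c["width_mult"] ** 2 * c["depth"]
    return work / h["pe_array"] * (256 / h["buffer_kb"])

def reward(c, h, target=0.25):
    # Accuracy, penalized once modeled latency exceeds the target.
    return accuracy_proxy(c) - max(0.0, latency_proxy(c, h) - target)

samples = []
for _ in range(200):
    c = {k: random.choice(v) for k, v in cnn_space.items()}
    h = {k: random.choice(v) for k, v in hw_space.items()}
    samples.append((reward(c, h), c, h))
best_r, best_c, best_h = max(samples, key=lambda t: t[0])
```

Searching the two spaces jointly lets the optimizer trade a slightly smaller network for a hardware configuration that meets the latency target, which separate searches would miss.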

General Classification, Image Classification +2

Turbo Autoencoder: Deep learning based channel codes for point-to-point communication channels

1 code implementation • NeurIPS 2019 • Yihan Jiang, Hyeji Kim, Himanshu Asnani, Sreeram Kannan, Sewoong Oh, Pramod Viswanath

Designing codes that combat the noise in a communication medium has remained a significant area of research in information theory as well as wireless communications.

DeepTurbo: Deep Turbo Decoder

1 code implementation • 6 Mar 2019 • Yihan Jiang, Hyeji Kim, Himanshu Asnani, Sreeram Kannan, Sewoong Oh, Pramod Viswanath

We focus on Turbo codes and propose DeepTurbo, a novel deep learning based architecture for Turbo decoding.

LEARN Codes: Inventing Low-latency Codes via Recurrent Neural Networks

1 code implementation • 30 Nov 2018 • Yihan Jiang, Hyeji Kim, Himanshu Asnani, Sreeram Kannan, Sewoong Oh, Pramod Viswanath

Designing channel codes under low-latency constraints is one of the most demanding requirements in 5G standards.

Efficient Neural Network Compression

1 code implementation • CVPR 2019 • Hyeji Kim, Muhammad Umar Karim Khan, Chong-Min Kyung

The favorable accuracy-complexity trade-off, together with the extremely fast speed of our method, makes it well suited to neural network compression.

Neural Network Compression

Deepcode: Feedback Codes via Deep Learning

1 code implementation • NeurIPS 2018 • Hyeji Kim, Yihan Jiang, Sreeram Kannan, Sewoong Oh, Pramod Viswanath

The design of codes for communicating reliably over a statistically well defined channel is an important endeavor involving deep mathematical research and wide-ranging practical applications.

Automatic Rank Selection for High-Speed Convolutional Neural Network

no code implementations • 28 Jun 2018 • Hyeji Kim, Chong-Min Kyung

In this paper, we define rank selection as a combinatorial optimization problem and propose a methodology to minimize network complexity while maintaining the desired accuracy.
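Rank selection here refers to choosing the ranks of low-rank factorizations of each layer. A minimal single-layer sketch, using truncated SVD with a greedy error budget as a stand-in for the paper's combinatorial formulation (all shapes and thresholds illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

# A hypothetical fully connected layer weight that is exactly low rank.
W = rng.normal(size=(64, 8)) @ rng.normal(size=(8, 128))

def truncate(W, rank):
    """Rank-r factorization W ~= A @ B via truncated SVD."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    A = U[:, :rank] * s[:rank]
    B = Vt[:rank]
    return A, B

def params(W, rank):
    """Parameter count of the factored layer (A plus B)."""
    m, n = W.shape
    return rank * (m + n)

# Greedily pick the smallest rank whose relative reconstruction error
# stays under a budget; the paper instead optimizes ranks jointly
# across all layers under a complexity constraint.
for r in range(1, min(W.shape) + 1):
    A, B = truncate(W, r)
    err = np.linalg.norm(W - A @ B) / np.linalg.norm(W)
    if err < 1e-6:
        break
```

Replacing a dense multiply `W @ x` with `A @ (B @ x)` cuts both parameters and multiply-adds from `m*n` to `r*(m+n)` whenever the chosen rank `r` is small enough.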

Combinatorial Optimization

Communication Algorithms via Deep Learning

3 code implementations • ICLR 2018 • Hyeji Kim, Yihan Jiang, Ranvir Rana, Sreeram Kannan, Sewoong Oh, Pramod Viswanath

We show that creatively designed and trained RNN architectures can decode well-known sequential codes, such as convolutional and turbo codes, with close to optimal performance on the additive white Gaussian noise (AWGN) channel, performance that is itself achieved by breakthrough algorithms of our times: the Viterbi and BCJR decoders, which represent the dynamic-programming and forward-backward algorithms, respectively.
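For reference, the classical baseline the RNN decoders are matched against is dynamic programming over the code trellis. A minimal hard-decision Viterbi decoder for a toy rate-1/2 convolutional code (generators 7 and 5 in octal) sketches the idea; the paper's setting uses soft AWGN channel outputs rather than hard bits:

```python
# Rate-1/2 convolutional code, constraint length 3, generators (7, 5) octal.
G = [0b111, 0b101]

def conv_encode(bits):
    """Shift-register encoder: each input bit yields two coded bits."""
    state, out = 0, []
    for b in bits:
        reg = (b << 2) | state              # new bit enters at the MSB
        out += [bin(reg & g).count("1") % 2 for g in G]
        state = reg >> 1
    return out

def viterbi_decode(rx, n_bits):
    """Hard-decision Viterbi decoding: dynamic programming over the
    4-state trellis, keeping the best path into each state."""
    INF = float("inf")
    metric = [0, INF, INF, INF]             # encoder starts in state 0
    paths = [[], [], [], []]
    for t in range(n_bits):
        r = rx[2 * t:2 * t + 2]
        new_metric = [INF] * 4
        new_paths = [None] * 4
        for s in range(4):
            if metric[s] == INF:
                continue
            for b in (0, 1):                # extend path with input bit b
                reg = (b << 2) | s
                sym = [bin(reg & g).count("1") % 2 for g in G]
                dist = (sym[0] != r[0]) + (sym[1] != r[1])
                ns = reg >> 1
                if metric[s] + dist < new_metric[ns]:
                    new_metric[ns] = metric[s] + dist
                    new_paths[ns] = paths[s] + [b]
        metric, paths = new_metric, new_paths
    return paths[min(range(4), key=lambda s: metric[s])]
```

Encoding `[1, 0, 1, 1, 0]` and flipping a single received bit still decodes back to the original message, since this code's free distance is 5.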
