Search Results for author: Jaeho Lee

Found 31 papers, 12 papers with code

Neural Image Compression with Text-guided Encoding for both Pixel-level and Perceptual Fidelity

no code implementations • 5 Mar 2024 • Hagyeong Lee, Minkyu Kim, Jun-Hyuk Kim, Seungeon Kim, Dokwan Oh, Jaeho Lee

Recent advances in text-guided image compression have shown great potential to enhance the perceptual quality of reconstructed images.

Image Compression

Attention-aware Semantic Communications for Collaborative Inference

no code implementations • 23 Feb 2024 • Jiwoong Im, Nayoung Kwon, Taewoo Park, Jiheon Woo, Jaeho Lee, Yongjune Kim

In our framework, the lightweight ViT model on the edge device acts as a semantic encoder, efficiently identifying and selecting the crucial image information required for the classification task.

Collaborative Inference
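
A minimal sketch of the split-inference pattern this entry describes, under illustrative assumptions: the edge-side ViT scores patches by the attention they receive and transmits only the top-k patch embeddings to the server-side classifier. The scoring rule (mean attention received) and the fixed budget k are placeholders, not the paper's exact procedure.

```python
# Sketch of attention-guided patch selection for collaborative inference.
# Assumptions: scoring by mean attention received, a fixed top-k budget.
import torch

def select_patches(patch_tokens, attn, k):
    """Keep the k patch tokens that receive the most attention.

    patch_tokens: (B, N, D) patch embeddings from the edge-side encoder.
    attn:         (B, H, N, N) self-attention weights from its last block.
    """
    scores = attn.mean(dim=1).mean(dim=1)            # (B, N): attention received per patch
    idx = scores.topk(k, dim=1).indices              # (B, k)
    idx = idx.unsqueeze(-1).expand(-1, -1, patch_tokens.size(-1))
    return patch_tokens.gather(1, idx)               # (B, k, D): what gets transmitted

# Random tensors standing in for a ViT's outputs.
tokens, attn = torch.randn(2, 196, 384), torch.rand(2, 6, 196, 196)
print(select_patches(tokens, attn, k=49).shape)      # torch.Size([2, 49, 384])
```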

Hybrid Neural Representations for Spherical Data

no code implementations • 5 Feb 2024 • Hyomin Kim, Yunhui Jang, Jaeho Lee, Sungsoo Ahn

In this paper, we study hybrid neural representations for spherical data, a domain of increasing relevance in scientific research.

Super-Resolution

In Search of a Data Transformation That Accelerates Neural Field Training

1 code implementation • 28 Nov 2023 • Junwon Seo, Sangyoon Lee, Kwang In Kim, Jaeho Lee

A neural field is an emerging paradigm in data representation in which a neural network is trained to approximate a given signal.

Data-driven System Interconnections and a Novel Data-enabled Internal Model Control

no code implementations • 21 Nov 2023 • Yasaman Pedari, Jaeho Lee, Yongsoon Eun, Hamid Ossareh

Over the past two decades, there has been a growing interest in control systems research to transition from model-based methods to data-driven approaches.

LEMMA

Communication-Efficient Split Learning via Adaptive Feature-Wise Compression

no code implementations • 20 Jul 2023 • Yongjeong Oh, Jaeho Lee, Christopher G. Brinton, Yo-Seb Jeon

In the second strategy, the non-dropped intermediate feature and gradient vectors are quantized using adaptive quantization levels determined based on the ranges of the vectors.

Quantization
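
A minimal sketch of the range-adaptive quantization idea in this entry: each feature or gradient vector is quantized with uniform levels spanning its own min-max range. The 4-bit setting and the min-max rule are illustrative assumptions, not the paper's exact scheme.

```python
# Sketch: uniform quantization whose levels adapt to each vector's own range.
import numpy as np

def quantize_adaptive(x, n_bits=4):
    """Quantize a vector with 2**n_bits uniform levels spanning [min(x), max(x)]."""
    lo, hi = x.min(), x.max()
    if hi == lo:                         # constant vector: nothing to quantize
        return x.copy()
    step = (hi - lo) / (2 ** n_bits - 1)
    codes = np.round((x - lo) / step)    # integer codes in [0, 2**n_bits - 1]
    return lo + codes * step             # de-quantized reconstruction

feat = np.random.default_rng(0).normal(size=1024).astype(np.float32)
rec = quantize_adaptive(feat, n_bits=4)
print(f"max abs error: {np.abs(feat - rec).max():.4f}")
```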

Debiased Distillation by Transplanting the Last Layer

no code implementations • 22 Feb 2023 • Jiwoon Lee, Jaeho Lee

Deep models are susceptible to learning spurious correlations, even during post-processing.

Attribute • Knowledge Distillation +1
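
Reading the title literally, "transplanting the last layer" amounts to swapping the student's classification head for one taken from a less biased reference model while keeping the student's feature extractor. The sketch below shows only that parameter copy; which model supplies the head, and whether anything is fine-tuned afterwards, are assumptions rather than details given in this snippet.

```python
# Sketch: replace the student's final classification layer with the head of a
# reference model (assumed here to be a debiased one), keeping its backbone.
import torch
import torchvision

student = torchvision.models.resnet18(num_classes=10)
reference = torchvision.models.resnet18(num_classes=10)   # stand-in debiased model

with torch.no_grad():
    student.fc.weight.copy_(reference.fc.weight)
    student.fc.bias.copy_(reference.fc.bias)
```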

MaskedKD: Efficient Distillation of Vision Transformers with Masked Images

no code implementations • 21 Feb 2023 • Seungwoo Son, Namhoon Lee, Jaeho Lee

We present MaskedKD, a simple yet effective strategy that can significantly reduce the cost of distilling ViTs without sacrificing the prediction accuracy of the student model.

Knowledge Distillation
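
A minimal sketch of distilling against a teacher that only sees part of the image, which is how the snippet describes the cost reduction. Random patch dropping and pixel zeroing are placeholders: zeroing pixels does not by itself reduce compute, and the paper's actual patch-selection rule may differ.

```python
# Sketch: knowledge distillation where the teacher sees a patch-masked image.
# Assumptions: random patch selection, zero-masking (a real implementation
# would drop the masked tokens inside the ViT to actually save compute).
import torch
import torch.nn.functional as F

def kd_loss_masked_teacher(student, teacher, images, patch=16, keep_ratio=0.5, T=1.0):
    B, _, H, W = images.shape
    gh, gw = H // patch, W // patch
    grid = (torch.rand(B, 1, gh, gw, device=images.device) < keep_ratio).float()
    pixel_mask = F.interpolate(grid, size=(H, W), mode="nearest")

    with torch.no_grad():
        t_logits = teacher(images * pixel_mask)        # teacher pass on the masked image
    s_logits = student(images)                         # student sees the full image
    return F.kl_div(F.log_softmax(s_logits / T, dim=1),
                    F.softmax(t_logits / T, dim=1),
                    reduction="batchmean") * T * T

# Usage with any pair of image classifiers:
# loss = kd_loss_masked_teacher(student_vit, teacher_vit, images)
```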

Discovering and Mitigating Visual Biases through Keyword Explanation

1 code implementation • 26 Jan 2023 • Younghyun Kim, Sangwoo Mo, Minkyu Kim, Kyungmin Lee, Jaeho Lee, Jinwoo Shin

The keyword-explanation form of visual bias offers several advantages, such as clear group naming for bias discovery and a natural extension to debiasing using these group names.

Image Classification • Image Generation

Modality-Agnostic Variational Compression of Implicit Neural Representations

no code implementations • 23 Jan 2023 • Jonathan Richard Schwarz, Jihoon Tack, Yee Whye Teh, Jaeho Lee, Jinwoo Shin

We introduce a modality-agnostic neural compression algorithm based on a functional view of data and parameterised as an Implicit Neural Representation (INR).

Data Compression

Breaking the Spurious Causality of Conditional Generation via Fairness Intervention with Corrective Sampling

no code implementations • 5 Dec 2022 • Junhyun Nam, Sangwoo Mo, Jaeho Lee, Jinwoo Shin

(a) Fairness Intervention (FI): emphasize the minority samples that are hard to generate due to the spurious correlation in the training dataset.

Attribute • Fairness

Data-Driven Inverse of Linear Systems and Application to Disturbance Observers

no code implementations • 14 Nov 2022 • Yongsoon Eun, Jaeho Lee, Hyungbo Shim

Specifically, the problem addressed here is to find an input sequence from the corresponding output sequence based on pre-collected input and output data.

Scalable Neural Video Representations with Learnable Positional Features

1 code implementation • 13 Oct 2022 • Subin Kim, Sihyun Yu, Jaeho Lee, Jinwoo Shin

Succinct representation of complex signals using coordinate-based neural representations (CNRs) has seen great progress, and several recent efforts focus on extending them for handling videos.

Video Compression • Video Frame Interpolation +2

Meta-Learning with Self-Improving Momentum Target

1 code implementation • 11 Oct 2022 • Jihoon Tack, Jongjin Park, Hankook Lee, Jaeho Lee, Jinwoo Shin

The idea of using a separately trained target model (or teacher) to improve the performance of the student model has become increasingly popular in various machine learning domains, and meta-learning is no exception; a recent discovery shows that utilizing task-wise target models can significantly boost generalization performance.

Knowledge Distillation • Meta-Learning +1
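
The phrase "momentum target" suggests a teacher maintained as an exponential moving average of the student's own parameters; the sketch below shows only that EMA update. That the target is an EMA copy, and the momentum value used, are assumptions inferred from the title rather than from this snippet.

```python
# Sketch: maintain a target (teacher) model as an EMA of the student.
import copy
import torch

@torch.no_grad()
def update_momentum_target(student, target, momentum=0.99):
    for p_s, p_t in zip(student.parameters(), target.parameters()):
        p_t.mul_(momentum).add_(p_s, alpha=1.0 - momentum)

student = torch.nn.Linear(8, 2)
target = copy.deepcopy(student)        # initialize the target as a copy
# ... after each update of the student:
update_momentum_target(student, target)
```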

Few-Shot Unlearning by Model Inversion

no code implementations • 31 May 2022 • Youngsik Yoon, Jinhwan Nam, Hyojeong Yun, Jaeho Lee, Dongwoo Kim, Jungseul Ok

We consider a practical scenario of machine unlearning: erasing a target dataset that causes unexpected behavior in the trained model.

Machine Unlearning

Spread Spurious Attribute: Improving Worst-group Accuracy with Spurious Attribute Estimation

no code implementations • ICLR 2022 • Junhyun Nam, Jaehyung Kim, Jaeho Lee, Jinwoo Shin

The paradigm of worst-group loss minimization has shown promise in avoiding learning spurious correlations, but requires costly additional supervision on spurious attributes.

Attribute

Zero-shot Blind Image Denoising via Implicit Neural Representations

no code implementations • 5 Apr 2022 • Chaewon Kim, Jaeho Lee, Jinwoo Shin

Recent denoising algorithms based on the "blind-spot" strategy show impressive blind image denoising performances, without utilizing any external dataset.

Image Denoising • Inductive Bias

Meta-Learning Sparse Implicit Neural Representations

1 code implementation • NeurIPS 2021 • Jaeho Lee, Jihoon Tack, Namhoon Lee, Jinwoo Shin

Implicit neural representations are a promising new avenue of representing general signals by learning a continuous function that, parameterized as a neural network, maps the domain of a signal to its codomain; the mapping from spatial coordinates of an image to its pixel values, for example.

Meta-Learning
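
A minimal example of the view described above: an MLP fit to map normalized pixel coordinates to RGB values. The architecture and training loop are illustrative choices; the paper's focus on meta-learning and sparsity is not shown here.

```python
# Sketch of an implicit neural representation: an MLP mapping (x, y) in [0,1]^2
# to RGB, fit by gradient descent to a single image (random data used here).
import torch
import torch.nn as nn

class INR(nn.Module):
    def __init__(self, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3),
        )

    def forward(self, coords):           # coords: (N, 2)
        return self.net(coords)          # (N, 3) predicted RGB

H = W = 32
ys, xs = torch.meshgrid(torch.linspace(0, 1, H), torch.linspace(0, 1, W), indexing="ij")
coords = torch.stack([xs, ys], dim=-1).reshape(-1, 2)
pixels = torch.rand(H * W, 3)            # stand-in for a real image's pixels

model = INR()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(200):
    loss = ((model(coords) - pixels) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()
print(f"final MSE: {loss.item():.4f}")
```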

Co$^2$L: Contrastive Continual Learning

2 code implementations • 28 Jun 2021 • Hyuntak Cha, Jaeho Lee, Jinwoo Shin

Recent breakthroughs in self-supervised learning show that such algorithms learn visual representations that transfer better to unseen tasks than those learned by joint-training methods relying on task-specific supervision.

Continual Learning • Contrastive Learning +2

Co2L: Contrastive Continual Learning

1 code implementation • ICCV 2021 • Hyuntak Cha, Jaeho Lee, Jinwoo Shin

Recent breakthroughs in self-supervised learning show that such algorithms learn visual representations that transfer better to unseen tasks than those learned by cross-entropy based methods, which rely on task-specific supervision.

Continual Learning • Contrastive Learning +2

MASKER: Masked Keyword Regularization for Reliable Text Classification

1 code implementation • 17 Dec 2020 • Seung Jun Moon, Sangwoo Mo, Kimin Lee, Jaeho Lee, Jinwoo Shin

We claim that one central obstacle to reliability is the model's over-reliance on a limited number of keywords, instead of looking at the whole context.

Domain Generalization • General Classification +6

Learning from Failure: De-biasing Classifier from Biased Classifier

no code implementations • NeurIPS 2020 • Junhyun Nam, Hyuntak Cha, Sung-Soo Ahn, Jaeho Lee, Jinwoo Shin

Neural networks often learn to make predictions that overly rely on spurious correlation existing in the dataset, which causes the model to be biased.

Provable Memorization via Deep Neural Networks using Sub-linear Parameters

no code implementations • 26 Oct 2020 • Sejun Park, Jaeho Lee, Chulhee Yun, Jinwoo Shin

It is known that $O(N)$ parameters are sufficient for neural networks to memorize arbitrary $N$ input-label pairs.

Memorization

Layer-adaptive sparsity for the Magnitude-based Pruning

1 code implementation • ICLR 2021 • Jaeho Lee, Sejun Park, Sangwoo Mo, Sungsoo Ahn, Jinwoo Shin

Recent discoveries on neural network pruning reveal that, with a carefully chosen layerwise sparsity, a simple magnitude-based pruning achieves state-of-the-art tradeoff between sparsity and performance.

Image Classification • Network Pruning
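
A minimal sketch of what magnitude-based pruning at a given layerwise sparsity amounts to: within each layer, zero the weights with the smallest absolute values until the target sparsity is reached. How the per-layer sparsities themselves are chosen, which is the question the paper addresses, is not shown.

```python
# Sketch: layerwise magnitude pruning at a prescribed per-layer sparsity.
import torch

@torch.no_grad()
def prune_layer(weight, sparsity):
    """Zero out the `sparsity` fraction of entries with smallest |value|."""
    k = int(weight.numel() * sparsity)
    if k == 0:
        return weight
    threshold = weight.abs().flatten().kthvalue(k).values
    weight[weight.abs() <= threshold] = 0.0
    return weight

layer = torch.nn.Linear(128, 64)
prune_layer(layer.weight, sparsity=0.9)
print((layer.weight == 0).float().mean())   # ~0.9
```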

Learning from Failure: Training Debiased Classifier from Biased Classifier

2 code implementations • 6 Jul 2020 • Junhyun Nam, Hyuntak Cha, Sungsoo Ahn, Jaeho Lee, Jinwoo Shin

Neural networks often learn to make predictions that overly rely on spurious correlation existing in the dataset, which causes the model to be biased.

Action Recognition • Facial Attribute Classification +1
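
A minimal sketch of the "learning from failure" recipe as it is commonly formulated: a deliberately biased auxiliary classifier is trained with a loss that amplifies reliance on easy cues (generalized cross-entropy), and the debiased classifier upweights the samples that the biased one still gets wrong. The specific GCE form and relative-difficulty weight below follow that common formulation and should be read as an assumption about this paper's exact recipe.

```python
# Sketch: loss pieces for training a debiased classifier from a biased one.
# Assumptions: GCE for the biased model, relative-difficulty reweighting.
import torch
import torch.nn.functional as F

def gce_loss(logits, targets, q=0.7):
    """Generalized cross-entropy: emphasizes samples the model already fits well."""
    p = F.softmax(logits, dim=1).gather(1, targets[:, None]).squeeze(1)
    return ((1.0 - p.clamp_min(1e-8) ** q) / q).mean()

def reweighted_ce(logits_debiased, logits_biased, targets):
    """Upweight samples on which the biased model incurs a large loss."""
    ce_b = F.cross_entropy(logits_biased, targets, reduction="none").detach()
    ce_d = F.cross_entropy(logits_debiased, targets, reduction="none")
    w = ce_b / (ce_b + ce_d.detach() + 1e-8)     # relative difficulty in [0, 1]
    return (w * ce_d).mean()

logits_b, logits_d = torch.randn(4, 3), torch.randn(4, 3)
y = torch.tensor([0, 1, 2, 0])
print(gce_loss(logits_b, y).item(), reweighted_ce(logits_d, logits_b, y).item())
```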

Minimum Width for Universal Approximation

no code implementations • ICLR 2021 • Sejun Park, Chulhee Yun, Jaeho Lee, Jinwoo Shin

In this work, we provide the first definitive result in this direction for networks using the ReLU activation function: the minimum width required for the universal approximation of the $L^p$ functions is exactly $\max\{d_x+1, d_y\}$.
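
For a concrete reading of that bound:

```latex
% Worked instances of w_{\min} = \max\{d_x + 1,\ d_y\} for ReLU networks:
% scalar-valued functions on R^3 need width 4; maps from R^2 to R^5 need width 5.
w_{\min}(d_x{=}3,\ d_y{=}1) = \max\{3+1,\ 1\} = 4, \qquad
w_{\min}(d_x{=}2,\ d_y{=}5) = \max\{2+1,\ 5\} = 5.
```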

Learning Bounds for Risk-sensitive Learning

1 code implementation • NeurIPS 2020 • Jaeho Lee, Sejun Park, Jinwoo Shin

The second result, based on a novel variance-based characterization of OCE, gives an expected loss guarantee with a suppressed dependence on the smoothness of the selected OCE.
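
For reference, the optimized certainty equivalent (OCE) risk that the snippet refers to is usually written, for a loss Z and a convex disutility φ, as follows (this is the standard Ben-Tal–Teboulle form, not a quotation from the paper):

```latex
% OCE of a loss Z; \phi(t)=t recovers the mean, \phi(t)=(t)_+/(1-\alpha) recovers CVaR_\alpha.
\mathrm{OCE}_\phi(Z) \;=\; \inf_{\lambda \in \mathbb{R}} \Big\{ \lambda + \mathbb{E}\big[\phi(Z - \lambda)\big] \Big\}
```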

Lookahead: a Far-Sighted Alternative of Magnitude-based Pruning

1 code implementation • ICLR 2020 • Sejun Park, Jaeho Lee, Sangwoo Mo, Jinwoo Shin

Magnitude-based pruning is one of the simplest methods for pruning neural networks.

Learning finite-dimensional coding schemes with nonlinear reconstruction maps

no code implementations • 23 Dec 2018 • Jaeho Lee, Maxim Raginsky

This paper generalizes the Maurer--Pontil framework of finite-dimensional lossy coding schemes to the setting where a high-dimensional random vector is mapped to an element of a compact set of latent representations in a lower-dimensional Euclidean space, and the reconstruction map belongs to a given class of nonlinear maps.

Generalization Bounds • Representation Learning

Minimax Statistical Learning with Wasserstein Distances

no code implementations • NeurIPS 2018 • Jaeho Lee, Maxim Raginsky

As opposed to standard empirical risk minimization (ERM), distributionally robust optimization aims to minimize the worst-case risk over a larger ambiguity set containing the original empirical distribution of the training data.

Domain Adaptation • Generalization Bounds
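
The worst-case objective described above can be written, for a Wasserstein ball of radius ρ around the empirical distribution P_n (ERM is recovered at ρ = 0):

```latex
% Distributionally robust risk over a Wasserstein ambiguity set of radius \rho.
\min_{f \in \mathcal{F}} \;\; \sup_{Q \,:\, W_p(Q,\, P_n) \le \rho} \; \mathbb{E}_{Z \sim Q}\big[\ell(f, Z)\big]
```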
