no code implementations • 2 Apr 2024 • Hyunjong Ok, Taeho Kil, Sukmin Seo, Jaeho Lee
Our approach demonstrates competitive performance on the NER benchmark and surpasses existing methods on both MNER and GMNER benchmarks.
Tasks: Multi-modal Named Entity Recognition, Named Entity Recognition
no code implementations • 5 Mar 2024 • Hagyeong Lee, Minkyu Kim, Jun-Hyuk Kim, Seungeon Kim, Dokwan Oh, Jaeho Lee
Recent advances in text-guided image compression have shown great potential to enhance the perceptual quality of reconstructed images.
no code implementations • 23 Feb 2024 • Jiwoong Im, Nayoung Kwon, Taewoo Park, Jiheon Woo, Jaeho Lee, Yongjune Kim
In our framework, the lightweight ViT model on the edge device acts as a semantic encoder, efficiently identifying and selecting the crucial image information required for the classification task.
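A minimal sketch of the "identify and select" step described above, where patch importance is scored by token norm as a purely illustrative proxy; the actual selection criterion and architecture belong to the paper.

```python
import torch

def select_top_patches(patch_tokens, scores, keep_ratio=0.25):
    """Keep only the highest-scoring fraction of patch tokens for transmission (illustrative)."""
    num_keep = max(1, int(keep_ratio * patch_tokens.size(1)))
    top_idx = scores.topk(num_keep, dim=1).indices             # (B, num_keep)
    batch_idx = torch.arange(patch_tokens.size(0)).unsqueeze(1)
    return patch_tokens[batch_idx, top_idx]                    # (B, num_keep, D)

# Toy example: score each patch token by its feature norm and keep the top 25%.
tokens = torch.randn(2, 196, 384)       # (batch, patches, embed dim) from a small ViT
scores = tokens.norm(dim=-1)            # illustrative importance score per patch
selected = select_top_patches(tokens, scores)   # only these would be sent onward
```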
no code implementations • 5 Feb 2024 • Hyomin Kim, Yunhui Jang, Jaeho Lee, Sungsoo Ahn
In this paper, we study hybrid neural representations for spherical data, a domain of increasing relevance in scientific research.
1 code implementation • 28 Nov 2023 • Junwon Seo, Sangyoon Lee, Kwang In Kim, Jaeho Lee
The neural field is an emerging paradigm in data representation that trains a neural network to approximate a given signal.
no code implementations • 21 Nov 2023 • Yasaman Pedari, Jaeho Lee, Yongsoon Eun, Hamid Ossareh
Over the past two decades, there has been growing interest in the control systems research community in moving from model-based methods to data-driven approaches.
no code implementations • 20 Jul 2023 • Yongjeong Oh, Jaeho Lee, Christopher G. Brinton, Yo-Seb Jeon
In the second strategy, the non-dropped intermediate feature and gradient vectors are quantized using adaptive quantization levels determined by the ranges of the vectors.
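A minimal sketch of range-based adaptive quantization for such a vector; the uniform levels and the function below are illustrative assumptions rather than the paper's exact scheme.

```python
import numpy as np

def quantize_by_range(vec, num_levels):
    """Uniformly quantize a vector with levels spanning its own range (illustrative)."""
    lo, hi = vec.min(), vec.max()
    if hi == lo:                              # constant vector: nothing to quantize
        return vec.copy()
    step = (hi - lo) / (num_levels - 1)
    indices = np.round((vec - lo) / step)     # index of the nearest quantization level
    return lo + indices * step

# Example: a wide-range feature vector and a narrow-range gradient vector get
# quantization grids adapted to their own ranges.
feature = np.random.randn(512) * 5.0
gradient = np.random.randn(512) * 0.1
feature_q = quantize_by_range(feature, num_levels=16)
gradient_q = quantize_by_range(gradient, num_levels=4)
```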
no code implementations • 22 Feb 2023 • Jiwoon Lee, Jaeho Lee
Deep models are susceptible to learning spurious correlations, even during post-processing.
no code implementations • 21 Feb 2023 • Seungwoo Son, Namhoon Lee, Jaeho Lee
We present MaskedKD, a simple yet effective strategy that can significantly reduce the cost of distilling ViTs without sacrificing the prediction accuracy of the student model.
1 code implementation • 26 Jan 2023 • Younghyun Kim, Sangwoo Mo, Minkyu Kim, Kyungmin Lee, Jaeho Lee, Jinwoo Shin
The keyword-explanation form of visual bias offers several advantages, such as clear group naming for bias discovery and a natural extension to debiasing using these group names.
no code implementations • 23 Jan 2023 • Jonathan Richard Schwarz, Jihoon Tack, Yee Whye Teh, Jaeho Lee, Jinwoo Shin
We introduce a modality-agnostic neural compression algorithm based on a functional view of data and parameterised as an Implicit Neural Representation (INR).
no code implementations • 5 Dec 2022 • Junhyun Nam, Sangwoo Mo, Jaeho Lee, Jinwoo Shin
(a) Fairness Intervention (FI): emphasize the minority samples that are hard to generate due to the spurious correlation in the training dataset.
no code implementations • 14 Nov 2022 • Yongsoon Eun, Jaeho Lee, Hyungbo Shim
Specifically, the problem addressed here is to recover an input sequence from the corresponding output sequence using pre-collected input and output data.
1 code implementation • 13 Oct 2022 • Subin Kim, Sihyun Yu, Jaeho Lee, Jinwoo Shin
Succinct representation of complex signals using coordinate-based neural representations (CNRs) has seen great progress, and several recent efforts focus on extending them for handling videos.
1 code implementation • 11 Oct 2022 • Jihoon Tack, Jongjin Park, Hankook Lee, Jaeho Lee, Jinwoo Shin
The idea of using a separately trained target model (or teacher) to improve the performance of the student model has been increasingly popular in various machine learning domains, and meta-learning is no exception; a recent discovery shows that utilizing task-wise target models can significantly boost the generalization performance.
no code implementations • 31 May 2022 • Youngsik Yoon, Jinhwan Nam, Hyojeong Yun, Jaeho Lee, Dongwoo Kim, Jungseul Ok
We consider a practical machine unlearning scenario: erasing a target dataset that causes unexpected behavior in the trained model.
no code implementations • ICLR 2022 • Junhyun Nam, Jaehyung Kim, Jaeho Lee, Jinwoo Shin
The paradigm of worst-group loss minimization has shown promise in avoiding learning spurious correlations, but it requires costly additional supervision on spurious attributes.
no code implementations • 5 Apr 2022 • Chaewon Kim, Jaeho Lee, Jinwoo Shin
Recent denoising algorithms based on the "blind-spot" strategy show impressive blind image denoising performance without using any external dataset.
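A minimal sketch of one common realization of the "blind-spot" idea, in the masked-pixel style of Noise2Void; this is an illustrative variant, not necessarily the formulation used in the paper.

```python
import torch

def blind_spot_loss(denoiser, noisy, num_masked=64):
    """Hide random pixels from the network and train it to predict their noisy values."""
    b, c, h, w = noisy.shape
    masked = noisy.clone()
    ys = torch.randint(0, h, (num_masked,))
    xs = torch.randint(0, w, (num_masked,))
    # Replace each masked pixel with a shifted neighbor so its own value is never seen.
    masked[:, :, ys, xs] = noisy[:, :, (ys + 1) % h, (xs + 1) % w]
    pred = denoiser(masked)
    # Supervise only at the masked locations, using the original noisy values as targets.
    return ((pred[:, :, ys, xs] - noisy[:, :, ys, xs]) ** 2).mean()
```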
1 code implementation • NeurIPS 2021 • Jaeho Lee, Jihoon Tack, Namhoon Lee, Jinwoo Shin
Implicit neural representations are a promising new avenue for representing general signals by learning a continuous function that, parameterized as a neural network, maps the domain of a signal to its codomain: for example, the mapping from the spatial coordinates of an image to its pixel values.
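A minimal sketch of this coordinate-to-value mapping for a single image, assuming a plain ReLU MLP; the layer sizes, activation, and training loop are illustrative choices rather than the paper's exact setup.

```python
import torch
import torch.nn as nn

class CoordinateMLP(nn.Module):
    """Maps 2D pixel coordinates (x, y) to RGB values; one network represents one image."""
    def __init__(self, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3),
        )

    def forward(self, coords):          # coords: (N, 2) in [-1, 1]
        return self.net(coords)         # (N, 3) predicted pixel values

# Fitting sketch: regress the network output onto the ground-truth pixel values.
model = CoordinateMLP()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
coords = torch.rand(1024, 2) * 2 - 1    # sampled pixel coordinates
pixels = torch.rand(1024, 3)            # their ground-truth colors
for _ in range(100):
    optimizer.zero_grad()
    loss = ((model(coords) - pixels) ** 2).mean()
    loss.backward()
    optimizer.step()
```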
2 code implementations • 28 Jun 2021 • Hyuntak Cha, Jaeho Lee, Jinwoo Shin
Recent breakthroughs in self-supervised learning show that such algorithms learn visual representations that can be transferred better to unseen tasks than joint-training methods relying on task-specific supervision.
1 code implementation • ICCV 2021 • Hyuntak Cha, Jaeho Lee, Jinwoo Shin
Recent breakthroughs in self-supervised learning show that such algorithms learn visual representations that can be transferred better to unseen tasks than cross-entropy based methods which rely on task-specific supervision.
1 code implementation • 17 Dec 2020 • Seung Jun Moon, Sangwoo Mo, Kimin Lee, Jaeho Lee, Jinwoo Shin
We claim that one central obstacle to reliability is the model's over-reliance on a limited number of keywords rather than the whole context.
no code implementations • NeurIPS 2020 • Junhyun Nam, Hyuntak Cha, Sung-Soo Ahn, Jaeho Lee, Jinwoo Shin
Neural networks often learn to make predictions that overly rely on spurious correlation existing in the dataset, which causes the model to be biased.
no code implementations • 26 Oct 2020 • Sejun Park, Jaeho Lee, Chulhee Yun, Jinwoo Shin
It is known that $O(N)$ parameters are sufficient for neural networks to memorize arbitrary $N$ input-label pairs.
1 code implementation • ICLR 2021 • Jaeho Lee, Sejun Park, Sangwoo Mo, Sungsoo Ahn, Jinwoo Shin
Recent discoveries on neural network pruning reveal that, with a carefully chosen layerwise sparsity, simple magnitude-based pruning achieves a state-of-the-art tradeoff between sparsity and performance.
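A minimal sketch of magnitude-based pruning under a per-layer sparsity budget; the toy model and hand-picked layerwise sparsities below are illustrative, since choosing those sparsities well is precisely what the paper studies.

```python
import torch

def magnitude_prune(weight, sparsity):
    """Zero out the smallest-magnitude fraction `sparsity` of entries in a weight tensor."""
    k = int(sparsity * weight.numel())
    if k == 0:
        return torch.ones_like(weight)
    threshold = weight.abs().flatten().kthvalue(k).values
    return (weight.abs() > threshold).float()    # binary mask: 1 = keep, 0 = prune

# Apply a hand-picked (illustrative) sparsity to each linear layer of a toy model.
model = torch.nn.Sequential(torch.nn.Linear(784, 300), torch.nn.ReLU(), torch.nn.Linear(300, 10))
layerwise_sparsity = [0.9, 0.5]                  # more aggressive on the larger layer
linears = [m for m in model if isinstance(m, torch.nn.Linear)]
for layer, s in zip(linears, layerwise_sparsity):
    mask = magnitude_prune(layer.weight.data, s)
    layer.weight.data *= mask
```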
2 code implementations • 6 Jul 2020 • Junhyun Nam, Hyuntak Cha, Sungsoo Ahn, Jaeho Lee, Jinwoo Shin
Neural networks often learn to make predictions that overly rely on spurious correlation existing in the dataset, which causes the model to be biased.
Ranked #1 on Out-of-Distribution Generalization on ImageNet-W
no code implementations • ICLR 2021 • Sejun Park, Chulhee Yun, Jaeho Lee, Jinwoo Shin
In this work, we provide the first definitive result in this direction for networks using the ReLU activation functions: The minimum width required for the universal approximation of the $L^p$ functions is exactly $\max\{d_x+1, d_y\}$.
1 code implementation • NeurIPS 2020 • Jaeho Lee, Sejun Park, Jinwoo Shin
The second result, based on a novel variance-based characterization of OCE, gives an expected loss guarantee with a suppressed dependence on the smoothness of the selected OCE.
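For context, the optimized certainty equivalent (OCE) of a loss $Z$ with a convex disutility $\phi$ is usually written in the standard Ben-Tal and Teboulle form below, quoted here for orientation rather than from the paper itself:

$$\mathrm{OCE}_\phi(Z) \;=\; \inf_{\lambda \in \mathbb{R}} \Big\{ \lambda + \mathbb{E}\big[\phi(Z - \lambda)\big] \Big\},$$

which recovers, for example, the conditional value-at-risk when $\phi(t) = \max\{t, 0\}/(1-\alpha)$.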
1 code implementation • ICLR 2020 • Sejun Park, Jaeho Lee, Sangwoo Mo, Jinwoo Shin
Magnitude-based pruning is one of the simplest methods for pruning neural networks.
no code implementations • 23 Dec 2018 • Jaeho Lee, Maxim Raginsky
This paper generalizes the Maurer–Pontil framework of finite-dimensional lossy coding schemes to the setting where a high-dimensional random vector is mapped to an element of a compact set of latent representations in a lower-dimensional Euclidean space, and the reconstruction map belongs to a given class of nonlinear maps.
no code implementations • NeurIPS 2018 • Jaeho Lee, Maxim Raginsky
As opposed to standard empirical risk minimization (ERM), distributionally robust optimization aims to minimize the worst-case risk over a larger ambiguity set containing the original empirical distribution of the training data.
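In generic notation (the symbols below are illustrative, not the paper's), the contrast is between minimizing the risk under the empirical distribution $\hat{P}_n$ and minimizing the worst case over an ambiguity set $\mathcal{A}$ containing $\hat{P}_n$:

$$\text{ERM:}\ \min_\theta \; \mathbb{E}_{Z \sim \hat{P}_n}\big[\ell(\theta; Z)\big] \qquad\qquad \text{DRO:}\ \min_\theta \; \sup_{Q \in \mathcal{A}} \; \mathbb{E}_{Z \sim Q}\big[\ell(\theta; Z)\big].$$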