Search Results for author: Seung Hwan Kim

Found 13 papers, 6 papers with code

Enriched CNN-Transformer Feature Aggregation Networks for Super-Resolution

1 code implementation • 15 Mar 2022 • Jinsu Yoo, TaeHoon Kim, Sihaeng Lee, Seung Hwan Kim, Honglak Lee, Tae Hyun Kim

Recent transformer-based super-resolution (SR) methods have achieved promising results against conventional CNN-based methods.

Image Restoration • Super-Resolution

ReConPatch: Contrastive Patch Representation Learning for Industrial Anomaly Detection

1 code implementation • 26 May 2023 • Jeeho Hyun, Sangyun Kim, Giyoung Jeon, Seung Hwan Kim, Kyunghoon Bae, Byung Jun Kang

In this paper, we introduce ReConPatch, which constructs discriminative features for anomaly detection by training a linear modulation of patch features extracted from the pre-trained model.

Anomaly Detection • Contrastive Learning • +2
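The snippet above describes training a linear modulation of patch features from a pre-trained model. A rough sketch of that idea follows; the random arrays standing in for backbone patch features, the projection `W`, the dimensions, and the cosine-similarity comparison are all illustrative assumptions, not the paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for patch features extracted by a pre-trained
# backbone: 8 patches, each with a 64-dim feature (shapes are illustrative).
patch_feats = rng.standard_normal((8, 64))

# "Linear modulation": a single trainable linear map W that projects patch
# features into a space where distances are discriminative for anomalies.
W = rng.standard_normal((64, 16)) * 0.1
z = patch_feats @ W

# L2-normalize, then compare patches by cosine similarity. At test time,
# patches far from a bank of nominal features would be flagged as anomalous.
z = z / np.linalg.norm(z, axis=1, keepdims=True)
sim = z @ z.T

print(sim.shape)  # (8, 8) pairwise patch similarity matrix
```

In the actual method this linear layer would be trained with a contrastive objective so that similar patches cluster together; the sketch only shows the modulation-and-compare structure.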

Large-Scale Bidirectional Training for Zero-Shot Image Captioning

1 code implementation • 13 Nov 2022 • TaeHoon Kim, Mark Marsden, Pyunghwan Ahn, Sangyun Kim, Sihaeng Lee, Alessandra Sala, Seung Hwan Kim

However, we find that large-scale bidirectional training between image and text enables zero-shot image captioning.

Image Captioning • Keyword Extraction

Universal Noise Annotation: Unveiling the Impact of Noisy Annotation on Object Detection

1 code implementation • 21 Dec 2023 • Kwangrok Ryoo, Yeonsik Jo, Seungjun Lee, Mira Kim, Ahra Jo, Seung Hwan Kim, Seungryong Kim, Soonyoung Lee

For the object detection task with noisy labels, it is important to consider not only categorization noise, as in image classification, but also localization noise, missing annotations, and bogus bounding boxes.

Image Classification • Object • +2
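The four annotation-noise types listed in the snippet can be illustrated with a toy corruption routine. The function name, probabilities, and box format below are hypothetical; this is not the paper's benchmark protocol:

```python
import random

def inject_annotation_noise(anns, num_classes, p=0.3, seed=0):
    """Toy sketch of the four noise types: categorization noise (label
    flips), localization noise (box jitter), missing annotations (dropped
    boxes), and bogus bounding boxes (spurious extras). Illustrative only."""
    rng = random.Random(seed)
    noisy = []
    for cls, (x, y, w, h) in anns:
        if rng.random() < p:          # missing annotation: drop the box
            continue
        if rng.random() < p:          # categorization noise: flip the label
            cls = rng.randrange(num_classes)
        if rng.random() < p:          # localization noise: jitter the box
            x += rng.uniform(-0.1, 0.1) * w
            y += rng.uniform(-0.1, 0.1) * h
        noisy.append((cls, (x, y, w, h)))
    if rng.random() < p:              # bogus box: add a spurious annotation
        noisy.append((rng.randrange(num_classes),
                      (rng.uniform(0, 1), rng.uniform(0, 1), 0.2, 0.2)))
    return noisy

anns = [(0, (0.2, 0.2, 0.3, 0.3)), (1, (0.6, 0.5, 0.2, 0.4))]
print(inject_annotation_noise(anns, num_classes=3))
```

Corrupting clean annotations this way is how one would study, in controlled fashion, which noise type hurts a detector most.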

Story Visualization by Online Text Augmentation with Context Memory

1 code implementation • ICCV 2023 • Daechul Ahn, Daneul Kim, Gwangmo Song, Seung Hwan Kim, Honglak Lee, Dongyeop Kang, Jonghyun Choi

Story visualization (SV) is a challenging text-to-image generation task due to the difficulty of not only rendering visual details from the text descriptions but also encoding a long-term context across multiple sentences.

Sentence • Story Visualization • +2

DLCFT: Deep Linear Continual Fine-Tuning for General Incremental Learning

no code implementations • 17 Aug 2022 • Hyounguk Shon, Janghyeon Lee, Seung Hwan Kim, Junmo Kim

We show that this allows us to design a linear model in which quadratic parameter regularization emerges as the optimal continual learning policy, while at the same time enjoying the high performance of neural networks.

Class Incremental Learning • Image Classification • +1
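For a linear model, the quadratic parameter regularization the snippet mentions admits a closed-form update: fit the new task while penalizing deviation from the previous task's weights. The sketch below shows that generic mechanism under illustrative names and data; it is not the paper's exact method:

```python
import numpy as np

def continual_linear_update(X, y, w_prev, lam=1.0):
    """One continual-learning step for a linear regression model: minimize
    ||X w - y||^2 + lam * ||w - w_prev||^2, whose closed-form solution is
        w = (X^T X + lam I)^{-1} (X^T y + lam w_prev).
    The quadratic penalty anchors the weights to the previous task."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y + lam * w_prev)

rng = np.random.default_rng(0)
X1, y1 = rng.standard_normal((50, 5)), rng.standard_normal(50)
X2, y2 = rng.standard_normal((50, 5)), rng.standard_normal(50)

w1 = continual_linear_update(X1, y1, np.zeros(5))   # task 1 from scratch
w2 = continual_linear_update(X2, y2, w1)            # task 2, anchored to w1
print(np.linalg.norm(w2 - w1))
```

Larger `lam` keeps the new weights closer to the old ones, trading plasticity for stability, which is the basic dial in this family of continual-learning methods.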

Factors that affect the technological transition of firms toward the industry 4.0 technologies

no code implementations • 6 Sep 2022 • Seung Hwan Kim, Jeong hwan Jeon, Anwar Aridi, Bogang Jun

Using the technology space of firms, we can identify firms that successfully develop a new Industry 4.0 technology and examine whether their accumulated capabilities in their previous technology domains positively affect their technological diversification, and which factors play a critical role in their transition toward Industry 4.0.

UniCLIP: Unified Framework for Contrastive Language-Image Pre-training

no code implementations • 27 Sep 2022 • Janghyeon Lee, Jongsuk Kim, Hyounguk Shon, Bumsoo Kim, Seung Hwan Kim, Honglak Lee, Junmo Kim

Pre-training vision-language models with contrastive objectives has shown promising results that are both scalable to large uncurated datasets and transferable to many downstream applications.

Significantly Improving Zero-Shot X-ray Pathology Classification via Fine-tuning Pre-trained Image-Text Encoders

no code implementations • 14 Dec 2022 • Jongseong Jang, Daeun Kyung, Seung Hwan Kim, Honglak Lee, Kyunghoon Bae, Edward Choi

However, large-scale and high-quality data to train powerful neural networks are rare in the medical domain as the labeling must be done by qualified experts.

Classification • Contrastive Learning • +2

Misalign, Contrast then Distill: Rethinking Misalignments in Language-Image Pretraining

no code implementations • 19 Dec 2023 • Bumsoo Kim, Yeonsik Jo, Jinhyung Kim, Seung Hwan Kim

Contrastive Language-Image Pretraining has emerged as a prominent approach for training vision and text encoders with uncurated image-text pairs from the web.

Image Augmentation • Metric Learning • +1
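The contrastive language-image pretraining objective that this snippet builds on can be sketched as a symmetric InfoNCE loss over paired image and text embeddings. The code below is the generic CLIP-style formulation, not the paper's misalignment-aware variant; names, batch size, and the temperature value are illustrative:

```python
import numpy as np

def clip_style_loss(img_emb, txt_emb, temperature=0.07):
    """Symmetric contrastive loss over a batch of paired image/text
    embeddings: matched pairs share the same row index and should score
    higher than all mismatched combinations in the batch."""
    img = img_emb / np.linalg.norm(img_emb, axis=1, keepdims=True)
    txt = txt_emb / np.linalg.norm(txt_emb, axis=1, keepdims=True)
    logits = img @ txt.T / temperature            # (B, B) similarity matrix
    labels = np.arange(len(logits))
    def xent(lg):                                 # row-wise cross-entropy
        lg = lg - lg.max(axis=1, keepdims=True)
        logp = lg - np.log(np.exp(lg).sum(axis=1, keepdims=True))
        return -logp[labels, labels].mean()
    # Average the image-to-text and text-to-image directions.
    return 0.5 * (xent(logits) + xent(logits.T))

rng = np.random.default_rng(0)
emb = rng.standard_normal((4, 32))
# Perfectly aligned pairs yield a much lower loss than random pairings.
print(clip_style_loss(emb, emb), clip_style_loss(emb, rng.standard_normal((4, 32))))
```

The "misalignments" the paper targets arise because web-crawled pairs in this batch-wise objective are treated as exactly matched even when the caption only loosely describes the image.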

Expediting Contrastive Language-Image Pretraining via Self-distilled Encoders

no code implementations • 19 Dec 2023 • Bumsoo Kim, Jinhyung Kim, Yeonsik Jo, Seung Hwan Kim

Based on the unified text embedding space, ECLIPSE compensates for the additional computational cost of the momentum image encoder by expediting the online image encoder.

Knowledge Distillation
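The momentum image encoder the snippet refers to is typically maintained as an exponential moving average (EMA) of the online encoder, so no gradients flow through it. The sketch below shows that generic mechanism with illustrative parameter lists and momentum value; it is not ECLIPSE's exact update schedule:

```python
import numpy as np

def ema_update(online_params, momentum_params, m=0.999):
    """Momentum-encoder update used by self-distillation schemes: each
    momentum ("teacher") weight tracks an exponential moving average of
    the corresponding online ("student") weight."""
    return [m * p_m + (1.0 - m) * p_o
            for p_o, p_m in zip(online_params, momentum_params)]

# Toy parameters standing in for encoder weight tensors.
online = [np.ones((2, 2))]
momentum = [np.zeros((2, 2))]
momentum = ema_update(online, momentum, m=0.9)
print(momentum[0][0, 0])  # approx. 0.1 after one update with m=0.9
```

Because the teacher is a slowly moving copy rather than a second trained network, its forward pass is the main extra cost, which is the overhead the paper's expedited online encoder is said to compensate for.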
