Search Results for author: Inseop Chung

Found 12 papers, 2 papers with code

Feature Fusion for Online Mutual Knowledge Distillation

1 code implementation • 19 Apr 2019 • Jangho Kim, Minsung Hyun, Inseop Chung, Nojun Kwak

We propose a learning framework named Feature Fusion Learning (FFL) that efficiently trains a powerful classifier through a fusion module which combines the feature maps generated from parallel neural networks.

Knowledge Distillation
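
The fusion module described above merges the feature maps of parallel sub-networks into a single, stronger classifier. Below is a minimal PyTorch sketch of that idea, assuming channel-wise concatenation followed by a 1x1 convolution and a linear head; FFL's actual fusion design and its mutual-distillation losses may differ.

```python
import torch
import torch.nn as nn

class FusionModule(nn.Module):
    """Fuses feature maps from two parallel sub-networks into one
    classifier (illustrative layout, not FFL's exact architecture)."""
    def __init__(self, channels: int, num_classes: int):
        super().__init__()
        # 1x1 conv merges the concatenated maps back to `channels`
        self.fuse = nn.Conv2d(2 * channels, channels, kernel_size=1)
        self.head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(channels, num_classes),
        )

    def forward(self, feat_a: torch.Tensor, feat_b: torch.Tensor) -> torch.Tensor:
        fused = self.fuse(torch.cat([feat_a, feat_b], dim=1))
        return self.head(fused)
```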

Training Multi-Object Detector by Estimating Bounding Box Distribution for Input Image

3 code implementations • ICCV 2021 • Jaeyoung Yoo, Hojun Lee, Inseop Chung, Geonseok Seo, Nojun Kwak

Instead of assigning each ground truth to specific locations of the network's output, we train a network by estimating the probability density of bounding boxes in an input image using a mixture model.

Density Estimation • Object • +2
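
The abstract describes training the detector by maximizing the likelihood of ground-truth boxes under a predicted mixture density. A minimal sketch of that objective follows, assuming a diagonal Gaussian mixture over box coordinates; the paper's actual mixture model and how the network parameterizes it may differ.

```python
import math
import torch
import torch.nn.functional as F

def mixture_nll(pi_logits, mu, sigma, gt_boxes):
    """Negative log-likelihood of ground-truth boxes under a predicted
    Gaussian mixture (illustrative form of the density-estimation loss).

    pi_logits: (K,)   mixture weight logits
    mu:        (K, 4) component means over (cx, cy, w, h)
    sigma:     (K, 4) component std devs (positive)
    gt_boxes:  (N, 4) ground-truth boxes in the same encoding
    """
    log_pi = F.log_softmax(pi_logits, dim=0)                # (K,)
    diff = (gt_boxes[:, None, :] - mu[None, :, :]) / sigma  # (N, K, 4)
    log_prob = (-0.5 * diff.pow(2) - sigma.log()
                - 0.5 * math.log(2 * math.pi)).sum(dim=-1)  # (N, K)
    # marginalize over mixture components, average over boxes
    return -torch.logsumexp(log_pi[None, :] + log_prob, dim=1).mean()
```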

Feature-map-level Online Adversarial Knowledge Distillation

no code implementations • ICML 2020 • Inseop Chung, SeongUk Park, Jangho Kim, Nojun Kwak

By training a network to fool the corresponding discriminator, it can learn the other network's feature map distribution.

Knowledge Distillation
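
The abstract describes one network learning another's feature-map distribution by fooling a discriminator. A minimal sketch of one direction of that scheme, assuming a discriminator D that maps a batch of features to one logit per sample; in the online setting both networks play both roles symmetrically.

```python
import torch
import torch.nn as nn

bce = nn.BCEWithLogitsLoss()

def discriminator_loss(D, feat_a, feat_b):
    # D is trained to output 1 for network A's features, 0 for B's
    real = bce(D(feat_a.detach()), torch.ones(feat_a.size(0), 1))
    fake = bce(D(feat_b.detach()), torch.zeros(feat_b.size(0), 1))
    return real + fake

def fooling_loss(D, feat_b):
    # network B is updated so that D labels its features as A's,
    # pulling B's feature-map distribution toward A's
    return bce(D(feat_b), torch.ones(feat_b.size(0), 1))
```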

Maximizing Cosine Similarity Between Spatial Features for Unsupervised Domain Adaptation in Semantic Segmentation

no code implementations • 25 Feb 2021 • Inseop Chung, Daesik Kim, Nojun Kwak

We propose a novel method that tackles the problem of unsupervised domain adaptation for semantic segmentation by maximizing the cosine similarity between the source and the target domain at the feature level.

Segmentation • Semantic Segmentation • +1
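
A minimal sketch of the stated objective: maximizing cosine similarity between source and target features along the channel dimension. Pairing the two feature maps position by position is an assumption for illustration; the paper matches source and target features in a more structured way.

```python
import torch
import torch.nn.functional as F

def cosine_alignment_loss(src_feat, tgt_feat):
    """src_feat, tgt_feat: (B, C, H, W) spatial features from the
    segmentation encoder. Minimizing this loss maximizes channel-wise
    cosine similarity between the two domains."""
    cos = F.cosine_similarity(src_feat, tgt_feat, dim=1)  # (B, H, W)
    return (1.0 - cos).mean()
```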

End-to-End Multi-Object Detection with a Regularized Mixture Model

no code implementations • 18 May 2022 • Jaeyoung Yoo, Hojun Lee, Seunghyeon Seo, Inseop Chung, Nojun Kwak

Recent end-to-end multi-object detectors simplify the inference pipeline by removing hand-crafted processes such as non-maximum suppression (NMS).

Density Estimation • Object • +2

Dummy Prototypical Networks for Few-Shot Open-Set Keyword Spotting

no code implementations • 28 Jun 2022 • Byeonggeun Kim, Seunghan Yang, Inseop Chung, Simyung Chang

We also verify our method on a standard benchmark, miniImageNet, where D-ProtoNets achieves a state-of-the-art open-set detection rate in FSOSR.

Keyword Spotting • Metric Learning • +1

Personalized Keyword Spotting through Multi-task Learning

no code implementations • 28 Jun 2022 • Seunghan Yang, Byeonggeun Kim, Inseop Chung, Simyung Chang

We design two personalized KWS tasks: (1) Target user Biased KWS (TB-KWS) and (2) Target user Only KWS (TO-KWS).

Keyword Spotting • Multi-Task Learning • +1
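
A minimal sketch of a multi-task setup for the two personalized KWS tasks above: a shared encoder with one head per task. The head shapes, and the extra "reject non-target speaker" class for TO-KWS, are assumptions for illustration, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class MultiTaskKWS(nn.Module):
    def __init__(self, feat_dim: int, num_keywords: int):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(feat_dim, 256), nn.ReLU())
        # TB-KWS: keyword classification biased toward the target user
        self.tb_head = nn.Linear(256, num_keywords)
        # TO-KWS: spot keywords only from the target user; non-target
        # speech is routed to an extra reject class (an assumption here)
        self.to_head = nn.Linear(256, num_keywords + 1)

    def forward(self, x):
        h = self.encoder(x)
        return self.tb_head(h), self.to_head(h)
```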

Unsupervised Domain Adaptation for One-stage Object Detector using Offsets to Bounding Box

no code implementations • 20 Jul 2022 • Jayeon Yoo, Inseop Chung, Nojun Kwak

Most existing domain adaptive object detection methods exploit adversarial feature alignment to adapt the model to a new domain.

object-detection • Object Detection • +1
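
For context, the adversarial feature alignment the abstract refers to is commonly built on a gradient reversal layer (DANN-style): a domain discriminator is trained on the features, while reversed gradients push the feature extractor to confuse it. The sketch below shows that standard construction, not this paper's proposed offset-based method.

```python
import torch
from torch.autograd import Function

class GradReverse(Function):
    """Identity in the forward pass; flips (and scales) the gradient in
    the backward pass so the feature extractor learns to confuse the
    domain discriminator."""
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lam * grad_output, None

def grad_reverse(x, lam=1.0):
    return GradReverse.apply(x, lam)
```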

Open Domain Generalization with a Single Network by Regularization Exploiting Pre-trained Features

no code implementations • 8 Dec 2023 • Inseop Chung, KiYoon Yoo, Nojun Kwak

To handle this task, the model has to learn a generalizable representation that can be applied to unseen domains while also identifying unknown classes that were not present during training.

Domain Generalization
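
The title points to a regularizer that exploits pre-trained features to keep the single network's representation generalizable. One plausible form, sketched here as an assumption rather than the paper's exact objective, penalizes drift between the fine-tuned features and those of a frozen copy of the pre-trained backbone.

```python
import torch
import torch.nn.functional as F

def pretrained_feature_regularizer(feat, feat_frozen):
    """feat:        (B, D) features from the model being trained
    feat_frozen:   (B, D) features from a frozen pre-trained copy
    Penalizing the gap preserves the pre-trained, generalizable
    representation (illustrative form only)."""
    return F.mse_loss(feat, feat_frozen.detach())
```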

What, How, and When Should Object Detectors Update in Continually Changing Test Domains?

no code implementations • 12 Dec 2023 • Jayeon Yoo, Dongkwan Lee, Inseop Chung, Donghyun Kim, Nojun Kwak

It is a well-known fact that the performance of deep learning models deteriorates when they encounter a distribution shift at test time.

object-detection • Object Detection • +1

Mitigating the Bias in the Model for Continual Test-Time Adaptation

no code implementations • 2 Mar 2024 • Inseop Chung, Kyomin Hwang, Jayeon Yoo, Nojun Kwak

Continual Test-Time Adaptation (CTA) is a challenging task that aims to adapt a source pre-trained model to continually changing target domains.

Test-time Adaptation
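
To illustrate the CTA setting described above (not this paper's bias-mitigation method), a common baseline adapts the model online by minimizing prediction entropy on each unlabeled test batch, typically updating only normalization-layer parameters:

```python
import torch

def entropy_minimization_step(model, x, optimizer):
    """One online adaptation step on an unlabeled test batch x.
    `optimizer` is assumed to cover only a small parameter subset,
    e.g. normalization-layer affine weights."""
    probs = model(x).softmax(dim=1)
    # Shannon entropy of the predictions on the current batch
    entropy = -(probs * probs.clamp_min(1e-8).log()).sum(dim=1).mean()
    optimizer.zero_grad()
    entropy.backward()
    optimizer.step()
    return entropy.item()
```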
