Search Results for author: Dahuin Jung

Found 18 papers, 7 papers with code

Efficient Diffusion-Driven Corruption Editor for Test-Time Adaptation

no code implementations16 Mar 2024 Yeongtak Oh, Jonghyun Lee, Jooyoung Choi, Dahuin Jung, Uiwon Hwang, Sungroh Yoon

To address this, we propose a novel TTA method that leverages a latent diffusion model (LDM)-based image editing model, fine-tuning it with our newly introduced corruption modeling scheme.

Data Augmentation, Test-time Adaptation

Entropy is not Enough for Test-Time Adaptation: From the Perspective of Disentangled Factors

no code implementations12 Mar 2024 Jonghyun Lee, Dahuin Jung, Saehyung Lee, Junsung Park, Juhyeon Shin, Uiwon Hwang, Sungroh Yoon

To mitigate this, TTA methods have used the entropy of the model's output as a confidence metric, aiming to determine which samples are less likely to cause errors.
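The entropy-as-confidence idea the abstract refers to can be sketched as follows. This is a minimal illustration of the general technique, not the paper's actual method; the 0.5·ln(C) threshold is an arbitrary choice for illustration.

```python
import numpy as np

def softmax(logits, axis=-1):
    z = logits - logits.max(axis=axis, keepdims=True)  # stabilize exponentials
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def prediction_entropy(logits):
    """Shannon entropy of the predictive distribution; lower = more confident."""
    p = softmax(logits)
    return -(p * np.log(p + 1e-12)).sum(axis=-1)

# Keep only samples below an entropy threshold before adapting on them.
logits = np.array([[4.0, 0.1, 0.1],   # peaked prediction -> low entropy
                   [1.0, 0.9, 1.1]])  # near-uniform -> high entropy
H = prediction_entropy(logits)
reliable = H < 0.5 * np.log(logits.shape[-1])  # illustrative threshold
```

In this sketch the first sample would be kept for adaptation and the second discarded as unreliable.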

Object, Pseudo Label +1

Probabilistic Concept Bottleneck Models

2 code implementations2 Jun 2023 Eunji Kim, Dahuin Jung, Sangha Park, Siwon Kim, Sungroh Yoon

To provide a reliable interpretation against this ambiguity, we propose Probabilistic Concept Bottleneck Models (ProbCBM).

Diffusion-Stego: Training-free Diffusion Generative Steganography via Message Projection

no code implementations30 May 2023 Daegyu Kim, Chaehun Shin, Jooyoung Choi, Dahuin Jung, Sungroh Yoon

Diffusion-Stego achieved a high message capacity (3.0 bpp of binary messages with 98% accuracy, and 6.0 bpp with 90% accuracy) as well as high quality (a FID score of 2.77 for 1.0 bpp on the FFHQ 64$\times$64 dataset), making the stego images challenging to distinguish from real images in the PNG format.
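As a quick sanity check on what those bits-per-pixel rates imply, the payload per image follows directly from the image size (a back-of-the-envelope calculation, not taken from the paper):

```python
def capacity_bits(bpp, height, width):
    """Total embedded payload, in bits, for a given bits-per-pixel rate."""
    return int(bpp * height * width)

# On a 64x64 FFHQ image: 3.0 bpp -> 12,288 bits (1,536 bytes),
# and 6.0 bpp -> 24,576 bits (3,072 bytes) per image.
print(capacity_bits(3.0, 64, 64))  # 12288
print(capacity_bits(6.0, 64, 64))  # 24576
```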

Denoising Image Generation

Sample-efficient Adversarial Imitation Learning

no code implementations14 Mar 2023 Dahuin Jung, Hyungyu Lee, Sungroh Yoon

In particular, unlike existing self-supervised learning methods for tabular data, we propose a corruption method for state and action representations that is robust to diverse distortions.

Imitation Learning, Representation Learning +1

New Insights for the Stability-Plasticity Dilemma in Online Continual Learning

1 code implementation17 Feb 2023 Dahuin Jung, Dongjin Lee, Sunwon Hong, Hyemi Jang, Ho Bae, Sungroh Yoon

The aim of continual learning is to learn new tasks continuously (i.e., plasticity) without forgetting previously learned knowledge from old tasks (i.e., stability).

Continual Learning

Generating Instance-level Prompts for Rehearsal-free Continual Learning

no code implementations ICCV 2023 Dahuin Jung, Dongyoon Han, Jihwan Bang, Hwanjun Song

However, we observe that the use of a prompt pool creates a domain scalability problem between pre-training and continual learning.

Continual Learning

FedClassAvg: Local Representation Learning for Personalized Federated Learning on Heterogeneous Neural Networks

1 code implementation25 Oct 2022 Jaehee Jang, Heonseok Ha, Dahuin Jung, Sungroh Yoon

While existing methods require the collection of auxiliary data or model weights to generate a counterpart, FedClassAvg only requires clients to communicate a couple of fully connected layers, which is highly communication-efficient.

Personalized Federated Learning, Representation Learning +1

Confidence Score for Source-Free Unsupervised Domain Adaptation

1 code implementation14 Jun 2022 Jonghyun Lee, Dahuin Jung, Junho Yim, Sungroh Yoon

Unlike existing confidence scores that use only one of the source or target domain knowledge, the JMDS score uses both knowledge.

Unsupervised Domain Adaptation

Confidence Score Weighting Adaptation for Source-Free Unsupervised Domain Adaptation

no code implementations29 Sep 2021 Jonghyun Lee, Dahuin Jung, Junho Yim, Sungroh Yoon

Unsupervised domain adaptation (UDA) aims to achieve high performance within the unlabeled target domain by leveraging the labeled source domain.

Pseudo Label, Unsupervised Domain Adaptation

Stein Latent Optimization for Generative Adversarial Networks

1 code implementation ICLR 2022 Uiwon Hwang, Heeseung Kim, Dahuin Jung, Hyemi Jang, Hyungyu Lee, Sungroh Yoon

Generative adversarial networks (GANs) with clustered latent spaces can perform conditional generation in a completely unsupervised manner.

Attribute

PixelSteganalysis: Pixel-wise Hidden Information Removal with Low Visual Degradation

no code implementations28 Feb 2019 Dahuin Jung, Ho Bae, Hyun-Soo Choi, Sungroh Yoon

We propose a DL-based steganalysis technique that effectively removes secret images by restoring the distribution of the original images.

Steganalysis

HexaGAN: Generative Adversarial Nets for Real World Classification

1 code implementation26 Feb 2019 Uiwon Hwang, Dahuin Jung, Sungroh Yoon

We evaluate the classification performance (F1-score) of the proposed method with 20% missingness and confirm up to a 5% improvement in comparison with the performance of combinations of state-of-the-art methods.
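The F1-score reported above is the standard harmonic mean of precision and recall; the following is a minimal reference implementation of that metric (the metric only, not the paper's imputation pipeline):

```python
def f1_score(y_true, y_pred):
    """Binary F1 = harmonic mean of precision and recall."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# tp=2, fp=1, fn=1 -> precision = recall = 2/3 -> F1 = 2/3
print(f1_score([1, 1, 0, 1, 0], [1, 1, 1, 0, 0]))
```

F1 is preferred over plain accuracy here because real-world classification with missing data is often class-imbalanced.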

Classification, General Classification +2

AnomiGAN: Generative adversarial networks for anonymizing private medical data

no code implementations31 Jan 2019 Ho Bae, Dahuin Jung, Sungroh Yoon

We compared our method with state-of-the-art techniques and observed that it preserves the same level of privacy as differential privacy (DP) while achieving better prediction results.

Security and Privacy Issues in Deep Learning

no code implementations31 Jul 2018 Ho Bae, Jaehee Jang, Dahuin Jung, Hyemi Jang, Heonseok Ha, Hyungyu Lee, Sungroh Yoon

Furthermore, the privacy of the data involved in model training is also threatened by attacks such as the model-inversion attack, or by dishonest service providers of AI applications.
