Search Results for author: DongGyu Lee

Found 7 papers, 2 papers with code

PointT2I: LLM-based text-to-image generation via keypoints

no code implementations • 2 Jun 2025 • Taekyung Lee, DongGyu Lee, Myungjoo Kang

The keypoint generation uses an LLM to directly generate keypoints corresponding to a human pose, solely based on the input prompt, without external references.

Large Language Model • Text-to-Image Generation +1
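The keypoint step described above amounts to a structured prompt to an LLM. Below is a minimal Python sketch of that idea, assuming a hypothetical `query_llm` helper (stubbed here so the example runs offline); it is an illustration of the prompt-to-keypoints concept, not the authors' actual PointT2I pipeline.

```python
import json

def query_llm(prompt: str) -> str:
    """Hypothetical stand-in for an LLM call (any chat/completion client could
    be substituted). Stubbed with a fixed JSON response so the sketch runs
    offline; this is not the authors' PointT2I pipeline."""
    return json.dumps({
        "head": [0.50, 0.10], "left_shoulder": [0.40, 0.25],
        "right_shoulder": [0.60, 0.25], "left_hand": [0.20, 0.05],
        "right_hand": [0.80, 0.05], "left_foot": [0.42, 0.95],
        "right_foot": [0.58, 0.95],
    })

def generate_pose_keypoints(text_prompt: str) -> dict:
    """Ask the LLM for human-pose keypoints conditioned only on the input
    prompt, with no external pose reference."""
    instruction = (
        "Return a JSON object mapping human-pose keypoint names to "
        f"normalized [x, y] coordinates for: {text_prompt!r}"
    )
    return json.loads(query_llm(instruction))

if __name__ == "__main__":
    keypoints = generate_pose_keypoints("a person waving with both arms raised")
    for name, (x, y) in keypoints.items():
        print(f"{name}: ({x:.2f}, {y:.2f})")
```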

DEAL: Decoupled Classifier with Adaptive Linear Modulation for Group Robust Early Diagnosis of MCI to AD Conversion

no code implementations • 16 Nov 2024 • DongGyu Lee, Juhyeon Park, Taesup Moon

While deep learning-based Alzheimer's disease (AD) diagnosis has recently made significant advancements, particularly in predicting the conversion of mild cognitive impairment (MCI) to AD based on MRI images, there remains a critical gap in research regarding the group robustness of the diagnosis.

Spatial-and-Frequency-aware Restoration method for Images based on Diffusion Models

no code implementations • 31 Jan 2024 • Kyungsung Lee, DongGyu Lee, Myungjoo Kang

We comprehensively evaluate the performance of our model on a variety of noisy inverse problems, including inpainting, denoising, and super-resolution.

Denoising • Image Restoration +1

SwiFT: Swin 4D fMRI Transformer

1 code implementation • NeurIPS 2023 • Peter Yongho Kim, Junbeom Kwon, Sunghwan Joo, Sangyoon Bae, DongGyu Lee, Yoonho Jung, Shinjae Yoo, Jiook Cha, Taesup Moon

To address this challenge, we present SwiFT (Swin 4D fMRI Transformer), a Swin Transformer architecture that can learn brain dynamics directly from fMRI volumes in a memory- and computation-efficient manner.
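As a rough illustration of what learning directly from 4D fMRI volumes can look like at the input stage, the sketch below cuts a 4D (space × time) volume into patch tokens before any windowed attention is applied. The patch size and embedding width are placeholder assumptions, not SwiFT's published configuration.

```python
import torch
import torch.nn as nn

class PatchEmbed4D(nn.Module):
    """Split a single-channel 4D fMRI volume (H, W, D, T) into non-overlapping
    4D patches and project each patch to an embedding vector. A simplified
    sketch: patch size and embedding width are placeholders, not SwiFT's."""

    def __init__(self, patch=(4, 4, 4, 2), embed_dim=96):
        super().__init__()
        self.patch = patch
        ph, pw, pd, pt = patch
        self.proj = nn.Linear(ph * pw * pd * pt, embed_dim)

    def forward(self, x):
        # x: (B, H, W, D, T); each extent must be divisible by its patch size.
        B, H, W, D, T = x.shape
        ph, pw, pd, pt = self.patch
        x = x.reshape(B, H // ph, ph, W // pw, pw, D // pd, pd, T // pt, pt)
        x = x.permute(0, 1, 3, 5, 7, 2, 4, 6, 8)   # patch-grid dims first
        x = x.flatten(1, 4).flatten(2)              # (B, num_patches, ph*pw*pd*pt)
        return self.proj(x)                         # (B, num_patches, embed_dim)

if __name__ == "__main__":
    vol = torch.randn(1, 32, 32, 32, 4)             # toy fMRI volume
    print(PatchEmbed4D()(vol).shape)                # torch.Size([1, 1024, 96])
```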

Continual Learning in the Presence of Spurious Correlation

no code implementations • 21 Mar 2023 • DongGyu Lee, Sangwon Jung, Taesup Moon

Specifically, we first show through two-task CL experiments that standard CL methods, which are unaware of dataset bias, can transfer biases from one task to another, both forward and backward, and that this transfer is exacerbated depending on whether the CL method focuses on stability or plasticity.

Continual Learning • Transfer Learning

Fair Feature Distillation for Visual Recognition

no code implementations • CVPR 2021 • Sangwon Jung, DongGyu Lee, TaeEon Park, Taesup Moon

Fairness is becoming an increasingly crucial issue for computer vision, especially in human-related decision systems.

Fairness • Knowledge Distillation

Uncertainty-based Continual Learning with Adaptive Regularization

2 code implementations • NeurIPS 2019 • Hongjoon Ahn, Sungmin Cha, DongGyu Lee, Taesup Moon

We introduce a new neural network-based continual learning algorithm, dubbed Uncertainty-regularized Continual Learning (UCL), which builds on the traditional Bayesian online learning framework with variational inference.

Continual Learning • Reinforcement Learning +1
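The core idea, regularizing each weight in proportion to how certain the Bayesian posterior is about it, can be sketched as a per-parameter weighted quadratic penalty. The snippet below is a heavily simplified illustration of that idea in PyTorch, not the published UCL objective; the constant posterior standard deviation used in the demo is an assumption for illustration.

```python
import torch

def uncertainty_penalty(model, old_params, old_sigma, eps=1e-8):
    """Pull parameters toward their post-previous-task values, weighted by
    1 / sigma^2 so that low-uncertainty (important) weights are held more
    strongly. A simplified illustration, not UCL's exact objective."""
    penalty = torch.zeros(())
    for name, p in model.named_parameters():
        importance = 1.0 / (old_sigma[name] ** 2 + eps)
        penalty = penalty + (importance * (p - old_params[name]) ** 2).sum()
    return penalty

if __name__ == "__main__":
    model = torch.nn.Linear(4, 2)
    # Snapshot after "task 1": parameter means and assumed posterior stddevs.
    old_params = {n: p.detach().clone() for n, p in model.named_parameters()}
    old_sigma = {n: torch.full_like(p, 0.1) for n, p in model.named_parameters()}
    # During "task 2" training the total loss would be:
    #   loss = task_loss + lam * uncertainty_penalty(model, old_params, old_sigma)
    print(uncertainty_penalty(model, old_params, old_sigma).item())  # 0.0 before any update
```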
