no code implementations • 19 Dec 2023 • Xiaodan Zhang, Sandeep Vemulapalli, Nabasmita Talukdar, Sumyeong Ahn, Jiankun Wang, Han Meng, Sardar Mehtab Bin Murtaza, Aakash Ajay Dave, Dmitry Leshchiner, Dimitri F. Joseph, Martin Witteveen-Lane, Dave Chesla, Jiayu Zhou, Bin Chen
This study assesses the ability of state-of-the-art large language models (LLMs), including GPT-3.5, GPT-4, Falcon, and LLaMA 2, to identify patients with mild cognitive impairment (MCI) from discharge summaries and examines instances where the models' responses were misaligned with their reasoning.
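As a rough illustration only (the study's actual prompts and pipeline are not shown here), querying an LLM for an MCI decision together with its reasoning might look like the following; the prompt wording and model name are assumptions.

```python
# A minimal sketch (not the study's actual prompt) of asking an LLM to
# flag possible MCI in a discharge summary, requesting answer + reasoning
# so the two can later be checked for misalignment.
from openai import OpenAI  # assumes the `openai` package is installed

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def screen_for_mci(summary: str) -> str:
    prompt = (
        "Does the following discharge summary indicate mild cognitive "
        "impairment (MCI)? Answer YES or NO, then explain your reasoning.\n\n"
        + summary
    )
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
        temperature=0,  # deterministic output for screening
    )
    return response.choices[0].message.content
```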
no code implementations • 18 Nov 2023 • Jihwan Bang, Sumyeong Ahn, Jae-Gil Lee
In response to this inquiry, we observe that (1) simply applying a conventional active learning framework to pre-trained VLMs may even degrade performance compared to random selection because of class imbalance among the labeling candidates, and (2) the knowledge encoded in VLMs can provide hints for achieving balance before labeling.
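A hedged sketch of the balancing idea, assuming the VLM's zero-shot class predictions for the unlabeled pool are already available (e.g., from CLIP); the equal per-class quota is an illustrative choice, not the paper's exact algorithm.

```python
# Rebalance the pool of unlabeled candidates using the VLM's zero-shot
# predictions before querying labels. `zero_shot_preds` holds one
# predicted class index per unlabeled sample.
import numpy as np

def balanced_query(zero_shot_preds: np.ndarray, budget: int, num_classes: int):
    per_class = budget // num_classes  # equal quota per predicted class
    selected = []
    for c in range(num_classes):
        candidates = np.flatnonzero(zero_shot_preds == c)
        take = min(per_class, len(candidates))
        selected.extend(np.random.choice(candidates, take, replace=False))
    return np.array(selected)
```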
no code implementations • 24 Oct 2023 • Sumyeong Ahn, Sihyeon Kim, Jongwoo Ko, Se-Young Yun
To tackle this issue, researchers have explored Learning with Noisy Labels (LNL) methods that identify clean samples and reduce the influence of noisy labels.
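One widely used heuristic in this line of work is small-loss selection: samples the network fits with low loss early in training tend to have correct labels. A minimal sketch, not this paper's specific method:

```python
# Keep the `keep_ratio` fraction of samples with the smallest loss,
# treating them as likely clean.
import torch
import torch.nn.functional as F

def select_clean(logits: torch.Tensor, labels: torch.Tensor, keep_ratio: float):
    losses = F.cross_entropy(logits, labels, reduction="none")
    k = int(keep_ratio * len(losses))
    return torch.argsort(losses)[:k]  # indices of smallest-loss samples
```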
1 code implementation • 16 Oct 2023 • Jongwoo Ko, Seungjoon Park, Yujin Kim, Sumyeong Ahn, Du-Seong Chang, Euijai Ahn, Se-Young Yun
Structured pruning methods have proven effective in reducing the model size and accelerating inference speed in various network architectures such as Transformers.
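For intuition, a toy example of structured (head-level) pruning: score each attention head by the norm of its slice of the output projection and drop the weakest heads. Real methods, including the one above, typically use learned or gradient-based importance scores; the norm criterion here is only an assumption for illustration.

```python
# Rank attention heads by the L2 norm of their columns in the attention
# output projection, then mark the weakest heads for removal.
import torch

def head_importance(out_proj_weight: torch.Tensor, num_heads: int):
    # out_proj_weight: (d_model, d_model); input columns group by head
    d_model = out_proj_weight.shape[1]
    head_dim = d_model // num_heads
    per_head = out_proj_weight.view(d_model, num_heads, head_dim)
    return per_head.norm(dim=(0, 2))  # one importance score per head

scores = head_importance(torch.randn(768, 768), num_heads=12)
prune = torch.argsort(scores)[:4]  # the 4 least important heads
```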
1 code implementation • 10 Feb 2023 • Sumyeong Ahn, Jongwoo Ko, Se-Young Yun
To address this limitation, several methods have been developed that enrich the representations of minority samples by leveraging the features of majority samples.
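One such idea, sketched loosely here in the spirit of CutMix-style augmentation, pastes a patch from a minority-class image onto a majority-class image so that minority samples inherit the richer context of majority data; the shapes and fixed patch size are illustrative assumptions, not the paper's algorithm.

```python
# Copy a random square patch from a minority-class image onto a
# majority-class image to synthesize an extra minority sample.
import torch

def mix_minority(majority_img: torch.Tensor, minority_img: torch.Tensor,
                 patch: int = 16):
    # images: (C, H, W)
    _, h, w = majority_img.shape
    y = torch.randint(0, h - patch + 1, (1,)).item()
    x = torch.randint(0, w - patch + 1, (1,)).item()
    mixed = majority_img.clone()
    mixed[:, y:y + patch, x:x + patch] = minority_img[:, y:y + patch, x:x + patch]
    return mixed  # label follows the minority class (or a mix, per method)
```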
Ranked #13 on Long-tail Learning on CIFAR-100-LT (ρ=10)
no code implementations • 1 Dec 2022 • Sumyeong Ahn, Se-Young Yun
Furthermore, we find that running denoising algorithms before debiasing is ineffective because denoising algorithms reduce the impact of difficult-to-learn samples, including valuable bias-conflicting samples.
1 code implementation • 31 May 2022 • Sumyeong Ahn, Seongyoon Kim, Se-Young Yun
In this study, we propose a debiasing algorithm called PGD (Per-sample Gradient-based Debiasing), which comprises three steps: (1) training a model with uniform batch sampling, (2) setting the importance of each sample in proportion to the norm of its per-sample gradient, and (3) retraining the model with importance-weighted batch sampling, where each sample's probability is obtained in step (2).
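A condensed sketch of the three steps, with a naive one-sample-at-a-time gradient-norm computation for clarity; the model, dataset, and loss names are placeholders, not the paper's code.

```python
# Step (2): per-sample gradient norms via one backward pass per sample.
import torch
from torch.utils.data import DataLoader, WeightedRandomSampler

def per_sample_grad_norms(model, dataset, loss_fn):
    norms = []
    for x, y in DataLoader(dataset, batch_size=1):
        model.zero_grad()
        loss_fn(model(x), y).backward()
        g = torch.cat([p.grad.flatten() for p in model.parameters()
                       if p.grad is not None])
        norms.append(g.norm().item())
    return torch.tensor(norms)

# Step (1): train `model` with uniform batch sampling, then:
# weights = per_sample_grad_norms(model, train_set, loss_fn)
# Step (3): resample in proportion to the gradient norms:
# sampler = WeightedRandomSampler(weights, num_samples=len(weights))
# loader = DataLoader(train_set, batch_size=64, sampler=sampler)
```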
no code implementations • 29 Sep 2021 • Sumyeong Ahn, Se-Young Yun
The performance of deep neural networks (DNNs) primarily depends on the configuration of the training set.
no code implementations • 19 Feb 2020 • Hai H. Tran, Sumyeong Ahn, Taeyoung Lee, Yung Yi
In this paper, we propose strengthening the model's discriminativeness: we add a new, artificial class and train the model on the original data together with GAN-generated samples assigned to this new class.
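A minimal sketch of the K+1-class idea under stated assumptions (placeholder generator, assumed latent dimension of 128): real samples keep their labels 0..K-1, while generated samples receive the artificial class K.

```python
# Train a (K+1)-way classifier on real data plus GAN samples labeled
# with the new artificial class K.
import torch
import torch.nn.functional as F

def k_plus_one_loss(classifier, real_x, real_y, generator, num_classes: int):
    fake_x = generator(torch.randn(real_x.size(0), 128))  # assumed latent dim
    fake_y = torch.full((fake_x.size(0),), num_classes,
                        dtype=torch.long)                 # artificial class K
    x = torch.cat([real_x, fake_x])
    y = torch.cat([real_y, fake_y])
    return F.cross_entropy(classifier(x), y)  # classifier emits K+1 logits
```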