Search Results for author: Sumyeong Ahn

Found 9 papers, 3 papers with code

Large Language Models in Medical Term Classification and Unexpected Misalignment Between Response and Reasoning

no code implementations19 Dec 2023 Xiaodan Zhang, Sandeep Vemulapalli, Nabasmita Talukdar, Sumyeong Ahn, Jiankun Wang, Han Meng, Sardar Mehtab Bin Murtaza, Aakash Ajay Dave, Dmitry Leshchiner, Dimitri F. Joseph, Martin Witteveen-Lane, Dave Chesla, Jiayu Zhou, Bin Chen

This study assesses the ability of state-of-the-art large language models (LLMs) including GPT-3.5, GPT-4, Falcon, and LLaMA 2 to identify patients with mild cognitive impairment (MCI) from discharge summaries and examines instances where the models' responses were misaligned with their reasoning.

Decision Making · Prompt Engineering

Active Prompt Learning in Vision Language Models

no code implementations18 Nov 2023 Jihwan Bang, Sumyeong Ahn, Jae-Gil Lee

In response to this inquiry, we observe that (1) simply applying a conventional active learning framework to pre-trained VLMs may even degrade performance compared to random selection because of the class imbalance among labeling candidates, and (2) the knowledge of VLMs can provide hints for achieving balance before labeling.

Active Learning

Fine-tuning Pre-trained Models for Robustness Under Noisy Labels

no code implementations24 Oct 2023 Sumyeong Ahn, Sihyeon Kim, Jongwoo Ko, Se-Young Yun

To tackle this issue, researchers have explored methods for Learning with Noisy Labels to identify clean samples and reduce the influence of noisy labels.

Denoising · Learning with Noisy Labels

NASH: A Simple Unified Framework of Structured Pruning for Accelerating Encoder-Decoder Language Models

1 code implementation16 Oct 2023 Jongwoo Ko, Seungjoon Park, Yujin Kim, Sumyeong Ahn, Du-Seong Chang, Euijai Ahn, Se-Young Yun

Structured pruning methods have proven effective in reducing the model size and accelerating inference speed in various network architectures such as Transformers.

CUDA: Curriculum of Data Augmentation for Long-Tailed Recognition

1 code implementation10 Feb 2023 Sumyeong Ahn, Jongwoo Ko, Se-Young Yun

To handle this restriction, several methods have been developed that increase the representations of minority samples by leveraging the features of the majority samples.

Data Augmentation · Long-tail Learning

Denoising after Entropy-based Debiasing: A Robust Training Method for Dataset Bias with Noisy Labels

no code implementations1 Dec 2022 Sumyeong Ahn, Se-Young Yun

Furthermore, we find that running denoising algorithms before debiasing is ineffective because denoising algorithms reduce the impact of difficult-to-learn samples, including valuable bias-conflicting samples.

Denoising

Mitigating Dataset Bias by Using Per-sample Gradient

1 code implementation31 May 2022 Sumyeong Ahn, Seongyoon Kim, Se-Young Yun

In this study, we propose a debiasing algorithm, called PGD (Per-sample Gradient-based Debiasing), that comprises three steps: (1) training a model with uniform batch sampling, (2) setting each sample's importance in proportion to the norm of its gradient, and (3) training the model with importance-batch sampling, whose probabilities are obtained in step (2).
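A minimal sketch of steps (2) and (3) from the abstract above, assuming per-sample gradient norms are already available; the function names and the toy numbers are illustrative and not taken from the paper's released code.

```python
import numpy as np

def importance_weights(grad_norms):
    """Step (2): turn per-sample gradient norms into sampling
    probabilities proportional to each norm."""
    g = np.asarray(grad_norms, dtype=float)
    return g / g.sum()

def importance_batch(rng, n_samples, probs, batch_size):
    """Step (3): draw an importance-weighted batch of sample indices."""
    return rng.choice(n_samples, size=batch_size, replace=True, p=probs)

# Toy example: the sample with the largest gradient norm (in the debiasing
# setting, often a bias-conflicting sample) is drawn most often.
rng = np.random.default_rng(0)
norms = [0.1, 0.1, 0.1, 2.7]   # hypothetical per-sample gradient norms
p = importance_weights(norms)
batch = importance_batch(rng, len(norms), p, batch_size=8)
```

In practice the norms would come from a model trained in step (1); this sketch only shows how the sampling distribution is formed from them.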

Attribute

Mitigating Dataset Bias Using Per-Sample Gradients From A Biased Classifier

no code implementations29 Sep 2021 Sumyeong Ahn, Se-Young Yun

The performance of deep neural networks (DNNs) primarily depends on the configuration of the training set.

Enlarging Discriminative Power by Adding an Extra Class in Unsupervised Domain Adaptation

no code implementations19 Feb 2020 Hai H. Tran, Sumyeong Ahn, Taeyoung Lee, Yung Yi

In this paper, we propose strengthening the discriminative power by adding a new, artificial class and training the model on the data together with GAN-generated samples of the new class.

Unsupervised Domain Adaptation
