Search Results for author: Noah Lee

Found 5 papers, 4 papers with code

Margin-aware Preference Optimization for Aligning Diffusion Models without Reference

no code implementations · 10 Jun 2024 · Jiwoo Hong, Sayak Paul, Noah Lee, Kashif Rasul, James Thorne, Jongheon Jeong

In this paper, we focus on the alignment of recent text-to-image diffusion models, such as Stable Diffusion XL (SDXL), and find that this "reference mismatch" is indeed a significant problem in aligning these models due to the unstructured nature of visual modalities: e.g., a preference for a particular stylistic aspect can easily induce such a discrepancy.

ORPO: Monolithic Preference Optimization without Reference Model

4 code implementations · 12 Mar 2024 · Jiwoo Hong, Noah Lee, James Thorne

While recent preference alignment algorithms for language models have demonstrated promising results, supervised fine-tuning (SFT) remains imperative for achieving successful convergence.
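ORPO folds preference alignment into the SFT objective itself, removing the need for a separate reference model. As a hedged illustration (a sketch of the paper's published odds-ratio term, not the authors' code), the preference loss can be written in terms of length-normalized sequence log-probabilities; the function name and the weight `lam` are hypothetical:

```python
import math

def orpo_odds_ratio_loss(logp_chosen: float, logp_rejected: float,
                         lam: float = 0.1) -> float:
    """Sketch of ORPO's odds-ratio penalty (assumption: inputs are
    length-normalized log-probabilities of the chosen/rejected responses)."""
    def log_odds(logp: float) -> float:
        # odds(y|x) = P(y|x) / (1 - P(y|x)), computed in log space
        return logp - math.log1p(-math.exp(logp))

    # log odds ratio between the chosen and rejected responses
    ratio = log_odds(logp_chosen) - log_odds(logp_rejected)
    # L_OR = -log sigmoid(ratio); the full ORPO objective adds
    # lam * L_OR to the ordinary SFT NLL on the chosen response.
    return lam * -math.log(1.0 / (1.0 + math.exp(-ratio)))
```

The penalty shrinks as the model assigns higher relative odds to the chosen response, so a single fine-tuning pass both imitates the chosen outputs (via SFT) and pushes probability mass away from the rejected ones.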

Robust Fine-Tuning of Vision-Language Models for Domain Generalization

1 code implementation · 3 Nov 2023 · Kevin Vogt-Lowell, Noah Lee, Theodoros Tsiligkaridis, Marc Vaillant

To address these gaps, we present a new recipe for few-shot fine-tuning of the popular vision-language foundation model CLIP and evaluate its performance on challenging benchmark datasets with realistic distribution shifts from the WILDS collection.

Tasks: Domain Generalization · Few-Shot Learning · +1

Can Large Language Models Capture Dissenting Human Voices?

1 code implementation · 23 May 2023 · Noah Lee, Na Min An, James Thorne

Large language models (LLMs) have shown impressive achievements in solving a broad range of tasks.

Tasks: Natural Language Inference · Natural Language Understanding
