no code implementations • 21 Mar 2024 • Jinyung Hong, Eun Som Jeon, Changhoon Kim, Keun Hee Park, Utkarsh Nath, Yezhou Yang, Pavan Turaga, Theodore P. Pavlic
Biased attributes, which are spuriously correlated with target labels in a dataset, can lead neural networks to learn improper shortcuts for classification, limiting their capacity for out-of-distribution (OOD) generalization.
no code implementations • 7 Dec 2023 • Maitreya Patel, Changhoon Kim, Sheng Cheng, Chitta Baral, Yezhou Yang
The T2I prior model alone adds a billion parameters compared to Latent Diffusion Models, which increases both the computational cost and the demand for high-quality training data.
no code implementations • 7 Jun 2023 • Changhoon Kim, Kyle Min, Maitreya Patel, Sheng Cheng, Yezhou Yang
The rapid advancement of generative models, facilitating the creation of hyper-realistic images from textual descriptions, has concurrently escalated critical societal concerns such as misinformation.
1 code implementation • 17 Apr 2023 • GuangYu Nie, Changhoon Kim, Yezhou Yang, Yi Ren
This paper investigates the use of latent semantic dimensions as fingerprints, from which we analyze how design variables, including the choice of fingerprinting dimensions, strength, and capacity, affect the accuracy-quality tradeoff.
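The idea of fingerprinting along latent semantic dimensions can be illustrated with a minimal sketch. The setup below is purely hypothetical (the dimension sizes, the `strength` parameter, and the function names are illustrative assumptions, not the paper's implementation): a binary fingerprint is embedded by shifting a latent code along a set of reserved orthonormal directions, and recovered by projecting the marked code back onto those directions. The `strength` knob is what trades decoding accuracy against output quality.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: a d-dimensional latent space with k orthonormal
# directions reserved for fingerprinting (illustrative, not the paper's code).
d, k = 64, 8
directions, _ = np.linalg.qr(rng.standard_normal((d, k)))  # d x k, orthonormal columns

def embed_fingerprint(z, bits, strength):
    """Shift latent code z along the fingerprint directions.

    Each bit picks the sign of the shift along one direction;
    `strength` controls how far we move, trading image quality
    (small shift) against decoding accuracy (large shift).
    """
    signs = 2 * np.asarray(bits) - 1          # map {0,1} -> {-1,+1}
    return z + strength * (directions @ signs)

def decode_fingerprint(z_marked):
    """Recover the bits by projecting onto the fingerprint directions."""
    return (directions.T @ z_marked > 0).astype(int)

z = rng.standard_normal(d)                    # stand-in for a generator's latent code
bits = rng.integers(0, 2, size=k)             # the fingerprint to embed
z_fp = embed_fingerprint(z, bits, strength=6.0)
recovered = decode_fingerprint(z_fp)
```

With a large `strength` the projection dominates the latent code's own component along each direction and the bits decode reliably; shrinking it preserves the latent (and hence image quality) at the cost of bit errors, which is the accuracy-quality tradeoff the paper studies.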
no code implementations • ICLR 2021 • Changhoon Kim, Yi Ren, Yezhou Yang
Growing applications of generative models have led to new threats such as malicious personation and digital copyright infringement.
2 code implementations • 22 Feb 2019 • Amedeo Sapio, Marco Canini, Chen-Yu Ho, Jacob Nelson, Panos Kalnis, Changhoon Kim, Arvind Krishnamurthy, Masoud Moshref, Dan R. K. Ports, Peter Richtárik
Training machine learning models in parallel is an increasingly important workload.