no code implementations • 6 Dec 2023 • Junhyuk So, Jungwon Lee, Eunhyeok Park
The substantial computational costs of diffusion models, especially due to the repeated denoising steps necessary for high-quality image generation, present a major obstacle to their widespread adoption.
no code implementations • NeurIPS 2023 • Junhyuk So, Jungwon Lee, Daehyun Ahn, HyungJun Kim, Eunhyeok Park
The diffusion model has gained popularity in vision applications due to its remarkable generative performance and versatility.
1 code implementation • ACM SIGKDD Conference on Knowledge Discovery and Data Mining 2022 • Changdae Oh, Heeji Won, Junhyuk So, Taero Kim, Yewon Kim, Hosik Choi, Kyungwoo Song
We provide a new type of contrastive loss, motivated by Gaussian and Student-t kernels, for distributional contrastive learning, together with a theoretical analysis.
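The entry above mentions contrastive losses built from Gaussian and Student-t kernels. As an illustrative sketch only (not the paper's implementation; the function names and the InfoNCE-style combination are assumptions), a kernel similarity can replace the usual cosine similarity in a contrastive objective:

```python
import numpy as np

def gaussian_kernel(x, y, sigma=1.0):
    # k(x, y) = exp(-||x - y||^2 / (2 * sigma^2))
    d2 = np.sum((x - y) ** 2, axis=-1)
    return np.exp(-d2 / (2 * sigma**2))

def student_t_kernel(x, y, nu=1.0):
    # k(x, y) = (1 + ||x - y||^2 / nu) ** (-(nu + 1) / 2)
    # Heavier tails than the Gaussian kernel, so distant pairs
    # still receive non-negligible similarity.
    d2 = np.sum((x - y) ** 2, axis=-1)
    return (1.0 + d2 / nu) ** (-(nu + 1) / 2)

def kernel_contrastive_loss(anchor, positive, negatives, kernel):
    # InfoNCE-style loss with a kernel similarity in place of
    # cosine similarity: pull the positive close, push negatives away.
    pos = kernel(anchor, positive)                 # scalar
    neg = kernel(anchor[None, :], negatives)       # one value per negative
    return -np.log(pos / (pos + neg.sum()))

anchor = np.array([1.0, 0.0])
positive = np.array([1.0, 0.0])
negatives = np.array([[0.0, 1.0], [-1.0, 0.0]])
loss = kernel_contrastive_loss(anchor, positive, negatives, gaussian_kernel)
```

The choice of kernel changes how sharply negatives are penalized with distance; the Student-t kernel's heavy tails keep gradients alive for far-away negatives.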
1 code implementation • CVPR 2023 • JunCheol Shin, Junhyuk So, Sein Park, Seungyeop Kang, Sungjoo Yoo, Eunhyeok Park
Recently, pseudo-quantization training has been proposed as an alternative that updates the learnable parameters using pseudo-quantization noise instead of the straight-through estimator (STE).
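The contrast between STE-based quantization and pseudo-quantization noise can be sketched as follows. This is a minimal illustration under standard definitions, not the paper's method: STE applies hard rounding in the forward pass (treating it as identity in the backward pass, not shown here), while pseudo-quantization training replaces rounding with uniform noise of matching magnitude so the loss stays differentiable in the weights:

```python
import numpy as np

rng = np.random.default_rng(0)

def ste_quantize(w, step):
    # Forward pass of STE: hard rounding onto a uniform grid
    # with spacing `step`. The backward pass (not shown) would
    # pretend the rounding is the identity function.
    return np.round(w / step) * step

def pseudo_quant_noise(w, step):
    # Pseudo-quantization training: inject uniform noise in
    # [-step/2, step/2], matching the rounding error's range,
    # instead of actually rounding. The result is differentiable
    # in w, avoiding STE's forward/backward gradient mismatch.
    noise = rng.uniform(-0.5, 0.5, size=w.shape) * step
    return w + noise

w = np.linspace(-1.0, 1.0, 5)
step = 0.25
hard = ste_quantize(w, step)        # lies exactly on the grid
soft = pseudo_quant_noise(w, step)  # within step/2 of w, off-grid
```

Both perturb the weights by at most `step / 2`, so the noisy forward pass statistically mimics quantization error while remaining smooth for gradient descent.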
1 code implementation • NeurIPS 2023 • Changdae Oh, Junhyuk So, Hoyoon Byun, Yongtaek Lim, Minchul Shin, Jong-June Jeon, Kyungwoo Song
Such a lack of alignment and uniformity might restrict the transferability and robustness of embeddings.
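The alignment and uniformity referred to above are standard embedding-quality measures (in the sense of Wang and Isola); a minimal sketch of those two metrics, assuming L2-normalized embeddings and not taken from the paper itself, is:

```python
import numpy as np

def alignment(pos_a, pos_b, alpha=2):
    # Alignment: mean distance between embeddings of positive pairs,
    # raised to the power alpha. Lower is better (matched inputs map
    # to nearby points on the hypersphere).
    return np.mean(np.linalg.norm(pos_a - pos_b, axis=1) ** alpha)

def uniformity(x, t=2):
    # Uniformity: log of the average Gaussian potential over all
    # distinct pairs. Lower (more negative) means the embeddings
    # spread more evenly over the hypersphere.
    d2 = np.sum((x[:, None, :] - x[None, :, :]) ** 2, axis=-1)
    iu = np.triu_indices_from(d2, k=1)  # distinct pairs only
    return np.log(np.mean(np.exp(-t * d2[iu])))
```

An embedding space with poor alignment (positives far apart) or poor uniformity (embeddings collapsed into a narrow cone) tends to transfer badly, which is the restriction the abstract alludes to.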