no code implementations • RepL4NLP (ACL) 2022 • Changwook Jun, Hansol Jang, Myoseop Sim, Hyun Kim, Jooyoung Choi, Kyungkoo Min, Kyunghoon Bae
Pre-trained language models have brought significant performance improvements across a variety of natural language processing tasks.
no code implementations • 25 Apr 2024 • Changho Lee, Janghoon Han, Seonghyeon Ye, Stanley Jungkyu Choi, Honglak Lee, Kyunghoon Bae
Instruction tuning has been shown not only to enhance zero-shot generalization across various tasks but also to improve performance on specific tasks.
no code implementations • 13 Mar 2024 • Yao Fu, Dong-Ki Kim, Jaekyeom Kim, Sungryull Sohn, Lajanugen Logeswaran, Kyunghoon Bae, Honglak Lee
The primary limitation of large language models (LLMs) is their restricted understanding of the world.
1 code implementation • 26 May 2023 • Jeeho Hyun, Sangyun Kim, Giyoung Jeon, Seung Hwan Kim, Kyunghoon Bae, Byung Jun Kang
In this paper, we introduce ReConPatch, which constructs discriminative features for anomaly detection by training a linear modulation of patch features extracted from a pre-trained model (a minimal sketch follows this entry).
Ranked #1 on Anomaly Detection on MVTec AD
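The abstract names the mechanism without detail, so the following is a minimal sketch of the idea under stated assumptions, not the authors' implementation: patch features from a frozen pre-trained backbone pass through a trainable linear layer (the "linear modulation"), and test patches are scored by their distance to the nearest nominal patch in the adapted space. The dimensions, normalization, and nearest-neighbor scoring rule are illustrative assumptions, and the training loss is omitted.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PatchFeatureAdapter(nn.Module):
    """Trainable linear modulation of frozen backbone patch features.

    Hypothetical dimensions; the backbone itself stays frozen and is
    assumed to have already produced the patch feature vectors.
    """

    def __init__(self, in_dim: int = 1024, out_dim: int = 128):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)  # the trainable modulation

    def forward(self, patch_feats: torch.Tensor) -> torch.Tensor:
        # patch_feats: (num_patches, in_dim) from a frozen pre-trained model
        return F.normalize(self.linear(patch_feats), dim=-1)

def anomaly_scores(test_feats: torch.Tensor,
                   memory_bank: torch.Tensor,
                   adapter: PatchFeatureAdapter) -> torch.Tensor:
    # Assumed scoring rule: distance of each test patch to the nearest
    # nominal (defect-free) patch in the adapted feature space.
    z_test = adapter(test_feats)        # (N, out_dim)
    z_mem = adapter(memory_bank)        # (M, out_dim) nominal patches
    dists = torch.cdist(z_test, z_mem)  # (N, M) pairwise distances
    return dists.min(dim=1).values      # per-patch anomaly score
```

In this sketch, image-level detection would take the maximum per-patch score; how the adapter is actually trained (the paper's contrastive objective) is not reproduced here.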
no code implementations • 14 Dec 2022 • Jongseong Jang, Daeun Kyung, Seung Hwan Kim, Honglak Lee, Kyunghoon Bae, Edward Choi
However, large-scale, high-quality data for training powerful neural networks are scarce in the medical domain, as labeling must be done by qualified experts.
1 code implementation • CVPR 2022 • TaeHoon Kim, Gwangmo Song, Sihaeng Lee, Sangyun Kim, Yewon Seo, Soonyoung Lee, Seung Hwan Kim, Honglak Lee, Kyunghoon Bae
Unlike other models, BiART can distinguish between an image (or text) used as a conditional reference and one used as a generation target; one plausible way to realize this is sketched after this entry.
Ranked #1 on Image Reconstruction on ImageNet 256x256
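The snippet states the capability without the mechanism. One plausible realization, sketched below purely as an assumption, is a learned segment embedding that tags every token as either a conditional reference or a generation target; the REF/GEN names and all sizes here are hypothetical, not the paper's exact design.

```python
import torch
import torch.nn as nn

REF, GEN = 0, 1  # hypothetical segment ids: reference vs. generation target

class SegmentAwareEmbedding(nn.Module):
    """Adds a learned segment embedding so the transformer can tell
    which span is the condition and which is being generated."""

    def __init__(self, vocab_size: int = 16384, d_model: int = 512):
        super().__init__()
        self.tok = nn.Embedding(vocab_size, d_model)
        self.seg = nn.Embedding(2, d_model)  # REF vs. GEN

    def forward(self, token_ids: torch.Tensor,
                segment_ids: torch.Tensor) -> torch.Tensor:
        # token_ids, segment_ids: (batch, seq_len)
        return self.tok(token_ids) + self.seg(segment_ids)
```

Under this scheme, text-to-image generation would tag text tokens REF and image tokens GEN, and image-to-text would swap the roles, letting a single model serve both directions.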
no code implementations • 1 Oct 2020 • Sam Sattarzadeh, Mahesh Sudhakar, Anthony Lem, Shervin Mehryar, K. N. Plataniotis, Jongseong Jang, Hyunwoo Kim, Yeonjeong Jeong, Sangmin Lee, Kyunghoon Bae
In this work, we collect visualization maps from multiple layers of the model using an attribution-based input sampling technique and aggregate them to produce a fine-grained and complete explanation.
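As a rough illustration of the pipeline described above, here is a hedged sketch: feature maps from several layers are upsampled into input-sized masks, each mask is scored by the confidence the model assigns to the target class on the masked input, and the confidence-weighted masks are summed into one explanation map. The normalization, bilinear upsampling, and weighting rule are assumptions for illustration, not the paper's exact procedure.

```python
import torch
import torch.nn.functional as F

def explanation_map(model, x, feature_maps, target):
    # x: (1, 3, H, W) input image
    # feature_maps: list of (C, h, w) activations from different layers
    # target: index of the class being explained
    _, _, H, W = x.shape
    saliency = torch.zeros(H, W)
    for fmap in feature_maps:
        # Upsample each channel to input resolution and scale to [0, 1]
        # so it can act as a soft perturbation mask.
        masks = F.interpolate(fmap.unsqueeze(1), size=(H, W),
                              mode="bilinear", align_corners=False)
        masks = (masks - masks.amin()) / (masks.amax() - masks.amin() + 1e-8)
        for m in masks:  # m: (1, H, W)
            with torch.no_grad():
                # Weight the mask by the model's confidence on the
                # masked input (attribution-based input sampling).
                score = model(x * m).softmax(dim=-1)[0, target]
            saliency += score * m[0]
    return saliency / saliency.max()
```

Aggregating across layers is what gives the map both fine detail (early layers) and semantic completeness (late layers), which is the claim the abstract snippet makes.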