1 code implementation • Findings (ACL) 2022 • KiYoon Yoo, Jangho Kim, Jiho Jang, Nojun Kwak
Word-level adversarial attacks have shown success against NLP models, drastically decreasing the performance of transformer-based models in recent years.
no code implementations • 22 Aug 2023 • Donghoon Han, Seunghyeon Seo, Donghyeon Jeon, Jiho Jang, Chaerin Kong, Nojun Kwak
Transformers have demonstrated tremendous success not only in the natural language processing (NLP) domain but also in the field of computer vision, igniting various creative approaches and applications.
1 code implementation • 3 May 2023 • KiYoon Yoo, Wonhyuk Ahn, Jiho Jang, Nojun Kwak
Recent years have witnessed a proliferation of valuable original natural language content found in subscription-based media outlets, web novel platforms, and the outputs of large language models.
no code implementations • 21 Nov 2022 • Jiho Jang, Chaerin Kong, Donghyeon Jeon, Seonhoon Kim, Nojun Kwak
Contrastive learning is a form of distance learning that aims to learn invariant features from two related representations.
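A minimal sketch of the idea described above, not the paper's actual objective: an InfoNCE-style contrastive loss that pulls the two related representations of each instance together while pushing apart representations of different instances. The function name, batch size, and temperature below are illustrative assumptions.

```python
import numpy as np

def info_nce_loss(z1, z2, temperature=0.1):
    """InfoNCE-style contrastive loss: row i of z1 should match row i of z2
    (the positive pair) and differ from every other row (the negatives)."""
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)  # L2-normalize views
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    logits = z1 @ z2.T / temperature  # scaled cosine similarities
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    idx = np.arange(len(z1))
    return -log_prob[idx, idx].mean()  # cross-entropy on the diagonal

rng = np.random.default_rng(0)
z = rng.normal(size=(4, 8))
aligned = info_nce_loss(z, z)                           # two identical "views"
mismatched = info_nce_loss(z, rng.normal(size=(4, 8)))  # unrelated "views"
```

When the two inputs are related views of the same instances, the diagonal similarities dominate and the loss is small; for unrelated inputs it approaches the uniform-guessing value of log(batch size).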
1 code implementation • 25 Nov 2021 • Jiho Jang, Seonhoon Kim, KiYoon Yoo, Chaerin Kong, Jangho Kim, Nojun Kwak
Through self-distillation, the intermediate layers are better suited for instance discrimination, making the performance of an early-exited sub-network not much degraded from that of the full network.
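As a hedged illustration of the early-exit idea (the layer count, sizes, and weights below are made up, and the self-distillation training itself is omitted): an encoder that can return an intermediate layer's representation instead of running the full stack.

```python
import numpy as np

# Toy 4-layer encoder with random weights; in the paper, self-distillation
# trains each intermediate layer's output to track the full network's, so
# exiting early costs little performance. Only the inference path is sketched.
rng = np.random.default_rng(0)
layers = [rng.normal(scale=0.5, size=(16, 16)) for _ in range(4)]

def encode(x, exit_at=None):
    """Run the stack; if exit_at is given, stop after that many layers."""
    h = x
    for depth, w in enumerate(layers, start=1):
        h = np.tanh(h @ w)
        if depth == exit_at:
            return h  # early-exited sub-network's representation
    return h          # full network's representation

x = rng.normal(size=(2, 16))
early = encode(x, exit_at=2)  # runs only 2 of the 4 layers
full = encode(x)              # runs the whole stack
```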
no code implementations • 7 Oct 2021 • Simyung Chang, KiYoon Yoo, Jiho Jang, Nojun Kwak
Utilizing SEO for PFL, we also introduce self-evolutionary Pareto networks (SEPNet), enabling the unified model to approximate the entire Pareto front set that maximizes the hypervolume.
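For context, the hypervolume that SEPNet maximizes measures the objective-space region a solution set dominates relative to a reference point. A minimal two-objective (maximization) version can be computed with a sweep; the point set below is made up for illustration.

```python
import numpy as np

def hypervolume_2d(points, ref):
    """Area dominated by 2-objective points (both maximized) above `ref`."""
    pts = np.array([p for p in points if p[0] > ref[0] and p[1] > ref[1]])
    if pts.size == 0:
        return 0.0
    pts = pts[np.argsort(-pts[:, 0])]  # sweep from largest first objective
    area, best_y = 0.0, ref[1]
    for x, y in pts:
        if y > best_y:  # point extends the dominated region upward
            area += (x - ref[0]) * (y - best_y)
            best_y = y
    return area

# Made-up Pareto front trading off objective 1 against objective 2.
front = [(3.0, 1.0), (2.0, 2.0), (1.0, 3.0)]
hv = hypervolume_2d(front, ref=(0.0, 0.0))  # area of the union of rectangles
```

A set that approximates the entire Pareto front well dominates a larger region, so maximizing hypervolume rewards both quality and coverage of the front.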