Search Results for author: Jiho Jang

Found 7 papers, 3 papers with code

ConcatPlexer: Additional Dim1 Batching for Faster ViTs

no code implementations • 22 Aug 2023 • Donghoon Han, Seunghyeon Seo, Donghyeon Jeon, Jiho Jang, Chaerin Kong, Nojun Kwak

Transformers have demonstrated tremendous success not only in the natural language processing (NLP) domain but also in the field of computer vision, inspiring various creative approaches and applications.

Robust Multi-bit Natural Language Watermarking through Invariant Features

1 code implementation • 3 May 2023 • KiYoon Yoo, Wonhyuk Ahn, Jiho Jang, Nojun Kwak

Recent years have witnessed a proliferation of valuable original natural language content found in subscription-based media outlets, web novel platforms, and the outputs of large language models.

Unifying Vision-Language Representation Space with Single-tower Transformer

no code implementations • 21 Nov 2022 • Jiho Jang, Chaerin Kong, Donghyeon Jeon, Seonhoon Kim, Nojun Kwak

Contrastive learning is a form of distance learning that aims to learn invariant features from two related representations.

Contrastive Learning · Object Localization · +3
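
The abstract above describes contrastive learning as learning invariant features from two related representations. As a rough, hedged illustration (not the paper's single-tower implementation), the Python sketch below computes a symmetric InfoNCE-style loss between two batches of paired embeddings; the function name, temperature, and dimensions are assumptions made for the example.

import torch
import torch.nn.functional as F

def contrastive_loss(z_a, z_b, temperature=0.07):
    # z_a, z_b: (batch, dim) embeddings of paired samples from two views or modalities.
    z_a = F.normalize(z_a, dim=-1)
    z_b = F.normalize(z_b, dim=-1)
    logits = z_a @ z_b.t() / temperature              # (batch, batch) similarity matrix
    targets = torch.arange(z_a.size(0), device=z_a.device)
    # Matching pairs sit on the diagonal; score each row and column as a classification problem.
    return 0.5 * (F.cross_entropy(logits, targets) + F.cross_entropy(logits.t(), targets))

# Toy usage with random tensors standing in for encoder outputs.
z_img, z_txt = torch.randn(8, 256), torch.randn(8, 256)
print(contrastive_loss(z_img, z_txt).item())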

Detection of Word Adversarial Examples in Text Classification: Benchmark and Baseline via Robust Density Estimation

no code implementations • 3 Mar 2022 • KiYoon Yoo, Jangho Kim, Jiho Jang, Nojun Kwak

Word-level adversarial attacks have shown success against NLP models, drastically decreasing the performance of transformer-based models in recent years.

Adversarial Defense · Density Estimation · +3
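
The title points to detecting adversarial examples via robust density estimation. The paper's exact estimator is not reproduced here; purely as a hedged sketch of the general idea, the snippet below fits a Gaussian to clean-text features and scores inputs by Mahalanobis distance, flagging unusually low-density inputs as suspicious. The feature dimensions and the comparison at the end are placeholders.

import numpy as np

def fit_gaussian(clean_feats):
    # clean_feats: (n, d) features extracted from clean training sentences.
    mean = clean_feats.mean(axis=0)
    cov = np.cov(clean_feats, rowvar=False) + 1e-6 * np.eye(clean_feats.shape[1])
    return mean, np.linalg.inv(cov)

def mahalanobis_score(x, mean, precision):
    d = x - mean
    return float(d @ precision @ d)  # larger score = less likely under the clean distribution

rng = np.random.default_rng(0)
clean = rng.normal(size=(500, 32))           # stand-in for clean-sentence features
mean, precision = fit_gaussian(clean)
suspect = rng.normal(loc=3.0, size=32)       # stand-in for a perturbed (adversarial) input
print(mahalanobis_score(suspect, mean, precision) > mahalanobis_score(clean[0], mean, precision))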

Self-Distilled Self-Supervised Representation Learning

1 code implementation • 25 Nov 2021 • Jiho Jang, Seonhoon Kim, KiYoon Yoo, Chaerin Kong, Jangho Kim, Nojun Kwak

Through self-distillation, the intermediate layers are better suited for instance discrimination, making the performance of an early-exited sub-network not much degraded from that of the full network.

Representation Learning · Self-Supervised Learning
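
The sentence above explains that self-distillation makes intermediate layers usable as early exits. As an illustrative, hedged sketch (not the paper's architecture), the code below distills a detached final embedding into projection heads attached to earlier blocks; the layer sizes, cosine objective, and stop-gradient choice are assumptions.

import torch
import torch.nn as nn
import torch.nn.functional as F

class EarlyExitEncoder(nn.Module):
    def __init__(self, dim=128, depth=4):
        super().__init__()
        self.blocks = nn.ModuleList(nn.Sequential(nn.Linear(dim, dim), nn.ReLU()) for _ in range(depth))
        self.heads = nn.ModuleList(nn.Linear(dim, dim) for _ in range(depth))  # one head per exit

    def forward(self, x):
        feats = []
        for block, head in zip(self.blocks, self.heads):
            x = block(x)
            feats.append(head(x))
        return feats  # embeddings from every exit; the last one is the full network

def self_distillation_loss(feats):
    # Pull each intermediate embedding toward the detached final embedding (cosine distance).
    teacher = F.normalize(feats[-1].detach(), dim=-1)
    losses = [(1 - (F.normalize(f, dim=-1) * teacher).sum(dim=-1)).mean() for f in feats[:-1]]
    return sum(losses) / len(losses)

feats = EarlyExitEncoder()(torch.randn(16, 128))
print(self_distillation_loss(feats).item())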

Self-Evolutionary Optimization for Pareto Front Learning

no code implementations • 7 Oct 2021 • Simyung Chang, KiYoon Yoo, Jiho Jang, Nojun Kwak

Utilizing SEO for PFL, we also introduce self-evolutionary Pareto networks (SEPNet), enabling the unified model to approximate the entire Pareto front set that maximizes the hypervolume.

Multi-Task Learning
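
The abstract mentions approximating the Pareto front set that maximizes the hypervolume. SEO and SEPNet themselves are not reproduced here; purely as a hedged, toy example of the hypervolume indicator, the sketch below computes it for a two-objective minimization front, with a made-up front and reference point.

import numpy as np

def hypervolume_2d(front, ref):
    # front: (n, 2) mutually non-dominated points (minimization); ref: reference point.
    pts = front[np.argsort(front[:, 0])]           # sort by the first objective
    hv, prev_f2 = 0.0, ref[1]
    for f1, f2 in pts:
        hv += (ref[0] - f1) * (prev_f2 - f2)       # add the horizontal slab this point contributes
        prev_f2 = f2
    return hv

front = np.array([[0.1, 0.9], [0.4, 0.5], [0.8, 0.2]])   # toy Pareto front
print(hypervolume_2d(front, ref=np.array([1.0, 1.0])))    # 0.39 for this toy front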
