Search Results for author: Youngmin Ro

Found 9 papers, 5 papers with code

Partial Large Kernel CNNs for Efficient Super-Resolution

1 code implementation • 18 Apr 2024 • Dongheon Lee, Seokju Yun, Youngmin Ro

As a result, we introduce Partial Large Kernel CNNs for Efficient Super-Resolution (PLKSR), which achieves state-of-the-art performance on four datasets at a scale of $\times$4, with reductions of 68.1\% in latency and 80.2\% in maximum GPU memory occupancy compared to SRFormer-light.
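The "partial" idea is to apply the costly large-kernel convolution to only a fraction of the channels and let the remaining channels pass through untouched. A minimal pure-Python sketch of that channel split (the 25% ratio, function names, and naive convolution here are illustrative assumptions, not the paper's exact design):

```python
def conv2d_same(channel, kernel):
    """Naive 2D convolution with zero padding ('same' output size)."""
    k = len(kernel)
    pad = k // 2
    h, w = len(channel), len(channel[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            acc = 0.0
            for dy in range(k):
                for dx in range(k):
                    yy, xx = y + dy - pad, x + dx - pad
                    if 0 <= yy < h and 0 <= xx < w:
                        acc += channel[yy][xx] * kernel[dy][dx]
            out[y][x] = acc
    return out

def partial_large_kernel(channels, kernel, split_ratio=0.25):
    """Apply the large kernel to the first split_ratio of channels;
    the rest are passed through unchanged (identity)."""
    n_conv = max(1, int(len(channels) * split_ratio))
    processed = [conv2d_same(c, kernel) for c in channels[:n_conv]]
    return processed + channels[n_conv:]  # untouched channels pass through
```

Because most channels skip the expensive operator entirely, latency and peak memory grow with the split ratio rather than with the full channel count, which is the source of the efficiency gains reported above.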

Image Super-Resolution

SHViT: Single-Head Vision Transformer with Memory Efficient Macro Design

1 code implementation • 29 Jan 2024 • Seokju Yun, Youngmin Ro

For object detection and instance segmentation on MS COCO using a Mask-RCNN head, our model achieves performance comparable to FastViT-SA12 while exhibiting 3.8x and 2.0x lower backbone latency on GPU and mobile device, respectively.

Image Classification • Instance Segmentation +2

Arbitrary-Scale Downscaling of Tidal Current Data Using Implicit Continuous Representation

no code implementations • 29 Jan 2024 • Dongheon Lee, Seungmyong Jeong, Youngmin Ro

Numerical models have long been used to understand geoscientific phenomena, including tidal currents, crucial for renewable energy production and coastal engineering.

Dynamic Mobile-Former: Strengthening Dynamic Convolution with Attention and Residual Connection in Kernel Space

1 code implementation • 13 Apr 2023 • Seokju Yun, Youngmin Ro

We introduce Dynamic Mobile-Former (DMF), which maximizes the capabilities of dynamic convolution by harmonizing it with efficient operators. Our Dynamic Mobile-Former effectively exploits the advantages of Dynamic MobileNet (MobileNet equipped with dynamic convolution) using global information from light-weight attention. The Transformer in Dynamic Mobile-Former requires only a few randomly initialized tokens to compute global features, making it computationally efficient. A bridge between Dynamic MobileNet and the Transformer allows for bidirectional integration of local and global features. We also simplify the optimization of vanilla dynamic convolution by splitting the convolution kernel into an input-agnostic kernel and an input-dependent kernel. This allows optimization over a wider kernel space, resulting in enhanced capacity. By integrating light-weight attention and enhanced dynamic convolution, our Dynamic Mobile-Former achieves not only high efficiency but also strong performance. We benchmark Dynamic Mobile-Former on a series of vision tasks and show that it achieves impressive performance on image classification, COCO detection, and instance segmentation. For example, our DMF reaches 79.4% top-1 accuracy on ImageNet-1K, 4.3% higher than PVT-Tiny with only 1/4 of the FLOPs. Additionally, our DMF-S model performs well on challenging vision datasets such as COCO, achieving 39.0% mAP, which is 1% higher than that of the Mobile-Former 508M model despite using 3 GFLOPs less computation. Code and models are available at https://github.com/ysj9909/DMF
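The kernel split described above can be sketched in plain Python: the effective kernel is an input-agnostic base plus an input-dependent part scaled by a gate computed from the input (the sigmoid-of-mean gate and all names here are illustrative assumptions, not the paper's exact formulation):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def dynamic_kernel(x, w_static, w_dynamic):
    """Combine an input-agnostic kernel with an input-dependent one.

    x         : flat input feature vector (used only to compute the gate)
    w_static  : input-agnostic kernel weights (list of floats)
    w_dynamic : input-dependent kernel weights, scaled by a gate alpha(x)
    """
    alpha = sigmoid(sum(x) / len(x))  # toy gating function of the input
    return [ws + alpha * wd for ws, wd in zip(w_static, w_dynamic)]
```

Setting `w_dynamic` to zeros recovers an ordinary static kernel, which hints at why this decomposition is easier to optimize than a fully input-dependent one: the static part can be learned as in a standard convolution while the dynamic part adds capacity on top.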

Image Classification

FrePGAN: Robust Deepfake Detection Using Frequency-level Perturbations

no code implementations • 7 Feb 2022 • Yonghyun Jeong, Doyeon Kim, Youngmin Ro, Jongwon Choi

For our experiments, we design new test scenarios that differ from the training settings in GAN models, color manipulations, and object categories.

DeepFake Detection • Face Swapping

Self-supervised GAN Detector

no code implementations • 12 Nov 2021 • Yonghyun Jeong, Doyeon Kim, Pyounggeon Kim, Youngmin Ro, Jongwon Choi

Although the recent advancement in generative models brings diverse advantages to society, it can also be abused with malicious purposes, such as fraud, defamation, and fake news.

FICGAN: Facial Identity Controllable GAN for De-identification

no code implementations • 2 Oct 2021 • Yonghyun Jeong, Jooyoung Choi, Sungwon Kim, Youngmin Ro, Tae-Hyun Oh, Doyeon Kim, Heonseok Ha, Sungroh Yoon

In this work, we present Facial Identity Controllable GAN (FICGAN), which not only generates high-quality de-identified face images with ensured privacy protection, but also provides detailed controllability over attribute preservation for enhanced data utility.

Attribute De-identification

Backbone Can Not be Trained at Once: Rolling Back to Pre-trained Network for Person Re-Identification

1 code implementation • 18 Jan 2019 • Youngmin Ro, Jongwon Choi, Dae Ung Jo, Byeongho Heo, Jongin Lim, Jin Young Choi

Our strategy alleviates the problem of gradient vanishing in low-level layers and robustly trains the low-level layers to fit the ReID dataset, thereby increasing the performance of ReID tasks.
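The rollback strategy from the title can be sketched as a schedule that trains all layers each phase but restores progressively fewer low-level layers to their pre-trained weights, so the low-level layers are adapted last and most gently (a toy illustration; the phase boundaries, layer ordering, and update rule are assumptions, not the paper's exact procedure):

```python
import copy

def rolling_back_training(pretrained, update, n_phases):
    """Toy rollback schedule over a dict of layer name -> weights.

    Each phase "trains" every layer (here: `update` is applied to each),
    then the layers below that phase's boundary are rolled back to their
    pre-trained weights. The boundary shrinks each phase, so low-level
    layers keep their pre-trained initialization the longest.
    """
    layers = sorted(pretrained)  # low-level layer names sort first (assumption)
    weights = copy.deepcopy(pretrained)
    for phase in range(n_phases):
        for name in layers:
            weights[name] = update(weights[name])         # train every layer
        boundary = len(layers) * (n_phases - 1 - phase) // n_phases
        for name in layers[:boundary]:                    # roll low-level layers back
            weights[name] = copy.deepcopy(pretrained[name])
    return weights
```

Because the low-level layers repeatedly restart from the pre-trained weights, they receive fewer effective updates than the high-level layers, mirroring the claim that this keeps them close to their robust pre-trained state while still fitting the ReID dataset.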

Person Re-Identification • Pose Estimation
