1 code implementation • 18 Apr 2024 • Dongheon Lee, Seokju Yun, Youngmin Ro
As a result, we introduce Partial Large Kernel CNNs for Efficient Super-Resolution (PLKSR), which achieves state-of-the-art performance on four datasets at a scale of $\times$4, with reductions of 68.1\% in latency and 80.2\% in maximum GPU memory occupancy compared to SRFormer-light.
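The abstract does not spell out how the "partial" large-kernel convolution works; as a rough illustration only (function names, shapes, and the channel split are my own assumptions, not the paper's implementation), the idea of applying an expensive large depthwise kernel to just a subset of channels while passing the rest through can be sketched in NumPy:

```python
import numpy as np

def depthwise_conv2d(x, kernel):
    """Naive depthwise 2D convolution with zero padding ("same" output size).
    x: (C, H, W) feature map, kernel: (C, K, K) per-channel kernels."""
    c, h, w = x.shape
    k = kernel.shape[-1]
    pad = k // 2
    xp = np.pad(x, ((0, 0), (pad, pad), (pad, pad)))
    out = np.zeros_like(x)
    for ci in range(c):
        for i in range(h):
            for j in range(w):
                out[ci, i, j] = np.sum(xp[ci, i:i + k, j:j + k] * kernel[ci])
    return out

def partial_large_kernel_conv(x, kernel, n_conv):
    """Apply a large depthwise kernel to only the first n_conv channels;
    the remaining channels are passed through untouched and re-concatenated,
    so the large-kernel cost scales with n_conv rather than all of C."""
    conv_part = depthwise_conv2d(x[:n_conv], kernel)
    return np.concatenate([conv_part, x[n_conv:]], axis=0)
```

Because only `n_conv` of the `C` channels go through the large kernel, both latency and peak memory for the convolution shrink roughly in proportion, which is the kind of trade-off the abstract's latency/memory reductions point to.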
1 code implementation • 29 Jan 2024 • Seokju Yun, Youngmin Ro
For object detection and instance segmentation on MS COCO using a Mask-RCNN head, our model achieves performance comparable to FastViT-SA12 while exhibiting 3.8x and 2.0x lower backbone latency on GPU and mobile device, respectively.
no code implementations • 29 Jan 2024 • Dongheon Lee, Seungmyong Jeong, Youngmin Ro
Numerical models have long been used to understand geoscientific phenomena, including tidal currents, crucial for renewable energy production and coastal engineering.
1 code implementation • 13 Apr 2023 • Seokju Yun, Youngmin Ro
We introduce Dynamic Mobile-Former (DMF), which maximizes the capabilities of dynamic convolution by harmonizing it with efficient operators. Our Dynamic Mobile-Former effectively utilizes the advantages of Dynamic MobileNet (MobileNet equipped with dynamic convolution) using global information from light-weight attention. The Transformer in Dynamic Mobile-Former requires only a few randomly initialized tokens to compute global features, making it computationally efficient. A bridge between Dynamic MobileNet and the Transformer allows for bidirectional integration of local and global features. We also simplify the optimization of vanilla dynamic convolution by splitting the convolution kernel into an input-agnostic kernel and an input-dependent kernel, which allows optimization over a wider kernel space and enhances capacity. By integrating light-weight attention and enhanced dynamic convolution, Dynamic Mobile-Former achieves not only high efficiency but also strong performance. We benchmark Dynamic Mobile-Former on a series of vision tasks and show that it achieves impressive performance on image classification, COCO detection, and instance segmentation. For example, our DMF reaches 79.4% top-1 accuracy on ImageNet-1K, surpassing PVT-Tiny by 4.3% with only 1/4 of the FLOPs. Additionally, our proposed DMF-S model performs well on challenging vision datasets such as COCO, achieving 39.0% mAP, 1% higher than the Mobile-Former 508M model, while using 3 GFLOPs less computation. Code and models are available at https://github.com/ysj9909/DMF
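The kernel split described in the abstract (an input-agnostic kernel plus an input-dependent one) can be sketched as follows. This is a minimal NumPy illustration under my own assumptions about shapes and the attention mechanism (globally pooled features feeding a softmax over expert kernels), not DMF's actual implementation:

```python
import numpy as np

def dynamic_kernel(x, static_kernel, expert_kernels, attn_w, attn_b):
    """Form a per-input convolution kernel as the sum of an input-agnostic
    (static) kernel and an input-dependent mixture of expert kernels.

    x:              (C_in, H, W) input feature map
    static_kernel:  (C_out, C_in, K, K) shared across all inputs
    expert_kernels: (E, C_out, C_in, K, K) candidate kernels to mix
    attn_w, attn_b: (E, C_in) and (E,) parameters predicting mixture weights
    """
    # Global average pooling over spatial dims -> (C_in,)
    pooled = x.mean(axis=(1, 2))
    # Softmax attention over the E expert kernels
    logits = attn_w @ pooled + attn_b
    weights = np.exp(logits - logits.max())
    weights /= weights.sum()
    # Input-dependent kernel: attention-weighted sum of experts
    dyn = np.tensordot(weights, expert_kernels, axes=1)
    # Final kernel = input-agnostic part + input-dependent part
    return static_kernel + dyn
```

Keeping the static part outside the attention mixture means the optimizer can fit a shared kernel directly while the dynamic residual explores a wider kernel space, which matches the simplified-optimization argument in the abstract.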
no code implementations • 7 Feb 2022 • Yonghyun Jeong, Doyeon Kim, Youngmin Ro, Jongwon Choi
For experiments, we design new test scenarios varying from the training settings in GAN models, color manipulations, and object categories.
no code implementations • 12 Nov 2021 • Yonghyun Jeong, Doyeon Kim, Pyounggeon Kim, Youngmin Ro, Jongwon Choi
Although the recent advancement in generative models brings diverse advantages to society, it can also be abused with malicious purposes, such as fraud, defamation, and fake news.
no code implementations • 2 Oct 2021 • Yonghyun Jeong, Jooyoung Choi, Sungwon Kim, Youngmin Ro, Tae-Hyun Oh, Doyeon Kim, Heonseok Ha, Sungroh Yoon
In this work, we present Facial Identity Controllable GAN (FICGAN) for not only generating high-quality de-identified face images with ensured privacy protection, but also detailed controllability on attribute preservation for enhanced data utility.
1 code implementation • 14 Feb 2020 • Youngmin Ro, Jin Young Choi
Existing fine-tuning methods use a single learning rate over all layers.
1 code implementation • 18 Jan 2019 • Youngmin Ro, Jongwon Choi, Dae Ung Jo, Byeongho Heo, Jongin Lim, Jin Young Choi
Our strategy alleviates the problem of gradient vanishing in low-level layers and robustly trains the low-level layers to fit the ReID dataset, thereby increasing the performance of ReID tasks.