Search Results for author: Trung X. Pham

Found 12 papers, 3 papers with code

Cross-view Masked Diffusion Transformers for Person Image Synthesis

no code implementations • 2 Feb 2024 • Trung X. Pham, Zhang Kang, Chang D. Yoo

Our best model surpasses the pixel-based diffusion with $\frac{2}{3}$ of the parameters and achieves $5.43\times$ faster inference.

Denoising • Image Generation • +2

DifAugGAN: A Practical Diffusion-style Data Augmentation for GAN-based Single Image Super-resolution

no code implementations • 30 Nov 2023 • Axi Niu, Kang Zhang, Joshua Tian Jin Tee, Trung X. Pham, Jinqiu Sun, Chang D. Yoo, In So Kweon, Yanning Zhang

It is well known that the adversarial optimization of GAN-based image super-resolution (SR) methods makes the preceding SR model generate unpleasant and undesirable artifacts, leading to large distortion.

Attribute • Data Augmentation • +1

Learning from Multi-Perception Features for Real-World Image Super-resolution

no code implementations • 26 May 2023 • Axi Niu, Kang Zhang, Trung X. Pham, Pei Wang, Jinqiu Sun, In So Kweon, Yanning Zhang

Currently, there are two popular approaches for addressing real-world image super-resolution problems: degradation-estimation-based and blind-based methods.

Image Super-Resolution

Self-Supervised Visual Representation Learning via Residual Momentum

no code implementations • 17 Nov 2022 • Trung X. Pham, Axi Niu, Zhang Kang, Sultan Rizky Madjid, Ji Woo Hong, Daehyeok Kim, Joshua Tian Jin Tee, Chang D. Yoo

To solve this problem, we propose "residual momentum" to directly reduce this gap, encouraging the student to learn a representation as close as possible to that of the teacher, narrowing the performance gap with the teacher, and significantly improving existing SSL methods.

Contrastive Learning • Representation Learning • +1
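The student–teacher gap described in the abstract above can be sketched in a few lines. The following is an illustrative sketch only, not the authors' implementation: it combines a standard EMA ("momentum") teacher update, as used in momentum-based SSL methods, with a hypothetical extra term penalising the student–teacher representation gap. The toy linear encoders and the `gap_weight` parameter are assumptions for illustration.

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)

# Toy encoders standing in for the student and momentum-teacher backbones.
# (Here they are initialised differently so the gap is nonzero; in practice
# the teacher typically starts as a copy of the student.)
student = torch.nn.Linear(16, 8)
teacher = torch.nn.Linear(16, 8)

@torch.no_grad()
def momentum_update(student, teacher, m=0.99):
    # Standard EMA teacher update: teacher <- m * teacher + (1 - m) * student.
    for ps, pt in zip(student.parameters(), teacher.parameters()):
        pt.data.mul_(m).add_(ps.data, alpha=1 - m)

def residual_gap_loss(x, gap_weight=1.0):
    # Hypothetical gap penalty: pull the student's (normalised) representation
    # toward the frozen teacher's representation of the same input.
    zs = F.normalize(student(x), dim=1)
    with torch.no_grad():
        zt = F.normalize(teacher(x), dim=1)
    return gap_weight * (zs - zt).pow(2).sum(dim=1).mean()

x = torch.randn(4, 16)
loss = residual_gap_loss(x)   # would be added to the usual SSL objective
loss.backward()
momentum_update(student, teacher)
```

In a real training loop this gap term would be added to the method's existing SSL loss; only the student receives gradients, while the teacher follows via the EMA update.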

Dual Temperature Helps Contrastive Learning Without Many Negative Samples: Towards Understanding and Simplifying MoCo

2 code implementations • CVPR 2022 • Chaoning Zhang, Kang Zhang, Trung X. Pham, Axi Niu, Zhinan Qiao, Chang D. Yoo, In So Kweon

Contrastive learning (CL) is widely known to require many negative samples, 65536 in MoCo for instance; without such a dictionary, performance is often inferior because the negative sample size (NSS) is limited by the mini-batch size (MBS).

Contrastive Learning
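The NSS-vs-MBS constraint mentioned in the abstract is easy to see in a plain InfoNCE loss computed over a mini-batch, where each anchor gets only B−1 in-batch negatives. The sketch below shows standard single-temperature InfoNCE for illustration; it is not the paper's dual-temperature formulation, and the batch size and temperature are arbitrary.

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)

def infonce(q, k, tau=0.1):
    # q, k: (B, D) embeddings of two augmented views of the same batch.
    q = F.normalize(q, dim=1)
    k = F.normalize(k, dim=1)
    logits = q @ k.t() / tau          # (B, B) cosine-similarity matrix
    labels = torch.arange(q.size(0))  # diagonal entries are the positives;
                                      # each row has only B - 1 negatives,
                                      # i.e. NSS = MBS - 1 without a dictionary
    return F.cross_entropy(logits, labels)

B, D = 8, 32
loss = infonce(torch.randn(B, D), torch.randn(B, D))
```

MoCo sidesteps this limit with a 65536-entry queue of negatives; the paper above instead studies replacing the single `tau` with two temperatures, which is not reproduced here.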

Self-supervised Learning with Local Attention-Aware Feature

no code implementations • 1 Aug 2021 • Trung X. Pham, Rusty John Lloyd Mina, Dias Issa, Chang D. Yoo

In this work, we propose a novel methodology for self-supervised learning for generating global and local attention-aware visual features.

Self-Supervised Learning

Cascade RPN: Delving into High-Quality Region Proposal Network with Adaptive Convolution

2 code implementations • NeurIPS 2019 • Thang Vu, Hyunjun Jang, Trung X. Pham, Chang D. Yoo

This paper considers an architecture referred to as Cascade Region Proposal Network (Cascade RPN) for improving the region-proposal quality and detection performance by systematically addressing the limitation of the conventional RPN that heuristically defines the anchors and aligns the features to the anchors.

Object Detection • Region Proposal

Fast and Efficient Image Quality Enhancement via Desubpixel Convolutional Neural Networks

1 code implementation • ECCV 2018 • Thang Vu, Cao V. Nguyen, Trung X. Pham, Tung M. Luu, Chang D. Yoo

This paper considers a convolutional neural network for image quality enhancement referred to as the fast and efficient quality enhancement (FEQE) that can be trained for either image super-resolution or image enhancement to provide accurate yet visually pleasing images on mobile devices by addressing the following three main issues.

Image Enhancement • Image Super-Resolution
