Search Results for author: Axi Niu

Found 14 papers, 2 papers with code

Towards Understanding Dual BN In Hybrid Adversarial Training

no code implementations28 Mar 2024 Chenshuang Zhang, Chaoning Zhang, Kang Zhang, Axi Niu, Junmo Kim, In So Kweon

There is a growing concern about applying batch normalization (BN) in adversarial training (AT), especially when the model is trained on both adversarial samples and clean samples (termed Hybrid-AT).
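The dual-BN design this paper analyzes keeps separate batch-normalization statistics for clean and adversarial samples. A minimal NumPy sketch of that routing idea follows; the class name `DualBN`, the momentum value, and the affine-free normalization are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

class DualBN:
    """Sketch of dual batch normalization for Hybrid-AT: clean and
    adversarial batches are normalized with (and update) separate
    running statistics.  Illustrative only, not the paper's code."""

    def __init__(self, dim, momentum=0.1, eps=1e-5):
        self.momentum = momentum
        self.eps = eps
        # One set of running statistics per branch.
        self.stats = {
            "clean": {"mean": np.zeros(dim), "var": np.ones(dim)},
            "adv":   {"mean": np.zeros(dim), "var": np.ones(dim)},
        }

    def __call__(self, x, branch):
        """Normalize a batch x of shape (N, dim) with its own batch
        statistics, updating only the chosen branch's running stats."""
        s = self.stats[branch]
        mu, var = x.mean(axis=0), x.var(axis=0)
        s["mean"] = (1 - self.momentum) * s["mean"] + self.momentum * mu
        s["var"] = (1 - self.momentum) * s["var"] + self.momentum * var
        return (x - mu) / np.sqrt(var + self.eps)
```

Routing each sample type through its own statistics is the core of the setup the paper studies: mixing clean and adversarial batches in a single BN layer entangles two different feature distributions.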

DifAugGAN: A Practical Diffusion-style Data Augmentation for GAN-based Single Image Super-resolution

no code implementations30 Nov 2023 Axi Niu, Kang Zhang, Joshua Tian Jin Tee, Trung X. Pham, Jinqiu Sun, Chang D. Yoo, In So Kweon, Yanning Zhang

It is well known that the adversarial optimization of GAN-based image super-resolution (SR) methods makes the preceding SR model generate unpleasant and undesirable artifacts, leading to large distortion.

Attribute Data Augmentation +1

Multiple Object Tracking based on Occlusion-Aware Embedding Consistency Learning

no code implementations5 Nov 2023 Yaoqi Hu, Axi Niu, Yu Zhu, Qingsen Yan, Jinqiu Sun, Yanning Zhang

The OPM predicts occlusion information for each true detection, facilitating the selection of valid samples for consistency learning of the track's visual embedding.

Multiple Object Tracking Object +1

ACDMSR: Accelerated Conditional Diffusion Models for Single Image Super-Resolution

no code implementations3 Jul 2023 Axi Niu, Pham Xuan Trung, Kang Zhang, Jinqiu Sun, Yu Zhu, In So Kweon, Yanning Zhang

To speed up inference and further enhance the performance, our research revisits diffusion models in image super-resolution and proposes a straightforward yet significant diffusion model-based super-resolution method called ACDMSR (accelerated conditional diffusion model for image super-resolution).

Denoising Image Super-Resolution +1

Learning from Multi-Perception Features for Real-Word Image Super-resolution

no code implementations26 May 2023 Axi Niu, Kang Zhang, Trung X. Pham, Pei Wang, Jinqiu Sun, In So Kweon, Yanning Zhang

Currently, there are two popular approaches for addressing real-world image super-resolution problems: degradation-estimation-based and blind-based methods.

Image Super-Resolution

GRAN: Ghost Residual Attention Network for Single Image Super Resolution

no code implementations28 Feb 2023 Axi Niu, Pei Wang, Yu Zhu, Jinqiu Sun, Qingsen Yan, Yanning Zhang

GRAB consists of the Ghost Module and Channel and Spatial Attention Module (CSAM) to alleviate the generation of redundant features.

Image Super-Resolution
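The Ghost Module referenced in the GRAB description (from GhostNet) cuts redundancy by computing only a subset of "intrinsic" features with an expensive operation and deriving the rest with cheap per-channel operations. A toy NumPy sketch of that split follows; the function name, the matrix shapes, and the elementwise cheap operation standing in for the depthwise convolutions are assumptions for illustration.

```python
import numpy as np

def ghost_features(x, w_primary, w_cheap=None):
    """Toy Ghost Module: an expensive primary linear map produces
    intrinsic features; cheap per-channel ops generate 'ghost'
    features, so the full feature set costs roughly half as much.
    The elementwise cheap op is a stand-in for GhostNet's depthwise
    convolutions (illustrative assumption)."""
    intrinsic = x @ w_primary                 # expensive primary features
    if w_cheap is None:
        w_cheap = 0.5 * np.ones(w_primary.shape[1])
    ghost = intrinsic * w_cheap               # cheap per-channel transform
    return np.concatenate([intrinsic, ghost], axis=1)
```

In GRAB, such a module is combined with channel and spatial attention (CSAM) so that the cheaply generated features are reweighted before the residual connection.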

Self-Supervised Visual Representation Learning via Residual Momentum

no code implementations17 Nov 2022 Trung X. Pham, Axi Niu, Zhang Kang, Sultan Rizky Madjid, Ji Woo Hong, Daehyeok Kim, Joshua Tian Jin Tee, Chang D. Yoo

To solve this problem, we propose "residual momentum" to directly reduce this gap, encouraging the student to learn a representation as close to the teacher's as possible, narrowing the performance gap with the teacher, and significantly improving existing SSL.

Contrastive Learning Representation Learning +1

On the Pros and Cons of Momentum Encoder in Self-Supervised Visual Representation Learning

no code implementations11 Aug 2022 Trung Pham, Chaoning Zhang, Axi Niu, Kang Zhang, Chang D. Yoo

Exponential Moving Average (EMA or momentum) is widely used in modern self-supervised learning (SSL) approaches, such as MoCo, for enhancing performance.

Representation Learning Self-Supervised Learning
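The EMA (momentum) update this paper examines is the standard one used by momentum encoders such as MoCo's key encoder: the teacher's parameters track a slow exponential moving average of the student's. A minimal sketch, assuming parameters stored as a name-to-array dict:

```python
import numpy as np

def ema_update(teacher, student, m=0.99):
    """One momentum-encoder step: teacher <- m * teacher + (1 - m) * student.
    With m close to 1 the teacher evolves slowly and smoothly, which is
    the stabilizing effect (and the cost) the paper weighs up."""
    return {k: m * teacher[k] + (1 - m) * student[k] for k in teacher}
```

The single hyperparameter `m` controls the trade-off the paper's title alludes to: large `m` gives a stable but slowly-adapting teacher, small `m` a responsive but noisy one.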

Decoupled Adversarial Contrastive Learning for Self-supervised Adversarial Robustness

2 code implementations22 Jul 2022 Chaoning Zhang, Kang Zhang, Chenshuang Zhang, Axi Niu, Jiu Feng, Chang D. Yoo, In So Kweon

Adversarial training (AT) for robust representation learning and self-supervised learning (SSL) for unsupervised representation learning are two active research fields.

Adversarial Robustness Contrastive Learning +3

Dual Temperature Helps Contrastive Learning Without Many Negative Samples: Towards Understanding and Simplifying MoCo

2 code implementations CVPR 2022 Chaoning Zhang, Kang Zhang, Trung X. Pham, Axi Niu, Zhinan Qiao, Chang D. Yoo, In So Kweon

Contrastive learning (CL) is widely known to require many negative samples (65536 in MoCo, for instance); as a result, the performance of a dictionary-free framework is often inferior, because the negative sample size (NSS) is limited by its mini-batch size (MBS).

Contrastive Learning
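The coupling between negative sample size and mini-batch size that the abstract mentions is visible in a plain in-batch InfoNCE loss: without a memory dictionary, each query's negatives are just the other MBS−1 keys in the batch. A minimal NumPy sketch (the function name and temperature value are illustrative; this is the generic loss, not the paper's dual-temperature variant):

```python
import numpy as np

def info_nce(queries, keys, tau=0.07):
    """In-batch InfoNCE: positives sit on the diagonal of the
    (MBS, MBS) similarity matrix; every off-diagonal key acts as a
    negative, so NSS = MBS - 1 by construction."""
    q = queries / np.linalg.norm(queries, axis=1, keepdims=True)
    k = keys / np.linalg.norm(keys, axis=1, keepdims=True)
    logits = q @ k.T / tau                        # cosine similarities / temperature
    logits -= logits.max(axis=1, keepdims=True)   # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))            # cross-entropy on the diagonal
```

The paper's proposal replaces the single `tau` with two temperatures to decouple the roles the one temperature plays here, which is how it sidesteps the need for a huge negative dictionary.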

Fast Adversarial Training with Noise Augmentation: A Unified Perspective on RandStart and GradAlign

no code implementations11 Feb 2022 Axi Niu, Kang Zhang, Chaoning Zhang, Chenshuang Zhang, In So Kweon, Chang D. Yoo, Yanning Zhang

The former works only for a relatively small perturbation of 8/255 under the l_\infty constraint, while GradAlign extends the perturbation size to 16/255 (also under the l_\infty constraint) at the cost of being 3 to 4 times slower.

Data Augmentation
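The RandStart scheme the abstract refers to is the well-known fast-AT recipe: draw uniform noise inside the l_\infty ball, then take a single FGSM step and project back. A minimal NumPy sketch, with `grad_fn` standing in for the loss gradient with respect to the input (an assumed interface):

```python
import numpy as np

def fgsm_rs(x, grad_fn, eps=8/255, alpha=10/255, rng=None):
    """Fast adversarial example with a random start: uniform noise in
    the eps-ball (the 'noise augmentation'), one FGSM sign step, then
    projection back to the l_inf ball and the valid pixel range."""
    rng = rng or np.random.default_rng(0)
    delta = rng.uniform(-eps, eps, size=x.shape)     # RandStart noise
    delta = delta + alpha * np.sign(grad_fn(x + delta))  # one FGSM step
    delta = np.clip(delta, -eps, eps)                # project to l_inf ball
    return np.clip(x + delta, 0.0, 1.0)              # keep pixels in [0, 1]
```

Under the paper's unified view, the random draw of `delta` is what both RandStart and GradAlign effectively provide: noise that prevents the single-step attack from collapsing (catastrophic overfitting).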

Non-uniform Motion Deblurring with Blurry Component Divided Guidance

no code implementations15 Jan 2021 Pei Wang, Wei Sun, Qingsen Yan, Axi Niu, Rui Li, Yu Zhu, Jinqiu Sun, Yanning Zhang

To tackle the above problems, we present a deep two-branch network that handles blurry images via a component-divided module, which splits an image into two components according to their degree of blur.

Blind Image Deblurring Image Deblurring +1
