no code implementations • 14 Feb 2023 • Axi Niu, Kang Zhang, Trung X. Pham, Jinqiu Sun, Yu Zhu, In So Kweon, Yanning Zhang
Diffusion probabilistic models (DPM) have been widely adopted in image-to-image translation to generate high-quality images.
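The DPM forward (noising) process the entry refers to can be sampled in closed form. A minimal sketch, assuming the standard cumulative noise schedule `alpha_bar_t` (not a detail from this paper):

```python
import numpy as np

def diffuse(x0, alpha_bar_t, rng):
    """DPM forward (noising) process: sample x_t ~ q(x_t | x_0) in
    closed form from the cumulative noise schedule alpha_bar_t."""
    eps = rng.standard_normal(x0.shape)
    return np.sqrt(alpha_bar_t) * x0 + np.sqrt(1.0 - alpha_bar_t) * eps
```

At `alpha_bar_t = 1` the sample is the clean image; as it approaches 0 the sample approaches pure Gaussian noise, which is what the learned reverse process is trained to invert.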
no code implementations • 14 Aug 2022 • Zhiliang Wu, Hanyu Xuan, Changchang Sun, Kang Zhang, Yan Yan
Specifically, in this work, we propose an end-to-end trainable framework consisting of a completion network and a mask prediction network, which are designed to generate the corrupted contents of the current frame using the known mask and to decide the regions to be filled in the next frame, respectively.
no code implementations • 11 Aug 2022 • Trung Pham, Chaoning Zhang, Axi Niu, Kang Zhang, Chang D. Yoo
Exponential Moving Average (EMA or momentum) is widely used in modern self-supervised learning (SSL) approaches, such as MoCo, for enhancing performance.
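The momentum update mentioned here is a one-liner per weight tensor. A minimal MoCo-style sketch (the momentum value 0.9 in the toy example is illustrative, not from the paper):

```python
import numpy as np

def ema_update(target, online, momentum=0.99):
    """MoCo-style momentum update: the target (key) encoder's weights
    track an exponential moving average of the online encoder's."""
    return [momentum * t + (1.0 - momentum) * o for t, o in zip(target, online)]

# toy example with a single weight matrix per encoder
online = [np.ones((2, 2))]
target = [np.zeros((2, 2))]
target = ema_update(target, online, momentum=0.9)
```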
no code implementations • 30 Jul 2022 • Chaoning Zhang, Chenshuang Zhang, Junha Song, John Seon Keun Yi, Kang Zhang, In So Kweon
Masked autoencoders are scalable vision learners, as the title of MAE \cite{he2022masked} states, which suggests that self-supervised learning (SSL) in vision might follow a trajectory similar to that in NLP.
2 code implementations • 22 Jul 2022 • Chaoning Zhang, Kang Zhang, Chenshuang Zhang, Axi Niu, Jiu Feng, Chang D. Yoo, In So Kweon
Adversarial training (AT) for robust representation learning and self-supervised learning (SSL) for unsupervised representation learning are two active research fields.
1 code implementation • 5 Jul 2022 • Agus Gunawan, Xu Yin, Kang Zhang
Various normalization layers have been proposed to help the training of neural networks.
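As a generic example of what such a normalization layer computes (batch normalization here, not the layer proposed in this paper):

```python
import numpy as np

def batch_norm(x, eps=1e-5):
    """Forward pass of batch normalization (without the learned affine
    parameters): normalize each feature over the batch dimension."""
    mu = x.mean(axis=0)
    var = x.var(axis=0)
    return (x - mu) / np.sqrt(var + eps)
```

Keeping each feature approximately zero-mean and unit-variance across the batch is what stabilizes gradients during training.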
2 code implementations • CVPR 2022 • Chaoning Zhang, Kang Zhang, Trung X. Pham, Axi Niu, Zhinan Qiao, Chang D. Yoo, In So Kweon
Contrastive learning (CL) is widely known to require many negative samples, 65536 in MoCo for instance, for which the performance of a dictionary-free framework is often inferior because the negative sample size (NSS) is limited by its mini-batch size (MBS).
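The NSS/MBS coupling described here is visible in the InfoNCE loss itself: in a dictionary-free framework the negatives must come from the current mini-batch, so NSS is capped at MBS - 1, whereas MoCo's queue decouples the two. A minimal single-query sketch (temperature 0.2 is an illustrative choice):

```python
import numpy as np

def info_nce(query, positive, negatives, temperature=0.2):
    """InfoNCE loss for one query: softmax cross-entropy over one
    positive key and NSS negative keys (all l2-normalized)."""
    keys = np.vstack([positive] + list(negatives))
    q = query / np.linalg.norm(query)
    keys = keys / np.linalg.norm(keys, axis=1, keepdims=True)
    logits = keys @ q / temperature        # positive key is row 0
    logits -= logits.max()                 # numerical stability
    probs = np.exp(logits) / np.exp(logits).sum()
    return -np.log(probs[0])               # label = the positive key
```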
no code implementations • 30 Mar 2022 • Chaoning Zhang, Philipp Benz, Adil Karjauv, Jae Won Cho, Kang Zhang, In So Kweon
It is widely reported that stronger I-FGSM transfers worse than simple FGSM, leading to a popular belief that transferability is at odds with the white-box attack strength.
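The FGSM/I-FGSM contrast at issue can be sketched as follows; `grad_fn` stands in for the surrogate model's input-gradient function (an assumption for illustration):

```python
import numpy as np

def fgsm(x, grad, eps):
    """Single-step FGSM: one signed-gradient step of size eps."""
    return np.clip(x + eps * np.sign(grad), 0.0, 1.0)

def i_fgsm(x, grad_fn, eps, steps=10):
    """Iterative FGSM: many small steps, each projected back into the
    l_inf ball of radius eps around the clean input."""
    x_adv, alpha = x.copy(), eps / steps
    for _ in range(steps):
        x_adv = x_adv + alpha * np.sign(grad_fn(x_adv))
        x_adv = np.clip(x_adv, x - eps, x + eps)  # l_inf projection
        x_adv = np.clip(x_adv, 0.0, 1.0)          # valid pixel range
    return x_adv
```

I-FGSM is the stronger white-box attack because it follows the loss surface more closely, yet, as the entry notes, its examples are widely reported to transfer worse than single-step FGSM's.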
no code implementations • 30 Mar 2022 • Chaoning Zhang, Kang Zhang, Chenshuang Zhang, Trung X. Pham, Chang D. Yoo, In So Kweon
This yields a unified perspective on how negative samples and SimSiam alleviate collapse.
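For reference, the SimSiam objective mentioned here is a symmetrized negative cosine similarity with a stop-gradient on the target branch. In this numpy sketch the stop-gradient is only conceptual (numpy has no autograd; in a real framework `z` would be detached):

```python
import numpy as np

def neg_cosine(p, z):
    """Negative cosine similarity between a prediction p and a target
    projection z; in SimSiam, z is wrapped in stop-gradient so no
    gradient flows through the target branch."""
    p = p / np.linalg.norm(p)
    z = z / np.linalg.norm(z)      # z acts as a constant target
    return -float(p @ z)

def simsiam_loss(p1, z1, p2, z2):
    """Symmetrized loss over two augmented views."""
    return 0.5 * neg_cosine(p1, z2) + 0.5 * neg_cosine(p2, z1)
```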
no code implementations • 11 Feb 2022 • Axi Niu, Kang Zhang, Chaoning Zhang, Chenshuang Zhang, In So Kweon, Chang D. Yoo, Yanning Zhang
The former works only for a relatively small perturbation of 8/255 under the l_\infty constraint; GradAlign improves on it by extending the perturbation size to 16/255 (also under the l_\infty constraint), but at the cost of being 3 to 4 times slower.
no code implementations • CVPR 2022 • Chaoning Zhang, Philipp Benz, Adil Karjauv, Jae Won Cho, Kang Zhang, In So Kweon
It is widely reported that stronger I-FGSM transfers worse than simple FGSM, leading to a popular belief that transferability is at odds with the white-box attack strength.
no code implementations • ICLR 2022 • Chaoning Zhang, Kang Zhang, Chenshuang Zhang, Trung X. Pham, Chang D. Yoo, In So Kweon
Towards avoiding collapse in self-supervised learning (SSL), contrastive loss is widely used but often requires a large number of negative samples.
no code implementations • 29 Sep 2021 • Chaoning Zhang, Gyusang Cho, Philipp Benz, Kang Zhang, Chenshuang Zhang, Chan-Hyun Youn, In So Kweon
The transferability of adversarial examples (AE), known as adversarial transferability, has attracted significant attention because it can be exploited for Transferable Black-box Attacks (TBA).
no code implementations • 1 Mar 2021 • Yiming Qiu, Kang Zhang, Han Zhang, Songlin Wang, Sulong Xu, Yun Xiao, Bo Long, Wen-Yun Yang
Online A/B experiments show that it improves core e-commerce business metrics significantly.
no code implementations • 3 Jun 2020 • Han Zhang, Songlin Wang, Kang Zhang, Zhiling Tang, Yunjiang Jiang, Yun Xiao, Weipeng Yan, Wen-Yun Yang
Two critical challenges remain in today's e-commerce search: how to retrieve items that are semantically relevant but not exact matches to the query terms, and how to retrieve items that are more personalized to different users for the same search query.
no code implementations • 13 Sep 2017 • Jinhui Yu, Kailin Wu, Kang Zhang, Xianjun Sam Zheng
The colors of negative afterimages differ from the old stimulating colors in the original image when the color in the new area is either neutral or chromatic.
no code implementations • ICCV 2015 • Kang Zhang, Wuyi Yu, Mary Manhein, Warren Waggenspack, Xin Li
This paper studies matching of fragmented objects to recompose their original geometry.
15 code implementations • 6 Oct 2014 • Devavrat Shah, Kang Zhang
In this paper, we discuss the method of Bayesian regression and its efficacy for predicting price variation of Bitcoin, a recently popularized virtual, cryptographic currency.
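The empirical estimator behind this style of Bayesian regression predicts the price change as a similarity-weighted average over historical patterns. A minimal sketch in that spirit (the Gaussian kernel and its scale are illustrative assumptions, not the paper's exact parameters):

```python
import numpy as np

def predict_price_change(x, X_hist, y_hist, scale=4.0):
    """Predict the price change for the current pattern x as a
    Gaussian-kernel-weighted average of the outcomes y_i of similar
    historical patterns x_i."""
    d2 = np.sum((np.asarray(X_hist) - x) ** 2, axis=1)
    w = np.exp(-d2 / scale)
    return float(w @ np.asarray(y_hist) / w.sum())
```

Patterns close to the current one dominate the weighted average, so the prediction tracks the outcomes of the most similar historical windows.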
1 code implementation • CVPR 2014 • Kang Zhang, Yuqiang Fang, Dongbo Min, Lifeng Sun, Shiqiang Yang, Shuicheng Yan, Qi Tian
We first reformulate cost aggregation from a unified optimization perspective and show that different cost aggregation methods essentially differ in their choices of similarity kernels.
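The "choice of similarity kernel" view can be made concrete with a generic kernel-weighted aggregation sketch (not the paper's specific formulation): a uniform box kernel recovers plain window averaging, while edge-aware kernels such as bilateral or guided-filter weights are simply different choices of `kernel`.

```python
import numpy as np

def aggregate_cost(cost, kernel):
    """Aggregate one disparity slice of the cost volume with a
    normalized similarity kernel."""
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(cost, ((ph, ph), (pw, pw)), mode="edge")
    out = np.empty_like(cost, dtype=float)
    H, W = cost.shape
    for i in range(H):
        for j in range(W):
            patch = padded[i:i + kh, j:j + kw]
            out[i, j] = np.sum(kernel * patch) / kernel.sum()
    return out
```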
1 code implementation • 10 Feb 2014 • Kang Zhang, Jiyang Li, Yijing Li, Weidong Hu, Lifeng Sun, Shiqiang Yang
In this paper, we propose a novel binary-based cost computation and aggregation approach for the stereo matching problem.
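As a generic illustration of binary cost computation (the census transform; the paper's exact binary descriptor may differ), each pixel is encoded as a bit string of neighbor comparisons, and matching cost reduces to a cheap Hamming distance:

```python
import numpy as np

def census_transform(img):
    """Census transform: encode each pixel as an 8-bit string of
    comparisons against its 3x3 neighbors."""
    bits = []
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            if dy == 0 and dx == 0:
                continue
            shifted = np.roll(np.roll(img, dy, axis=0), dx, axis=1)
            bits.append((shifted < img).astype(np.uint8))
    return np.stack(bits, axis=-1)       # (H, W, 8) binary descriptor

def hamming_cost(c1, c2):
    """Per-pixel Hamming distance between two census descriptors."""
    return np.sum(c1 != c2, axis=-1)
```

Because the descriptor depends only on intensity orderings, it is robust to radiometric differences between the left and right views, which is one reason binary costs are popular in stereo matching.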