1 code implementation • 8 Mar 2023 • Yifei Wang, Qi Zhang, Tianqi Du, Jiansheng Yang, Zhouchen Lin, Yisen Wang
In recent years, contrastive learning has achieved impressive results on self-supervised visual representation learning, but a rigorous understanding of its learning dynamics is still lacking.
1 code implementation • 29 Jun 2022 • Qi Chen, Yifei Wang, Yisen Wang, Jiansheng Yang, Zhouchen Lin
Moreover, we show that the optimization-induced variants of our models can boost performance while also improving training stability and efficiency.
1 code implementation • 25 Mar 2022 • Yifei Wang, Qi Zhang, Yisen Wang, Jiansheng Yang, Zhouchen Lin
Our theory suggests an alternative understanding of contrastive learning: the role of aligning positive samples is more like a surrogate task than an ultimate goal, and the overlapping augmented views (i.e., the chaos) create a ladder for contrastive learning to gradually learn class-separated representations.
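To make the positive-sample alignment mentioned above concrete, here is a minimal sketch of a standard InfoNCE-style contrastive loss in PyTorch; the encoder, augmentations, and temperature are generic placeholders rather than this paper's exact configuration:

```python
import torch
import torch.nn.functional as F

def info_nce_loss(z1, z2, temperature=0.5):
    """Contrastive loss that aligns two augmented views of the same image
    (positives) while repelling all other samples in the batch (negatives)."""
    z1 = F.normalize(z1, dim=1)            # (N, d) embeddings of view 1
    z2 = F.normalize(z2, dim=1)            # (N, d) embeddings of view 2
    logits = z1 @ z2.t() / temperature     # (N, N) pairwise similarities
    labels = torch.arange(z1.size(0), device=z1.device)  # positives lie on the diagonal
    return F.cross_entropy(logits, labels)

# Usage (hypothetical encoder/augmentation): z1, z2 = encoder(aug(x)), encoder(aug(x))
# loss = info_nce_loss(z1, z2)
```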
no code implementations • ICLR 2022 • Yifei Wang, Yisen Wang, Jiansheng Yang, Zhouchen Lin
On the other hand, our unified framework can be extended to the unsupervised scenario, which interprets unsupervised contrastive learning as importance sampling of the CEM.
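For readers unfamiliar with the technique, the following is a generic self-normalized importance-sampling estimator for an energy-based model, shown only to illustrate the general idea; it is not the paper's specific CEM formulation, and `energy_fn`, `proposal_sample`, and `proposal_log_prob` are assumed placeholders:

```python
import torch

def importance_weighted_expectation(energy_fn, proposal_sample, proposal_log_prob, f, n=10000):
    """Estimate E_p[f(x)] for p(x) proportional to exp(-E(x)) using samples
    from a tractable proposal q, via self-normalized importance weights."""
    x = proposal_sample(n)                           # x_i ~ q
    log_w = -energy_fn(x) - proposal_log_prob(x)     # log of unnormalized weights p~(x)/q(x)
    w = torch.softmax(log_w, dim=0)                  # self-normalized weights
    return (w * f(x)).sum()
```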
no code implementations • NeurIPS 2021 • Yifei Wang, Zhengyang Geng, Feng Jiang, Chuming Li, Yisen Wang, Jiansheng Yang, Zhouchen Lin
Multi-view methods learn representations by aligning multiple views of the same image, and their performance largely depends on the choice of data augmentation.
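Since the choice of data augmentation is central here, the sketch below shows a common SimCLR-style two-view augmentation pipeline; the specific transforms and parameters are typical defaults and are not taken from this paper:

```python
from torchvision import transforms

# A typical two-view augmentation recipe used by multi-view / contrastive methods.
augment = transforms.Compose([
    transforms.RandomResizedCrop(224),
    transforms.RandomHorizontalFlip(),
    transforms.ColorJitter(0.4, 0.4, 0.4, 0.1),
    transforms.RandomGrayscale(p=0.2),
    transforms.ToTensor(),
])

def two_views(img):
    """Two independent random augmentations of the same image produce the
    'views' that multi-view methods align."""
    return augment(img), augment(img)
```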
no code implementations • ICLR 2022 • Yifei Wang, Qi Zhang, Yisen Wang, Jiansheng Yang, Zhouchen Lin
Our work suggests an alternative understanding of contrastive learning: the role of aligning positive samples is more like a surrogate task than an ultimate goal, and it is the overlapping augmented views (i.e., the chaos) that create a ladder for contrastive learning to gradually learn class-separated representations.
1 code implementation • 1 Jul 2021 • Yifei Wang, Yisen Wang, Jiansheng Yang, Zhouchen Lin
Recently, sampling methods have been successfully applied to enhance the sample quality of Generative Adversarial Networks (GANs).
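As a rough illustration of how discriminator-guided sampling can improve GAN sample quality, here is a crude filtering scheme that over-samples the generator and keeps the samples the discriminator scores highest; it is a simplified stand-in for rejection/MCMC sampling methods in general, not this paper's algorithm, and `generator`/`discriminator` are assumed to be pretrained models:

```python
import torch

@torch.no_grad()
def discriminator_filtered_samples(generator, discriminator, n, latent_dim, keep_ratio=0.5):
    """Draw a surplus of GAN samples and keep the n that the discriminator
    rates as most realistic (higher score = more 'real')."""
    z = torch.randn(int(n / keep_ratio), latent_dim)
    x = generator(z)
    scores = discriminator(x).flatten()
    top = scores.topk(n).indices
    return x[top]
```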
no code implementations • ICML Workshop AML 2021 • Yifei Wang, Yisen Wang, Jiansheng Yang, Zhouchen Lin
Based on these insights, we propose principled adversarial sampling algorithms in both supervised and unsupervised scenarios.
1 code implementation • NeurIPS 2021 • Yifei Wang, Yisen Wang, Jiansheng Yang, Zhouchen Lin
Graph Convolutional Networks (GCNs) have attracted increasing attention in recent years.
no code implementations • 1 Jan 2021 • Yifei Wang, Yisen Wang, Jiansheng Yang, Zhouchen Lin
Recently, sampling methods have been successfully applied to enhance the sample quality of Generative Adversarial Networks (GANs).
no code implementations • 2 Jul 2020 • Yifei Wang, Dan Peng, Furui Liu, Zhenguo Li, Zhitang Chen, Jiansheng Yang
Adversarial Training (AT) was proposed to alleviate the adversarial vulnerability of machine learning models by extracting only robust features from the input; however, this inevitably causes a severe drop in accuracy, since the non-robust yet useful features are discarded.
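For context, below is a minimal sketch of the standard PGD-based adversarial training recipe (Madry et al.) that this line of work builds on; it illustrates the generic inner maximization only and is not the specific method proposed in this paper:

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=10):
    """Standard PGD inner maximization: find a perturbation inside an
    L-infinity ball of radius eps that maximizes the classification loss."""
    x_adv = x + torch.empty_like(x).uniform_(-eps, eps)
    for _ in range(steps):
        x_adv = x_adv.detach().requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv + alpha * grad.sign()
        x_adv = x.detach() + (x_adv - x).clamp(-eps, eps)  # project back into the eps-ball
        x_adv = x_adv.clamp(0, 1)                          # keep valid pixel range
    return x_adv.detach()

# Adversarial training step: train on worst-case examples instead of clean ones
# loss = F.cross_entropy(model(pgd_attack(model, x, y)), y)
```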