Search Results for author: Jiansheng Yang

Found 10 papers, 4 papers with code

A Message Passing Perspective on Learning Dynamics of Contrastive Learning

1 code implementation • 8 Mar 2023 • Yifei Wang, Qi Zhang, Tianqi Du, Jiansheng Yang, Zhouchen Lin

In recent years, contrastive learning has achieved impressive results on self-supervised visual representation learning, but a rigorous understanding of its learning dynamics is still lacking.

Contrastive Learning · Graph Attention +1
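
As background for the dynamics being analyzed, here is a minimal NumPy sketch of the standard InfoNCE contrastive objective; the loss form is the textbook one, while the function name and toy data are illustrative only.

```python
import numpy as np

def info_nce(z1, z2, tau=0.5):
    """Standard InfoNCE loss over a batch of paired views.

    z1, z2: (batch, dim) L2-normalized embeddings of two augmented views.
    Row i of z1 and row i of z2 form a positive pair; the other rows of
    z2 serve as negatives.
    """
    logits = z1 @ z2.T / tau                          # pairwise similarities
    logits -= logits.max(axis=1, keepdims=True)       # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))                # -log p(positive pair)

# Toy usage with random unit embeddings
rng = np.random.default_rng(0)
z1 = rng.normal(size=(8, 16))
z1 /= np.linalg.norm(z1, axis=1, keepdims=True)
z2 = z1 + 0.05 * rng.normal(size=z1.shape)
z2 /= np.linalg.norm(z2, axis=1, keepdims=True)
print(info_nce(z1, z2))
```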

Optimization-Induced Graph Implicit Nonlinear Diffusion

1 code implementation • 29 Jun 2022 • Qi Chen, Yifei Wang, Yisen Wang, Jiansheng Yang, Zhouchen Lin

Moreover, we show that the optimization-induced variants of our models boost performance and also improve training stability and efficiency.
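
Implicit graph models of this kind define node representations as the equilibrium of a diffusion map rather than the output of a fixed layer stack. Below is a minimal sketch of such a fixed-point solve; the layer form Z = tanh(A_hat Z W) + X is a generic implicit-GNN choice, not the paper's specific nonlinear diffusion.

```python
import numpy as np

def implicit_diffusion(A_hat, X, W, max_iters=100, tol=1e-6):
    """Naive fixed-point solve of an implicit graph layer.

    A_hat: (n, n) normalized adjacency matrix.
    X:     (n, d) input node features.
    W:     (d, d) weight matrix (a spectral norm below 1 helps the
           iteration behave as a contraction and converge).
    Iterates Z <- tanh(A_hat @ Z @ W) + X until the update stalls.
    """
    Z = X.copy()
    for _ in range(max_iters):
        Z_next = np.tanh(A_hat @ Z @ W) + X
        if np.linalg.norm(Z_next - Z) < tol:
            break
        Z = Z_next
    return Z
```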

Chaos is a Ladder: A New Theoretical Understanding of Contrastive Learning via Augmentation Overlap

1 code implementation • 25 Mar 2022 • Yifei Wang, Qi Zhang, Yisen Wang, Jiansheng Yang, Zhouchen Lin

Our theory suggests an alternative understanding of contrastive learning: the role of aligning positive samples is more like a surrogate task than an ultimate goal, and the overlapping augmented views (i.e., the chaos) create a ladder for contrastive learning to gradually learn class-separated representations.

Contrastive Learning · Model Selection +1
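
The "ladder" intuition can be pictured on a toy augmentation graph: when a view is producible from two different natural images, the overlap chains their views together. The construction below is a made-up illustration of that idea, not an experiment from the paper.

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import connected_components

# Nodes are augmented views; an edge joins two views that can come from
# the same natural image. Views 0-2 come from image A and views 2-4 from
# image B; view 2 is producible from both, so the overlap chains A's and
# B's views into a single component.
edges = [(0, 2), (1, 2), (2, 3), (2, 4)]
rows, cols = zip(*edges)
adj = csr_matrix((np.ones(len(edges)), (rows, cols)), shape=(5, 5))
n_components, labels = connected_components(adj, directed=False)
print(n_components, labels)  # -> 1 [0 0 0 0 0]
```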

A Unified Contrastive Energy-based Model for Understanding the Generative Ability of Adversarial Training

no code implementations • ICLR 2022 • Yifei Wang, Yisen Wang, Jiansheng Yang, Zhouchen Lin

On the other hand, our unified framework can be extended to the unsupervised scenario, which interprets unsupervised contrastive learning as importance sampling of CEM.

Contrastive Learning
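
The importance-sampling interpretation rests on the standard identity below, where p stands for the energy-based model's distribution and q for a proposal distribution; the symbols are generic textbook notation, not the paper's.

```latex
\mathbb{E}_{x \sim p}\left[ f(x) \right]
  = \mathbb{E}_{x \sim q}\left[ \frac{p(x)}{q(x)}\, f(x) \right]
```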

Residual Relaxation for Multi-view Representation Learning

no code implementations • NeurIPS 2021 • Yifei Wang, Zhengyang Geng, Feng Jiang, Chuming Li, Yisen Wang, Jiansheng Yang, Zhouchen Lin

Multi-view methods learn representations by aligning multiple views of the same image, and their performance largely depends on the choice of data augmentation.

Data Augmentation · Representation Learning
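
The alignment term that multi-view methods build on is simply a distance between paired embeddings; a residual relaxation, as the title suggests, loosens this constraint. The sketch below shows only the strict baseline, with an illustrative function name.

```python
import numpy as np

def alignment_loss(z1, z2):
    """Mean squared distance between embeddings of two views of an image.

    Strict alignment pushes both views toward identical representations;
    relaxed variants penalize only part of the residual z1 - z2.
    """
    return np.mean(np.sum((z1 - z2) ** 2, axis=1))
```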

Chaos is a Ladder: A New Understanding of Contrastive Learning

no code implementations • ICLR 2022 • Yifei Wang, Qi Zhang, Yisen Wang, Jiansheng Yang, Zhouchen Lin

Our work suggests an alternative understanding of contrastive learning: the role of aligning positive samples is more like a surrogate task than an ultimate goal, and it is the overlapping augmented views (i.e., the chaos) that create a ladder for contrastive learning to gradually learn class-separated representations.

Contrastive Learning · Self-Supervised Learning

Reparameterized Sampling for Generative Adversarial Networks

1 code implementation • 1 Jul 2021 • Yifei Wang, Yisen Wang, Jiansheng Yang, Zhouchen Lin

Recently, sampling methods have been successfully applied to enhance the sample quality of Generative Adversarial Networks (GANs).
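
One well-known member of this family of sampling methods is discriminator rejection sampling, sketched below; it only illustrates the general idea of post-hoc GAN sampling and is not this paper's reparameterized sampler. The generator/discriminator callables and the bound m are assumptions of the sketch.

```python
import numpy as np

def rejection_sample(generator, discriminator, latent_dim, n_samples,
                     rng, m=10.0):
    """Discriminator-guided rejection sampling from a trained GAN.

    generator:     maps a latent vector z to a sample x (assumed callable).
    discriminator: maps x to a real-valued logit d(x) (assumed callable).
    Accepts x with probability min(1, exp(d(x)) / m), so samples the
    discriminator rates as more realistic are kept more often.
    """
    accepted = []
    while len(accepted) < n_samples:
        z = rng.normal(size=latent_dim)
        x = generator(z)
        if rng.uniform() < min(1.0, np.exp(discriminator(x)) / m):
            accepted.append(x)
    return np.stack(accepted)
```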

Demystifying Adversarial Training via A Unified Probabilistic Framework

no code implementations • ICML Workshop AML 2021 • Yifei Wang, Yisen Wang, Jiansheng Yang, Zhouchen Lin

Based on these, we propose principled adversarial sampling algorithms in both supervised and unsupervised scenarios.
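
The best-known instance of adversarial sampling in the supervised setting is the PGD attack, sketched below as a reference point; the paper's principled algorithms are derived from its probabilistic framework and are not reproduced here. grad_fn is an assumed callable returning the loss gradient with respect to the input.

```python
import numpy as np

def pgd_attack(grad_fn, x, epsilon=0.03, step=0.01, n_steps=10):
    """Projected gradient ascent on the loss inside an L_inf ball.

    grad_fn(x): gradient of the classification loss w.r.t. the input x
                (assumed to be supplied by the surrounding framework).
    """
    x_adv = x.copy()
    for _ in range(n_steps):
        x_adv = x_adv + step * np.sign(grad_fn(x_adv))    # ascent step
        x_adv = np.clip(x_adv, x - epsilon, x + epsilon)  # project to ball
    return x_adv
```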

Efficient Sampling for Generative Adversarial Networks with Coupling Markov Chains

no code implementations • 1 Jan 2021 • Yifei Wang, Yisen Wang, Jiansheng Yang, Zhouchen Lin

Recently, sampling methods have been successfully applied to enhance the sample quality of Generative Adversarial Networks (GANs).

Decoder-free Robustness Disentanglement without (Additional) Supervision

no code implementations • 2 Jul 2020 • Yifei Wang, Dan Peng, Furui Liu, Zhenguo Li, Zhitang Chen, Jiansheng Yang

Adversarial Training (AT) was proposed to alleviate the adversarial vulnerability of machine learning models by extracting only robust features from the input; however, this inevitably leads to a severe drop in accuracy, as it discards features that are non-robust yet useful.

BIG-bench Machine Learning · Disentanglement
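
For reference, the adversarial training objective under discussion is the standard min-max problem (textbook notation, not the paper's):

```latex
\min_{\theta} \; \mathbb{E}_{(x, y) \sim \mathcal{D}}
  \left[ \max_{\|\delta\|_\infty \le \epsilon}
         \mathcal{L}\big( f_\theta(x + \delta),\, y \big) \right]
```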
