Search Results for author: Yifei Wang

Found 71 papers, 31 papers with code

How to Craft Backdoors with Unlabeled Data Alone?

no code implementations • 10 Apr 2024 • Yifei Wang, Wenhan Ma, Yisen Wang

Relying only on unlabeled data, self-supervised learning (SSL) can learn rich features in an economical and scalable way.

Backdoor Attack • Self-Supervised Learning

SM2C: Boost the Semi-supervised Segmentation for Medical Image by using Meta Pseudo Labels and Mixed Images

no code implementations • 24 Mar 2024 • Yifei Wang, Chuhong Zhu

The method uses three strategies (scaling up image size, multi-class mixing, and object shape jittering) to improve its ability to learn semantic features in medical images.

Image Segmentation • Medical Image Segmentation • +2

Do Generated Data Always Help Contrastive Learning?

1 code implementation • 19 Mar 2024 • Yifei Wang, Jizhe Zhang, Yisen Wang

Contrastive Learning (CL) has emerged as one of the most successful paradigms for unsupervised visual representation learning, yet it often depends on intensive manual data augmentations.

Contrastive Learning • Data Augmentation • +2

Non-negative Contrastive Learning

1 code implementation • 19 Mar 2024 • Yifei Wang, Qi Zhang, Yaoyu Guo, Yisen Wang

In this paper, we propose Non-negative Contrastive Learning (NCL), a renaissance of Non-negative Matrix Factorization (NMF) aimed at deriving interpretable features.

Contrastive Learning • Disentanglement • +1
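The core mechanism described above (constraining contrastive representations to be non-negative, in the spirit of NMF) can be illustrated with a minimal PyTorch sketch. This is a hypothetical rendering, not the paper's reference implementation; the encoder and loss details are assumptions:

```python
import torch
import torch.nn.functional as F

def non_negative_features(encoder, x):
    # Hypothetical sketch: pass the encoder output through a ReLU so
    # every feature dimension is non-negative, as in NMF.
    return F.relu(encoder(x))

def info_nce(z1, z2, temperature=0.5):
    # Standard InfoNCE loss over two augmented views of the same batch.
    z1 = F.normalize(z1, dim=1)
    z2 = F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / temperature
    labels = torch.arange(z1.size(0), device=z1.device)
    return F.cross_entropy(logits, labels)
```

Non-negativity forces each dimension to act as an additive part of the representation, which is how NMF-style models obtain interpretable, parts-based features.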

A Library of Mirrors: Deep Neural Nets in Low Dimensions are Convex Lasso Models with Reflection Features

no code implementations • 2 Mar 2024 • Emi Zeger, Yifei Wang, Aaron Mishkin, Tolga Ergen, Emmanuel Candès, Mert Pilanci

We prove that training neural networks on 1-D data is equivalent to solving a convex Lasso problem with a fixed, explicitly defined dictionary matrix of features.
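Schematically (our notation, not necessarily the paper's), the claimed equivalence is to a standard Lasso problem over a fixed feature dictionary:

```latex
\min_{w}\; \tfrac{1}{2}\,\lVert A w - y \rVert_2^2 \;+\; \lambda\,\lVert w \rVert_1
```

where the columns of $A$ are the fixed, explicitly defined (reflection) features built from the 1-D training inputs, $y$ are the labels, and $\lambda$ is the regularization strength.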

On the Duality Between Sharpness-Aware Minimization and Adversarial Training

1 code implementation • 23 Feb 2024 • Yihao Zhang, Hangzhou He, Jingyu Zhu, Huanran Chen, Yifei Wang, Zeming Wei

Instead of perturbing the samples, Sharpness-Aware Minimization (SAM) perturbs the model weights during training to find a flatter loss landscape and improve generalization.

Adversarial Robustness
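The weight-perturbation step that distinguishes SAM from standard training is easy to sketch. Below is a minimal PyTorch rendering of one SAM step (an illustration under assumptions, not the authors' code; `rho` is the ascent radius):

```python
import torch

def sam_step(model, loss_fn, x, y, optimizer, rho=0.05):
    # 1) Gradient at the current weights.
    loss_fn(model(x), y).backward()
    grad_norm = torch.norm(torch.stack(
        [p.grad.norm() for p in model.parameters() if p.grad is not None]))
    # 2) Ascent: perturb weights toward higher loss within radius rho.
    eps = []
    with torch.no_grad():
        for p in model.parameters():
            if p.grad is None:
                eps.append(None)
                continue
            e = rho * p.grad / (grad_norm + 1e-12)
            p.add_(e)
            eps.append(e)
    optimizer.zero_grad()
    # 3) Gradient at the perturbed weights (the sharpness-aware gradient).
    loss_fn(model(x), y).backward()
    # 4) Undo the perturbation, then update with the base optimizer.
    with torch.no_grad():
        for p, e in zip(model.parameters(), eps):
            if e is not None:
                p.sub_(e)
    optimizer.step()
    optimizer.zero_grad()
```

The duality studied in the paper comes from comparing this weight-space ascent with adversarial training's input-space ascent.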

Federated learning-outcome prediction with multi-layer privacy protection

no code implementations • 25 Dec 2023 • Yupei Zhang, Yuxin Li, Yifei Wang, Shuangshuang Wei, Yunan Xu, Xuequn Shang

To this end, this study proposes FecMap, a distributed grade prediction model built on the federated learning (FL) framework, which preserves the private data of local clients and communicates with other clients through a global generalized model.

Federated Learning

Erasing Self-Supervised Learning Backdoor by Cluster Activation Masking

1 code implementation • 13 Dec 2023 • Shengsheng Qian, Yifei Wang, Dizhan Xue, Shengjie Zhang, Huaiwen Zhang, Changsheng Xu

After obtaining the threat model trained on the poisoned dataset, our method can precisely detect poisonous samples based on the assumption that masking the backdoor trigger can effectively change the activation of a downstream clustering model.

backdoor defense • Self-Supervised Learning

STF: Spatial Temporal Fusion for Trajectory Prediction

1 code implementation • 29 Nov 2023 • Pengqian Han, Partha Roop, Jiamou Liu, Tianzhe Bao, Yifei Wang

The main reason is that trajectories are complex data containing both spatial and temporal information, which is crucial for accurate prediction.

Graph Attention • Trajectory Prediction

Polynomial-Time Solutions for ReLU Network Training: A Complexity Classification via Max-Cut and Zonotopes

no code implementations • 18 Nov 2023 • Yifei Wang, Mert Pilanci

Using this convex formulation, we prove that the hardness of approximation of ReLU networks not only mirrors the complexity of the Max-Cut problem but also, in certain special cases, exactly corresponds to it.

Asymmetric Contrastive Multimodal Learning for Advancing Chemical Understanding

no code implementations • 11 Nov 2023 • Hao Xu, Yifei Wang, Yunrui Li, Pengyu Hong

Through practical tasks such as isomer discrimination and uncovering crucial chemical properties for drug discovery, ACML exhibits its capability to revolutionize chemical research and applications, providing a deeper understanding of chemical semantics of different modalities.

Contrastive Learning • Drug Discovery • +2

Adversarial Examples Are Not Real Features

1 code implementation • NeurIPS 2023 • Ang Li, Yifei Wang, Yiwen Guo, Yisen Wang

A well-known theory by Ilyas et al. (2019) explains adversarial vulnerability from a data perspective by showing that one can extract non-robust features from adversarial examples, and that these features alone are useful for classification.

Contrastive Learning • Self-Supervised Learning

Laplacian Canonization: A Minimalist Approach to Sign and Basis Invariant Spectral Embedding

3 code implementations • NeurIPS 2023 • Jiangyan Ma, Yifei Wang, Yisen Wang

However, from a theoretical perspective, the universal expressive power of spectral embedding comes at the price of losing two important invariance properties of graphs, sign and basis invariance, which also limits its effectiveness on graph data.

Graph Classification • Graph Embedding • +1
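The sign ambiguity is simple to state: if v is a Laplacian eigenvector, so is -v, so the same graph admits many spectral embeddings. A minimal sketch of one common sign-canonization rule follows (illustrative only; the paper's canonization algorithm also addresses basis ambiguity among repeated eigenvalues):

```python
import numpy as np

def canonicalize_signs(eigvecs):
    # eigvecs: (n, k) matrix whose columns are Laplacian eigenvectors.
    # Flip each column's sign so its largest-magnitude entry is positive;
    # v and -v then map to the same canonical embedding.
    idx = np.argmax(np.abs(eigvecs), axis=0)
    signs = np.sign(eigvecs[idx, np.arange(eigvecs.shape[1])])
    signs[signs == 0] = 1.0
    return eigvecs * signs
```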

Natural Language Interfaces for Tabular Data Querying and Visualization: A Survey

no code implementations • 27 Oct 2023 • Weixu Zhang, Yifei Wang, Yuanfeng Song, Victor Junqiu Wei, Yuxing Tian, Yiyan Qi, Jonathan H. Chan, Raymond Chi-Wing Wong, Haiqin Yang

This survey presents a comprehensive overview of natural language interfaces for tabular data querying and visualization, which allow users to interact with data using natural language queries.

Data Interaction • Data Visualization • +3

CSG: Curriculum Representation Learning for Signed Graph

no code implementations • 17 Oct 2023 • Zeyu Zhang, Jiamou Liu, Kaiqi Zhao, Yifei Wang, Pengqian Han, Xianda Zheng, Qiqi Wang, Zijian Zhang

Signed graphs are valuable for modeling complex relationships with positive and negative connections, and Signed Graph Neural Networks (SGNNs) have become crucial tools for their analysis.

Link Sign Prediction • Representation Learning

Jailbreak and Guard Aligned Language Models with Only Few In-Context Demonstrations

no code implementations • 10 Oct 2023 • Zeming Wei, Yifei Wang, Yisen Wang

Large Language Models (LLMs) have shown remarkable success in various tasks, but concerns about their safety and the potential for generating malicious content have emerged.

In-Context Learning • Language Modelling

Robust Long-Tailed Learning via Label-Aware Bounded CVaR

no code implementations • 29 Aug 2023 • Hong Zhu, Runpeng Yu, Xing Tang, Yifei Wang, Yuan Fang, Yisen Wang

Data in real-world classification problems are almost always imbalanced or long-tailed: the majority classes contain most of the samples and dominate model training.
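For reference, the Conditional Value-at-Risk objective the title builds on has the standard Rockafellar-Uryasev variational form (the paper's label-aware, bounded variant modifies this):

```latex
\mathrm{CVaR}_{\alpha}(\ell) \;=\; \min_{\lambda \in \mathbb{R}} \left\{ \lambda + \tfrac{1}{\alpha}\,\mathbb{E}\big[(\ell - \lambda)_{+}\big] \right\}
```

i.e., the expected loss over the worst $\alpha$-fraction of samples, which naturally emphasizes tail (minority-class) performance.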

Rethinking Weak Supervision in Helping Contrastive Learning

no code implementations • 7 Jun 2023 • Jingyi Cui, Weiran Huang, Yifei Wang, Yisen Wang

Therefore, to explore the mechanical differences between semi-supervised and noisy-labeled information in helping contrastive learning, we establish a unified theoretical framework of contrastive learning under weak supervision.

Contrastive Learning • Denoising • +1

On the Generalization of Multi-modal Contrastive Learning

1 code implementation • 7 Jun 2023 • Qi Zhang, Yifei Wang, Yisen Wang

Multi-modal contrastive learning (MMCL) has recently garnered considerable interest due to its superior performance in visual tasks, achieved by embedding multi-modal data, such as visual-language pairs.

Contrastive Learning

Counterpart Fairness -- Addressing Systematic between-group Differences in Fairness Evaluation

1 code implementation • 29 May 2023 • Yifei Wang, Zhengyang Zhou, Liqin Wang, John Laurentiev, Peter Hou, Li Zhou, Pengyu Hong

The confounding factors, which are non-sensitive variables but manifest systematic differences, can significantly affect fairness evaluation.

Decision Making • Fairness

Contrastive Label Enhancement

no code implementations • 16 May 2023 • Yifei Wang, Yiyang Zhou, Jihua Zhu, Xinyuan Liu, Wenbiao Yan, Zhiqiang Tian

Label distribution learning (LDL) is a new machine learning paradigm for solving label ambiguity.

Contrastive Learning

CFA: Class-wise Calibrated Fair Adversarial Training

1 code implementation • CVPR 2023 • Zeming Wei, Yifei Wang, Yiwen Guo, Yisen Wang

Adversarial training has been widely acknowledged as the most effective method to improve the adversarial robustness against adversarial examples for Deep Neural Networks (DNNs).

Adversarial Robustness • Fairness

ContraNorm: A Contrastive Learning Perspective on Oversmoothing and Beyond

2 code implementations • 12 Mar 2023 • Xiaojun Guo, Yifei Wang, Tianqi Du, Yisen Wang

Instead of characterizing oversmoothing from the view of complete collapse in which representations converge to a single point, we dive into a more general perspective of dimensional collapse in which representations lie in a narrow cone.

Contrastive Learning

A Message Passing Perspective on Learning Dynamics of Contrastive Learning

1 code implementation • 8 Mar 2023 • Yifei Wang, Qi Zhang, Tianqi Du, Jiansheng Yang, Zhouchen Lin, Yisen Wang

In recent years, contrastive learning has achieved impressive results on self-supervised visual representation learning, but a rigorous understanding of its learning dynamics is still lacking.

Contrastive Learning • Graph Attention • +1

Towards a Unified Theoretical Understanding of Non-contrastive Learning via Rank Differential Mechanism

1 code implementation • 4 Mar 2023 • Zhijian Zhuo, Yifei Wang, Jinwen Ma, Yisen Wang

In this work, we propose a unified theoretical understanding for existing variants of non-contrastive learning.

Contrastive Learning

Rethinking the Effect of Data Augmentation in Adversarial Contrastive Learning

1 code implementation • 2 Mar 2023 • Rundong Luo, Yifei Wang, Yisen Wang

Motivated by this observation, we revisit existing self-AT methods and discover an inherent dilemma that affects self-AT robustness: both overly strong and overly weak data augmentations are harmful to self-AT, and a medium strength is insufficient to bridge the gap.

Contrastive Learning • Data Augmentation • +1

Multi-view Semantic Consistency based Information Bottleneck for Clustering

no code implementations • 28 Feb 2023 • Wenbiao Yan, Jihua Zhu, Yiyang Zhou, Yifei Wang, Qinghai Zheng

In this way, the learned semantic consistency from multi-view data can improve the information bottleneck to more exactly distinguish the consistent information and learn a unified feature representation with more discriminative consistent information for clustering.

Clustering

MCoCo: Multi-level Consistency Collaborative Multi-view Clustering

no code implementations • 26 Feb 2023 • Yiyang Zhou, Qinghai Zheng, Wenbiao Yan, Yifei Wang, Pengcheng Shi, Jihua Zhu

Further, we designed a multi-level consistency collaboration strategy, which utilizes the consistent information of semantic space as a self-supervised signal to collaborate with the cluster assignments in feature space.

Clustering • Contrastive Learning • +2

USER: Unsupervised Structural Entropy-based Robust Graph Neural Network

1 code implementation • 12 Feb 2023 • Yifei Wang, Yupan Wang, Zeyu Zhang, Song Yang, Kaiqi Zhao, Jiamou Liu

To this end, we propose USER, an unsupervised robust version of graph neural networks that is based on structural entropy.

Link Prediction • Node Clustering

On the Connection between Invariant Learning and Adversarial Training for Out-of-Distribution Generalization

no code implementations • 18 Dec 2022 • Shiji Xin, Yifei Wang, Jingtong Su, Yisen Wang

Extensive experiments show that our proposed DAT can effectively remove domain-varying features and improve OOD generalization under both correlation shift and diversity shift.

Out-of-Distribution Generalization

Biomedical image analysis competitions: The state of current participation practice

no code implementations16 Dec 2022 Matthias Eisenmann, Annika Reinke, Vivienn Weru, Minu Dietlinde Tizabi, Fabian Isensee, Tim J. Adler, Patrick Godau, Veronika Cheplygina, Michal Kozubek, Sharib Ali, Anubha Gupta, Jan Kybic, Alison Noble, Carlos Ortiz de Solórzano, Samiksha Pachade, Caroline Petitjean, Daniel Sage, Donglai Wei, Elizabeth Wilden, Deepak Alapatt, Vincent Andrearczyk, Ujjwal Baid, Spyridon Bakas, Niranjan Balu, Sophia Bano, Vivek Singh Bawa, Jorge Bernal, Sebastian Bodenstedt, Alessandro Casella, Jinwook Choi, Olivier Commowick, Marie Daum, Adrien Depeursinge, Reuben Dorent, Jan Egger, Hannah Eichhorn, Sandy Engelhardt, Melanie Ganz, Gabriel Girard, Lasse Hansen, Mattias Heinrich, Nicholas Heller, Alessa Hering, Arnaud Huaulmé, Hyunjeong Kim, Bennett Landman, Hongwei Bran Li, Jianning Li, Jun Ma, Anne Martel, Carlos Martín-Isla, Bjoern Menze, Chinedu Innocent Nwoye, Valentin Oreiller, Nicolas Padoy, Sarthak Pati, Kelly Payette, Carole Sudre, Kimberlin Van Wijnen, Armine Vardazaryan, Tom Vercauteren, Martin Wagner, Chuanbo Wang, Moi Hoon Yap, Zeyun Yu, Chun Yuan, Maximilian Zenk, Aneeq Zia, David Zimmerer, Rina Bao, Chanyeol Choi, Andrew Cohen, Oleh Dzyubachyk, Adrian Galdran, Tianyuan Gan, Tianqi Guo, Pradyumna Gupta, Mahmood Haithami, Edward Ho, Ikbeom Jang, Zhili Li, Zhengbo Luo, Filip Lux, Sokratis Makrogiannis, Dominik Müller, Young-tack Oh, Subeen Pang, Constantin Pape, Gorkem Polat, Charlotte Rosalie Reed, Kanghyun Ryu, Tim Scherr, Vajira Thambawita, Haoyu Wang, Xinliang Wang, Kele Xu, Hung Yeh, Doyeob Yeo, Yixuan Yuan, Yan Zeng, Xin Zhao, Julian Abbing, Jannes Adam, Nagesh Adluru, Niklas Agethen, Salman Ahmed, Yasmina Al Khalil, Mireia Alenyà, Esa Alhoniemi, Chengyang An, Talha Anwar, Tewodros Weldebirhan Arega, Netanell Avisdris, Dogu Baran Aydogan, Yingbin Bai, Maria Baldeon Calisto, Berke Doga Basaran, Marcel Beetz, Cheng Bian, Hao Bian, Kevin Blansit, Louise Bloch, Robert Bohnsack, Sara Bosticardo, Jack Breen, Mikael Brudfors, Raphael Brüngel, Mariano Cabezas, Alberto Cacciola, Zhiwei Chen, Yucong Chen, Daniel Tianming Chen, Minjeong Cho, Min-Kook Choi, Chuantao Xie Chuantao Xie, Dana Cobzas, Julien Cohen-Adad, Jorge Corral Acero, Sujit Kumar Das, Marcela de Oliveira, Hanqiu Deng, Guiming Dong, Lars Doorenbos, Cory Efird, Sergio Escalera, Di Fan, Mehdi Fatan Serj, Alexandre Fenneteau, Lucas Fidon, Patryk Filipiak, René Finzel, Nuno R. Freitas, Christoph M. Friedrich, Mitchell Fulton, Finn Gaida, Francesco Galati, Christoforos Galazis, Chang Hee Gan, Zheyao Gao, Shengbo Gao, Matej Gazda, Beerend Gerats, Neil Getty, Adam Gibicar, Ryan Gifford, Sajan Gohil, Maria Grammatikopoulou, Daniel Grzech, Orhun Güley, Timo Günnemann, Chunxu Guo, Sylvain Guy, Heonjin Ha, Luyi Han, Il Song Han, Ali Hatamizadeh, Tian He, Jimin Heo, Sebastian Hitziger, SeulGi Hong, Seungbum Hong, Rian Huang, Ziyan Huang, Markus Huellebrand, Stephan Huschauer, Mustaffa Hussain, Tomoo Inubushi, Ece Isik Polat, Mojtaba Jafaritadi, SeongHun Jeong, Bailiang Jian, Yuanhong Jiang, Zhifan Jiang, Yueming Jin, Smriti Joshi, Abdolrahim Kadkhodamohammadi, Reda Abdellah Kamraoui, Inha Kang, Junghwa Kang, Davood Karimi, April Khademi, Muhammad Irfan Khan, Suleiman A. 
Khan, Rishab Khantwal, Kwang-Ju Kim, Timothy Kline, Satoshi Kondo, Elina Kontio, Adrian Krenzer, Artem Kroviakov, Hugo Kuijf, Satyadwyoom Kumar, Francesco La Rosa, Abhi Lad, Doohee Lee, Minho Lee, Chiara Lena, Hao Li, Ling Li, Xingyu Li, Fuyuan Liao, Kuanlun Liao, Arlindo Limede Oliveira, Chaonan Lin, Shan Lin, Akis Linardos, Marius George Linguraru, Han Liu, Tao Liu, Di Liu, Yanling Liu, João Lourenço-Silva, Jingpei Lu, Jiangshan Lu, Imanol Luengo, Christina B. Lund, Huan Minh Luu, Yi Lv, Uzay Macar, Leon Maechler, Sina Mansour L., Kenji Marshall, Moona Mazher, Richard McKinley, Alfonso Medela, Felix Meissen, Mingyuan Meng, Dylan Miller, Seyed Hossein Mirjahanmardi, Arnab Mishra, Samir Mitha, Hassan Mohy-ud-Din, Tony Chi Wing Mok, Gowtham Krishnan Murugesan, Enamundram Naga Karthik, Sahil Nalawade, Jakub Nalepa, Mohamed Naser, Ramin Nateghi, Hammad Naveed, Quang-Minh Nguyen, Cuong Nguyen Quoc, Brennan Nichyporuk, Bruno Oliveira, David Owen, Jimut Bahan Pal, Junwen Pan, Wentao Pan, Winnie Pang, Bogyu Park, Vivek Pawar, Kamlesh Pawar, Michael Peven, Lena Philipp, Tomasz Pieciak, Szymon Plotka, Marcel Plutat, Fattaneh Pourakpour, Domen Preložnik, Kumaradevan Punithakumar, Abdul Qayyum, Sandro Queirós, Arman Rahmim, Salar Razavi, Jintao Ren, Mina Rezaei, Jonathan Adam Rico, ZunHyan Rieu, Markus Rink, Johannes Roth, Yusely Ruiz-Gonzalez, Numan Saeed, Anindo Saha, Mostafa Salem, Ricardo Sanchez-Matilla, Kurt Schilling, Wei Shao, Zhiqiang Shen, Ruize Shi, Pengcheng Shi, Daniel Sobotka, Théodore Soulier, Bella Specktor Fadida, Danail Stoyanov, Timothy Sum Hon Mun, Xiaowu Sun, Rong Tao, Franz Thaler, Antoine Théberge, Felix Thielke, Helena Torres, Kareem A. Wahid, Jiacheng Wang, Yifei Wang, Wei Wang, Xiong Wang, Jianhui Wen, Ning Wen, Marek Wodzinski, Ye Wu, Fangfang Xia, Tianqi Xiang, Chen Xiaofei, Lizhan Xu, Tingting Xue, Yuxuan Yang, Lin Yang, Kai Yao, Huifeng Yao, Amirsaeed Yazdani, Michael Yip, Hwanseung Yoo, Fereshteh Yousefirizi, Shunkai Yu, Lei Yu, Jonathan Zamora, Ramy Ashraf Zeineldin, Dewen Zeng, Jianpeng Zhang, Bokai Zhang, Jiapeng Zhang, Fan Zhang, Huahong Zhang, Zhongchen Zhao, Zixuan Zhao, Jiachen Zhao, Can Zhao, Qingshuo Zheng, Yuheng Zhi, Ziqi Zhou, Baosheng Zou, Klaus Maier-Hein, Paul F. Jäger, Annette Kopp-Schneider, Lena Maier-Hein

Of these, 84% were based on standard architectures.

Benchmarking

Beyond the Best: Estimating Distribution Functionals in Infinite-Armed Bandits

no code implementations • 1 Nov 2022 • Yifei Wang, Tavor Baharav, Yanjun Han, Jiantao Jiao, David Tse

In the infinite-armed bandit problem, each arm's average reward is sampled from an unknown distribution, and each arm can be sampled further to obtain noisy estimates of the average reward of that arm.
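A minimal simulation of this setting (the reservoir distribution below is an arbitrary stand-in, chosen only for illustration) clarifies the two operations an algorithm budgets between, drawing fresh arms and re-pulling known ones:

```python
import numpy as np

rng = np.random.default_rng(0)

def draw_new_arm():
    # A fresh arm's mean reward comes from an unknown reservoir
    # distribution; Beta(1, 3) is an arbitrary stand-in here.
    return rng.beta(1.0, 3.0)

def pull(arm_mean, noise_std=0.1):
    # Pulling an arm yields a noisy observation of its mean reward.
    return arm_mean + rng.normal(scale=noise_std)

# Estimating a distribution functional (here, the average mean reward
# across arms) trades off drawing many arms vs. pulling each one often.
arms = [draw_new_arm() for _ in range(100)]
estimate = np.mean([np.mean([pull(a) for _ in range(10)]) for a in arms])
```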

How Mask Matters: Towards Theoretical Understandings of Masked Autoencoders

2 code implementations • 15 Oct 2022 • Qi Zhang, Yifei Wang, Yisen Wang

Masked Autoencoders (MAE) based on a reconstruction task have risen to be a promising paradigm for self-supervised learning (SSL) and achieve state-of-the-art performance across different benchmark datasets.

Contrastive Learning • Self-Supervised Learning
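The reconstruction task rests on one core operation: randomly masking most patch tokens and encoding only the visible remainder. A minimal sketch of that masking step follows (illustrative; 75% is MAE's commonly used ratio, and the surrounding encoder/decoder are omitted):

```python
import torch

def random_masking(patches, mask_ratio=0.75):
    # patches: (batch, num_patches, dim) token sequence.
    # Keep a random subset of patches per image; the decoder is later
    # asked to reconstruct the masked ones from the visible tokens.
    b, n, d = patches.shape
    n_keep = int(n * (1 - mask_ratio))
    shuffle = torch.rand(b, n).argsort(dim=1)  # random permutation per image
    keep_ids = shuffle[:, :n_keep]
    kept = torch.gather(patches, 1, keep_ids.unsqueeze(-1).expand(-1, -1, d))
    return kept, keep_ids
```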

When Adversarial Training Meets Vision Transformers: Recipes from Training to Architecture

1 code implementation • 14 Oct 2022 • Yichuan Mo, Dongxian Wu, Yifei Wang, Yiwen Guo, Yisen Wang

We find that, when randomly masking gradients from some attention blocks or masking perturbations on some patches during adversarial training, the adversarial robustness of ViTs can be remarkably improved, which may open up a line of work exploring the architectural information inside newly designed models like ViTs.

Adversarial Robustness

Overparameterized ReLU Neural Networks Learn the Simplest Models: Neural Isometry and Exact Recovery

1 code implementation • 30 Sep 2022 • Yifei Wang, Yixuan Hua, Emmanuel Candès, Mert Pilanci

For randomly generated data, we show the existence of a phase transition in recovering planted neural network models, which is easy to describe: whenever the ratio between the number of samples and the dimension exceeds a numerical threshold, the recovery succeeds with high probability; otherwise, it fails with high probability.

Motif-based Graph Representation Learning with Application to Chemical Molecules

1 code implementation • 9 Aug 2022 • Yifei Wang, Shiyang Chen, Guobin Chen, Ethan Shurberg, Hang Liu, Pengyu Hong

MCM builds a motif vocabulary in an unsupervised way and deploys a novel motif convolution operation to extract the local structural context of individual nodes, which is then used to learn higher-level node representations via multilayer perceptron and/or message passing in graph neural networks.

Graph Learning • Graph Representation Learning

Optimization-Induced Graph Implicit Nonlinear Diffusion

1 code implementation • 29 Jun 2022 • Qi Chen, Yifei Wang, Yisen Wang, Jiansheng Yang, Zhouchen Lin

Moreover, we show that the optimization-induced variants of our models can boost the performance and improve training stability and efficiency as well.

A DTCWT-SVD Based Video Watermarking Resistant to Frame Rate Conversion

no code implementations • 2 Jun 2022 • Yifei Wang, Qichao Ying, Zhenxing Qian, Sheng Li, Xinpeng Zhang

To address this issue, we present a new video watermarking scheme based on the joint Dual-Tree Complex Wavelet Transform (DTCWT) and Singular Value Decomposition (SVD), which is resistant to frame rate conversion.

Optimal Neural Network Approximation of Wasserstein Gradient Direction via Convex Optimization

1 code implementation • 26 May 2022 • Yifei Wang, Peng Chen, Mert Pilanci, Wuchen Li

We study the variational problem in the family of two-layer networks with squared-ReLU activations, towards which we derive a semi-definite programming (SDP) relaxation.

Bayesian Inference

A stochastic Stein Variational Newton method

1 code implementation • 19 Apr 2022 • Alex Leviyev, Joshua Chen, Yifei Wang, Omar Ghattas, Aaron Zimmerman

Meanwhile, Stein variational Newton (SVN), a Newton-like extension of SVGD, dramatically accelerates the convergence of SVGD by incorporating Hessian information into the dynamics, but also produces biased samples.

Bayesian Inference
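For context, plain SVGD transports particles using only first-order score information; SVN modifies this dynamics with Hessian information. Below is a minimal numpy sketch of one SVGD step with an RBF kernel and fixed bandwidth (a simplification; practical implementations adapt the bandwidth):

```python
import numpy as np

def svgd_step(x, grad_log_p, h=1.0, step_size=0.1):
    # x: (n, d) particles; grad_log_p(x): score of the target, (n, d).
    n = x.shape[0]
    diff = x[:, None, :] - x[None, :, :]        # x_i - x_j, shape (n, n, d)
    k = np.exp(-np.sum(diff**2, axis=-1) / h)   # RBF kernel matrix
    # Kernel-weighted scores pull particles toward high density; the
    # kernel-gradient term pushes them apart so they cover the target.
    phi = (k @ grad_log_p(x)
           + (2.0 / h) * np.einsum('ij,ijd->id', k, diff)) / n
    return x + step_size * phi

# Example: sampling a standard Gaussian, whose score is -x.
rng = np.random.default_rng(0)
particles = 3.0 * rng.normal(size=(50, 2))
for _ in range(200):
    particles = svgd_step(particles, lambda x: -x)
```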

Knowledgebra: An Algebraic Learning Framework for Knowledge Graph

no code implementations • 15 Apr 2022 • Tong Yang, Yifei Wang, Long Sha, Jan Engelbrecht, Pengyu Hong

To the best of our knowledge, by applying abstract algebra to statistical learning, this work develops the first formal language for general knowledge graphs; it also sheds light on the problem of neural-symbolic integration from an algebraic perspective.

Abstract Algebra • General Knowledge • +3

A Unified Contrastive Energy-based Model for Understanding the Generative Ability of Adversarial Training

no code implementations • ICLR 2022 • Yifei Wang, Yisen Wang, Jiansheng Yang, Zhouchen Lin

On the other hand, our unified framework can be extended to the unsupervised scenario, which interprets unsupervised contrastive learning as importance sampling of CEM.

Contrastive Learning

Chaos is a Ladder: A New Theoretical Understanding of Contrastive Learning via Augmentation Overlap

1 code implementation • 25 Mar 2022 • Yifei Wang, Qi Zhang, Yisen Wang, Jiansheng Yang, Zhouchen Lin

Our theory suggests an alternative understanding of contrastive learning: the role of aligning positive samples is more like a surrogate task than an ultimate goal, and the overlapped augmented views (i.e., the chaos) create a ladder for contrastive learning to gradually learn class-separated representations.

Contrastive Learning • Model Selection • +1

Fooling Adversarial Training with Inducing Noise

no code implementations • 19 Nov 2021 • Zhirui Wang, Yifei Wang, Yisen Wang

Adversarial training is widely believed to be a reliable approach to improve model robustness against adversarial attack.

Adversarial Attack

Residual Relaxation for Multi-view Representation Learning

no code implementations • NeurIPS 2021 • Yifei Wang, Zhengyang Geng, Feng Jiang, Chuming Li, Yisen Wang, Jiansheng Yang, Zhouchen Lin

Multi-view methods learn representations by aligning multiple views of the same image and their performance largely depends on the choice of data augmentation.

Data Augmentation • Representation Learning

Parallel Deep Neural Networks Have Zero Duality Gap

no code implementations • 13 Oct 2021 • Yifei Wang, Tolga Ergen, Mert Pilanci

Recent work has proven that strong duality (i.e., a zero duality gap) holds for regularized finite-width two-layer ReLU networks and consequently provided an equivalent convex training problem.

The Convex Geometry of Backpropagation: Neural Network Gradient Flows Converge to Extreme Points of the Dual Convex Program

no code implementations • ICLR 2022 • Yifei Wang, Mert Pilanci

We then show that the limit points of non-convex subgradient flows can be identified via primal-dual correspondence in this convex optimization problem.

Fooling Adversarial Training with Induction Noise

no code implementations • 29 Sep 2021 • Zhirui Wang, Yifei Wang, Yisen Wang

Adversarial training is widely believed to be a reliable approach to improve model robustness against adversarial attack.

Adversarial Attack

Chaos is a Ladder: A New Understanding of Contrastive Learning

no code implementations • ICLR 2022 • Yifei Wang, Qi Zhang, Yisen Wang, Jiansheng Yang, Zhouchen Lin

Our work suggests an alternative understanding of contrastive learning: the role of aligning positive samples is more like a surrogate task than an ultimate goal, and it is the overlapping augmented views (i.e., the chaos) that create a ladder for contrastive learning to gradually learn class-separated representations.

Contrastive Learning • Self-Supervised Learning

The Hidden Convex Optimization Landscape of Regularized Two-Layer ReLU Networks: an Exact Characterization of Optimal Solutions

no code implementations • ICLR 2022 • Yifei Wang, Jonathan Lacotte, Mert Pilanci

As additional consequences of our convex perspective: (i) we establish that Clarke stationary points found by stochastic gradient descent correspond to the global optimum of a subsampled convex problem; (ii) we provide a polynomial-time algorithm for checking if a neural network is a global minimum of the training loss; (iii) we provide an explicit construction of a continuous path between any neural network and the global minimum of its sublevel set; and (iv) we characterize the minimal size of the hidden layer so that the neural network optimization landscape has no spurious valleys.

Domain-wise Adversarial Training for Out-of-Distribution Generalization

no code implementations • 29 Sep 2021 • Shiji Xin, Yifei Wang, Jingtong Su, Yisen Wang

Extensive experiments show that our proposed DAT can effectively remove the domain-varying features and improve OOD generalization on both correlation shift and diversity shift tasks.

Out-of-Distribution Generalization

Reparameterized Sampling for Generative Adversarial Networks

1 code implementation • 1 Jul 2021 • Yifei Wang, Yisen Wang, Jiansheng Yang, Zhouchen Lin

Recently, sampling methods have been successfully applied to enhance the sample quality of Generative Adversarial Networks (GANs).

Demystifying Adversarial Training via A Unified Probabilistic Framework

no code implementations • ICML Workshop AML 2021 • Yifei Wang, Yisen Wang, Jiansheng Yang, Zhouchen Lin

Based on these, we propose principled adversarial sampling algorithms in both supervised and unsupervised scenarios.

Adaptive Newton Sketch: Linear-time Optimization with Quadratic Convergence and Effective Hessian Dimensionality

no code implementations • 15 May 2021 • Jonathan Lacotte, Yifei Wang, Mert Pilanci

Our first contribution is to show that, at each iteration, the embedding dimension (or sketch size) can be as small as the effective dimension of the Hessian matrix.
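The effective dimension referenced in the title is, in the usual sketching sense (our rendering; the paper's normalization may differ), the regularized trace of the Hessian:

```latex
d_{\mathrm{eff}}(\lambda) \;=\; \operatorname{tr}\!\left( H \, (H + \lambda I)^{-1} \right)
```

which can be far smaller than the ambient dimension when the Hessian spectrum decays quickly, so the sketch size can shrink accordingly.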

Projected Wasserstein gradient descent for high-dimensional Bayesian inference

1 code implementation • 12 Feb 2021 • Yifei Wang, Peng Chen, Wuchen Li

We propose a projected Wasserstein gradient descent method (pWGD) for high-dimensional Bayesian inference problems.

Bayesian Inference • Density Estimation • +1

Efficient Sampling for Generative Adversarial Networks with Coupling Markov Chains

no code implementations • 1 Jan 2021 • Yifei Wang, Yisen Wang, Jiansheng Yang, Zhouchen Lin

Recently, sampling methods have been successfully applied to enhance the sample quality of Generative Adversarial Networks (GANs).

Train Once, and Decode As You Like

no code implementations • COLING 2020 • Chao Tian, Yifei Wang, Hao Cheng, Yijiang Lian, Zhihua Zhang

In this paper we propose a unified approach for supporting different generation manners of machine translation, including autoregressive, semi-autoregressive, and refinement-based non-autoregressive models.

Machine Translation • Translation

Decoder-free Robustness Disentanglement without (Additional) Supervision

no code implementations • 2 Jul 2020 • Yifei Wang, Dan Peng, Furui Liu, Zhenguo Li, Zhitang Chen, Jiansheng Yang

Adversarial Training (AT) is proposed to alleviate the adversarial vulnerability of machine learning models by extracting only robust features from the input; however, this inevitably leads to severe accuracy reduction, as it discards non-robust yet useful features.

BIG-bench Machine Learning • Disentanglement

The Hidden Convex Optimization Landscape of Two-Layer ReLU Neural Networks: an Exact Characterization of the Optimal Solutions

no code implementations • 10 Jun 2020 • Yifei Wang, Jonathan Lacotte, Mert Pilanci

As additional consequences of our convex perspective: (i) we establish that Clarke stationary points found by stochastic gradient descent correspond to the global optimum of a subsampled convex problem; (ii) we provide a polynomial-time algorithm for checking if a neural network is a global minimum of the training loss; (iii) we provide an explicit construction of a continuous path between any neural network and the global minimum of its sublevel set; and (iv) we characterize the minimal size of the hidden layer so that the neural network optimization landscape has no spurious valleys.

Information Newton's flow: second-order optimization method in probability space

no code implementations • 13 Jan 2020 • Yifei Wang, Wuchen Li

We introduce a framework for Newton's flows in probability space with information metrics, named information Newton's flows.
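For orientation, the Euclidean analogue being lifted to probability space is the classical Newton flow:

```latex
\dot{x}_t \;=\; -\left[\nabla^2 f(x_t)\right]^{-1} \nabla f(x_t)
```

the probability-space version replaces $f$ with an objective over distributions (e.g., a KL divergence) and the Hessian with one induced by an information metric.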

Regularized Non-negative Spectral Embedding for Clustering

no code implementations • 1 Nov 2019 • Yifei Wang, Rui Liu, Yong Chen, Hui Zhangs, Zhiwen Ye

Spectral clustering is a popular technique for splitting data points into groups, especially in complex datasets.

Clustering

Accelerated Information Gradient flow

1 code implementation • 4 Sep 2019 • Yifei Wang, Wuchen Li

We present a framework for Nesterov's accelerated gradient flows in probability space to design efficient mean-field Markov chain Monte Carlo (MCMC) algorithms for Bayesian inverse problems.

Bayesian Inference
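As background, in Euclidean space Nesterov acceleration corresponds to the second-order ODE of Su, Boyd, and Candès (2014):

```latex
\ddot{x}_t + \frac{3}{t}\,\dot{x}_t + \nabla f(x_t) = 0
```

the accelerated information gradient flows of this paper play the analogous role in probability space equipped with information metrics.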

Deep Domain Adaptation by Geodesic Distance Minimization

no code implementations • 13 Jul 2017 • Yifei Wang, Wen Li, Dengxin Dai, Luc van Gool

Our work builds on the recently proposed Deep CORAL method, which trains a convolutional neural network while minimizing the Euclidean distance between the covariance matrices of the source and target domains.

Domain Adaptation
