Search Results for author: Yifei Wang

Found 21 papers, 5 papers with code

Fooling Adversarial Training with Inducing Noise

no code implementations · 19 Nov 2021 · Zhirui Wang, Yifei Wang, Yisen Wang

Adversarial training is widely believed to be a reliable approach to improve model robustness against adversarial attack.

Adversarial Attack
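Adversarial training fits the model against worst-case input perturbations rather than clean inputs: an inner step maximizes the loss inside a small norm ball, and the outer step minimizes the loss on those perturbed examples. A minimal sketch of the one-step L-infinity inner maximization (FGSM) for a plain logistic model — the function names are illustrative, not from the paper:

```python
import numpy as np

def fgsm_perturb(x, y, w, eps):
    """One-step L-infinity attack against a logistic model p = sigmoid(x @ w).

    x: (n, d) inputs, y: (n,) labels in {0, 1}, w: (d,) weights, eps: budget.
    """
    p = 1.0 / (1.0 + np.exp(-(x @ w)))       # model predictions
    grad_x = (p - y)[:, None] * w[None, :]   # gradient of the log-loss w.r.t. the input
    return x + eps * np.sign(grad_x)         # ascend the loss within the eps-ball

def log_loss(x, y, w):
    """Mean logistic loss; used to check the attack actually raises the loss."""
    z = x @ w
    return np.mean(np.log1p(np.exp(z)) - y * z)
```

Adversarial training then minimizes `log_loss(fgsm_perturb(x, y, w, eps), y, w)` over `w` instead of the clean loss; for this convex model the sign-gradient step is guaranteed not to decrease the loss.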

Residual Relaxation for Multi-view Representation Learning

no code implementations · NeurIPS 2021 · Yifei Wang, Zhengyang Geng, Feng Jiang, Chuming Li, Yisen Wang, Jiansheng Yang, Zhouchen Lin

Multi-view methods learn representations by aligning multiple views of the same image and their performance largely depends on the choice of data augmentation.

Data Augmentation · Representation Learning

Parallel Deep Neural Networks Have Zero Duality Gap

no code implementations · 13 Oct 2021 · Yifei Wang, Tolga Ergen, Mert Pilanci

For multi-layer linear networks with vector outputs, we formulate convex dual problems and demonstrate that the duality gap is non-zero for depth three and deeper networks.

Global Optimization

The Convex Geometry of Backpropagation: Neural Network Gradient Flows Converge to Extreme Points of the Dual Convex Program

no code implementations · 13 Oct 2021 · Yifei Wang, Mert Pilanci

We then show that the limit points of non-convex subgradient flows can be identified via primal-dual correspondence in this convex optimization problem.

Predicting the Stereoselectivity of Chemical Transformations by Machine Learning

no code implementations · 12 Oct 2021 · Justin Li, Dakang Zhang, Yifei Wang, Christopher Ye, Hao Xu, Pengyu Hong

Since the late 1960s, there have been numerous successes in the exciting new frontier of asymmetric catalysis.

Reparameterized Sampling for Generative Adversarial Networks

1 code implementation · 1 Jul 2021 · Yifei Wang, Yisen Wang, Jiansheng Yang, Zhouchen Lin

Recently, sampling methods have been successfully applied to enhance the sample quality of Generative Adversarial Networks (GANs).

Demystifying Adversarial Training via A Unified Probabilistic Framework

no code implementations · ICML Workshop AML 2021 · Yifei Wang, Yisen Wang, Jiansheng Yang, Zhouchen Lin

Based on these, we propose principled adversarial sampling algorithms in both supervised and unsupervised scenarios.

Adaptive Newton Sketch: Linear-time Optimization with Quadratic Convergence and Effective Hessian Dimensionality

no code implementations · 15 May 2021 · Jonathan Lacotte, Yifei Wang, Mert Pilanci

Our first contribution is to show that, at each iteration, the embedding dimension (or sketch size) can be as small as the effective dimension of the Hessian matrix.
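The idea is easiest to see when the Hessian has a square-root factorization, as in least squares: sketching the tall square-root factor with a small random embedding gives an approximate Newton step at much lower cost. A hedged sketch — the plain Gaussian embedding and the names here are illustrative, not the paper's exact construction:

```python
import numpy as np

def sketched_newton_step(grad, H_sqrt, m, rng):
    """Approximate Newton step using a sketch of the Hessian square root.

    H = H_sqrt.T @ H_sqrt is the exact Hessian (H_sqrt is n x d, typically n >> d).
    An m x n Gaussian embedding with m << n reduces the cost of forming H.
    """
    S = rng.normal(size=(m, H_sqrt.shape[0])) / np.sqrt(m)
    SH = S @ H_sqrt                               # m x d sketched factor
    H_approx = SH.T @ SH                          # d x d sketched Hessian
    return np.linalg.solve(H_approx + 1e-8 * np.eye(H_approx.shape[0]), grad)
```

For f(x) = ||Ax - b||², the square-root factor is √2·A, and a single sketched step from x = 0 with a moderate sketch size already lands near the least-squares solution.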

Projected Wasserstein gradient descent for high-dimensional Bayesian inference

1 code implementation · 12 Feb 2021 · Yifei Wang, Peng Chen, Wuchen Li

We propose a projected Wasserstein gradient descent method (pWGD) for high-dimensional Bayesian inference problems.

Bayesian Inference · Density Estimation

Efficient Sampling for Generative Adversarial Networks with Coupling Markov Chains

no code implementations · 1 Jan 2021 · Yifei Wang, Yisen Wang, Jiansheng Yang, Zhouchen Lin

Recently, sampling methods have been successfully applied to enhance the sample quality of Generative Adversarial Networks (GANs).

Train Once, and Decode As You Like

no code implementations · COLING 2020 · Chao Tian, Yifei Wang, Hao Cheng, Yijiang Lian, Zhihua Zhang

In this paper we propose a unified approach for supporting different generation manners of machine translation, including autoregressive, semi-autoregressive, and refinement-based non-autoregressive models.

Machine Translation · Translation

Optimizing AD Pruning of Sponsored Search with Reinforcement Learning

no code implementations · 5 Aug 2020 · Yijiang Lian, Zhijie Chen, Xin Pei, Shuang Li, Yifei Wang, Yuefeng Qiu, Zhiheng Zhang, Zhipeng Tao, Liang Yuan, Hanju Guan, Kefeng Zhang, Zhigang Li, Xiaochun Liu

An industrial sponsored search system (SSS) can be logically divided into three modules: keyword matching, ad retrieval, and ranking.

Decoder-free Robustness Disentanglement without (Additional) Supervision

no code implementations · 2 Jul 2020 · Yifei Wang, Dan Peng, Furui Liu, Zhenguo Li, Zhitang Chen, Jiansheng Yang

Adversarial Training (AT) alleviates the adversarial vulnerability of machine learning models by extracting only robust features from the input; however, this inevitably causes a severe drop in accuracy, since it discards features that are non-robust yet useful.

Representation Learning

The Hidden Convex Optimization Landscape of Two-Layer ReLU Neural Networks: an Exact Characterization of the Optimal Solutions

no code implementations · 10 Jun 2020 · Yifei Wang, Jonathan Lacotte, Mert Pilanci

As additional consequences of our convex perspective: (i) we establish that Clarke stationary points found by stochastic gradient descent correspond to the global optimum of a subsampled convex problem; (ii) we provide a polynomial-time algorithm for checking whether a neural network is a global minimum of the training loss; (iii) we provide an explicit construction of a continuous path between any neural network and the global minimum of its sublevel set; and (iv) we characterize the minimal size of the hidden layer so that the neural network optimization landscape has no spurious valleys.

Information Newton's flow: second-order optimization method in probability space

no code implementations · 13 Jan 2020 · Yifei Wang, Wuchen Li

We introduce a framework for Newton's flows in probability space with information metrics, named information Newton's flows.

Regularized Non-negative Spectral Embedding for Clustering

no code implementations · 1 Nov 2019 · Yifei Wang, Rui Liu, Yong Chen, Hui Zhangs, Zhiwen Ye

Spectral Clustering is a popular technique to split data points into groups, especially for complex datasets.
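The standard spectral pipeline builds an affinity graph, forms its normalized Laplacian, and embeds points into the Laplacian's bottom eigenvectors before clustering. A minimal embedding sketch, assuming a Gaussian affinity (names are illustrative; the final clustering of embedded rows, e.g. with k-means, is omitted):

```python
import numpy as np

def spectral_embedding(X, k, sigma=1.0):
    """Embed points into the k bottom eigenvectors of the normalized graph Laplacian."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1)
    W = np.exp(-d2 / (2 * sigma ** 2))                       # Gaussian affinity
    np.fill_diagonal(W, 0.0)                                 # no self-loops
    deg = W.sum(axis=1)
    dinv = 1.0 / np.sqrt(np.maximum(deg, 1e-12))
    L = np.eye(len(X)) - dinv[:, None] * W * dinv[None, :]   # L = I - D^{-1/2} W D^{-1/2}
    _, vecs = np.linalg.eigh(L)                              # eigenvalues in ascending order
    return vecs[:, :k]                                       # one embedded row per point
```

For well-separated groups the affinity matrix is nearly block diagonal, so embedded rows within a group nearly coincide while rows from different groups stay far apart — which is what makes the downstream k-means step easy.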

Accelerated Information Gradient flow

1 code implementation · 4 Sep 2019 · Yifei Wang, Wuchen Li

We present a framework for Nesterov's accelerated gradient flows in probability space.

Bayesian Inference

Deep Domain Adaptation by Geodesic Distance Minimization

no code implementations · 13 Jul 2017 · Yifei Wang, Wen Li, Dengxin Dai, Luc van Gool

Our work builds on the recently proposed Deep CORAL method, which trains a convolutional neural network while simultaneously minimizing the Euclidean distance between the covariance matrices of the source and target domains.

Domain Adaptation
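The Deep CORAL baseline that this work builds on (and replaces with a geodesic distance) is simple to state: the squared Frobenius distance between the second-order feature statistics of the two domains, added as a penalty to the task loss. A minimal sketch, using the common 1/(4d²) CORAL normalization (names are illustrative):

```python
import numpy as np

def coral_loss(source, target):
    """Squared Frobenius distance between domain feature covariances.

    source, target: (n, d) feature matrices (one row per example).
    """
    cs = np.cov(source, rowvar=False)         # d x d source covariance
    ct = np.cov(target, rowvar=False)         # d x d target covariance
    d = source.shape[1]
    return np.sum((cs - ct) ** 2) / (4 * d * d)
```

The loss is zero when the two domains share second-order statistics, and grows as their feature covariances drift apart; the geodesic variant replaces this flat Euclidean distance with a distance on the manifold of covariance matrices.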
