Search Results for author: Yao-Liang Yu

Found 44 papers, 13 papers with code

Stronger and Faster Wasserstein Adversarial Attacks

1 code implementation • ICML 2020 • Kaiwen Wu, Allen Houze Wang, Yao-Liang Yu

While the majority of existing attacks focus on measuring perturbations under the $\ell_p$ metric, the Wasserstein distance, which takes the geometry of pixel space into account, has long been known to be a suitable metric for measuring image quality and has recently risen as a compelling alternative to the $\ell_p$ metric in adversarial attacks.
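
To see intuitively why a geometry-aware metric behaves differently from $\ell_p$, here is a minimal 1-D sketch (illustrative only, not the paper's attack; the signals and the `scipy` call are our own):

```python
# 1-D illustration: a one-pixel shift of a unit mass is maximal under l_inf
# but corresponds to moving mass only one position under Wasserstein.
import numpy as np
from scipy.stats import wasserstein_distance

x = np.zeros(100); x[40] = 1.0   # unit "pixel" of mass at position 40
y = np.zeros(100); y[41] = 1.0   # the same mass shifted by one pixel

print(np.abs(x - y).max())       # l_inf distance: 1.0, as large as any change
positions = np.arange(100)
print(wasserstein_distance(positions, positions, x, y))  # W1: 1.0 (one pixel moved)
```

A one-pixel shift is maximal under $\ell_\infty$ yet moves only one unit of mass, which is exactly the behaviour that makes Wasserstein balls a natural perturbation set for images.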

Newton-type Methods for Minimax Optimization

1 code implementation • 25 Jun 2020 • Guojun Zhang, Kaiwen Wu, Pascal Poupart, Yao-Liang Yu

We prove their local convergence at strict local minimax points, which are surrogates of global solutions.

Reinforcement Learning (RL) • Vocal Bursts Type Prediction
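
For orientation, our paraphrase of the strict local minimax notion from the local-minimax literature (an assumption about the intended definition, not a quote from this paper):

```latex
% Strict local minimax point of $\min_x \max_y f(x, y)$: a stationary point
% where y is a strict local max and, after maximizing out y, x is a strict
% local min of the resulting Schur complement (our paraphrase):
\nabla f(x^*, y^*) = 0, \qquad \nabla^2_{yy} f \prec 0, \qquad
\nabla^2_{xx} f - \nabla^2_{xy} f \big(\nabla^2_{yy} f\big)^{-1} \nabla^2_{yx} f \succ 0
\quad \text{at } (x^*, y^*).
```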

Federated Learning Meets Multi-objective Optimization

2 code implementations • 20 Jun 2020 • Zeou Hu, Kiarash Shaloudegi, Guojun Zhang, Yao-Liang Yu

Federated learning has emerged as a promising, massively distributed way to train a joint deep model across large numbers of edge devices while keeping private user data strictly on device.

Fairness • Federated Learning

Density Deconvolution with Normalizing Flows

1 code implementation • 16 Jun 2020 • Tim Dockhorn, James A. Ritchie, Yao-Liang Yu, Iain Murray

Density deconvolution is the task of estimating a probability density function given only noise-corrupted samples.

Density Estimation • Variational Inference
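
Concretely, the task can be stated as follows (standard formulation, written out here for orientation; notation ours):

```latex
% We observe noise-corrupted samples, whose density is a convolution,
% and the goal is to recover the clean density p_x:
y_i = x_i + \varepsilon_i, \quad \varepsilon_i \sim p_\varepsilon \ \text{(known noise)},
\qquad p_y = p_x \ast p_\varepsilon,
\qquad \text{goal: estimate } p_x \text{ from } y_1, \dots, y_n .
```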

Network Comparison with Interpretable Contrastive Network Representation Learning

3 code implementations • 25 May 2020 • Takanori Fujiwara, Jian Zhao, Francine Chen, Yao-Liang Yu, Kwan-Liu Ma

This analysis task could be greatly assisted by contrastive learning, an emerging approach for discovering salient patterns in one dataset relative to another.

Contrastive Learning • Representation Learning
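
A minimal sketch of the contrastive idea via contrastive PCA, the classic linear instance (this is not the paper's interpretable network-representation method; the function and array names are our own):

```python
# Contrastive PCA sketch: find directions with high variance in the target
# dataset but low variance in the background dataset -- the linear version
# of "salient patterns in one dataset relative to another".
import numpy as np

def contrastive_pca(target, background, alpha=1.0, k=2):
    Ct = np.cov(target, rowvar=False)       # target covariance
    Cb = np.cov(background, rowvar=False)   # background covariance
    w, V = np.linalg.eigh(Ct - alpha * Cb)  # symmetric eigendecomposition
    return V[:, np.argsort(w)[::-1][:k]]    # top-k contrastive directions
```

Directions scoring high under $C_{\text{target}} - \alpha C_{\text{background}}$ surface variation specific to the target dataset.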

Showing Your Work Doesn't Always Work

1 code implementation • ACL 2020 • Raphael Tang, Jaejun Lee, Ji Xin, Xinyu Liu, Yao-Liang Yu, Jimmy Lin

In natural language processing, a recently popular line of work explores how to best report the experimental results of neural networks.

DeeBERT: Dynamic Early Exiting for Accelerating BERT Inference

3 code implementations • ACL 2020 • Ji Xin, Raphael Tang, Jaejun Lee, Yao-Liang Yu, Jimmy Lin

Large-scale pre-trained language models such as BERT have brought significant improvements to NLP applications.
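
The mechanism the title refers to, sketched at a conceptual level (the names `layers`, `off_ramps`, and the threshold value are illustrative placeholders, not the released DeeBERT API):

```python
# Conceptual sketch of entropy-based early exiting: run transformer layers
# one at a time and stop as soon as an intermediate classifier ("off-ramp")
# is confident, i.e. produces a low-entropy distribution.
import torch

def early_exit_forward(h, layers, off_ramps, entropy_threshold=0.3):
    # h: (batch=1, seq_len, hidden) activations; layers/off_ramps: nn.Modules
    for layer, ramp in zip(layers, off_ramps):
        h = layer(h)                                   # one transformer layer
        probs = torch.softmax(ramp(h.mean(dim=1)), dim=-1)
        entropy = -(probs * probs.clamp_min(1e-12).log()).sum()
        if entropy.item() < entropy_threshold:         # confident enough: exit
            return probs
    return probs                                       # reached the final layer
```

Easy inputs exit after a few layers while hard inputs use the full stack, which is where the inference speedup comes from.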

A Positivstellensatz for Conditional SAGE Signomials

no code implementations • 8 Mar 2020 • Allen Houze Wang, Priyank Jaini, Yao-Liang Yu, Pascal Poupart

Recently, the conditional SAGE certificate has been proposed as a sufficient condition for signomial positivity over a convex set.
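
For readers new to these objects, the standard definitions involved (not specific to this paper): a signomial is a weighted sum of exponentials of linear forms, and a conditional certificate targets nonnegativity over a convex set $X$:

```latex
f(x) \;=\; \sum_{i=1}^{m} c_i \, \exp\!\big(a_i^\top x\big),
\quad c_i \in \mathbb{R},\; a_i \in \mathbb{R}^n,
\qquad \text{certify } f(x) \ge 0 \ \ \forall x \in X \ (X \text{ convex}).
```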

Optimality and Stability in Non-Convex Smooth Games

no code implementations • 27 Feb 2020 • Guojun Zhang, Pascal Poupart, Yao-Liang Yu

Convergence to a saddle point for convex-concave functions has been studied for decades, while recent years have seen a surge of interest in non-convex (zero-sum) smooth games, motivated by their wide-ranging recent applications.

Unsupervised Multilingual Alignment using Wasserstein Barycenter

1 code implementation • 28 Jan 2020 • Xin Lian, Kshitij Jain, Jakub Truszkowski, Pascal Poupart, Yao-Liang Yu

We study unsupervised multilingual alignment, the problem of finding word-to-word translations between multiple languages without using any parallel data.

Translation • Unsupervised Machine Translation +1

Multivariate Triangular Quantile Maps for Novelty Detection

1 code implementation • NeurIPS 2019 • Jingjing Wang, Sun Sun, Yao-Liang Yu

Novelty detection, a fundamental task in machine learning, has drawn a lot of recent attention due to its wide-ranging applications and the rise of neural approaches.

Density Estimation • Novelty Detection

Exploiting Token and Path-based Representations of Code for Identifying Security-Relevant Commits

no code implementations • 15 Nov 2019 • Achyudh Ram, Ji Xin, Meiyappan Nagappan, Yao-Liang Yu, Rocío Cabrera Lozoya, Antonino Sabetta, Jimmy Lin

Public vulnerability databases such as CVE and NVD account for only 60% of security vulnerabilities present in open-source projects, and are known to suffer from inconsistent quality.

What Part of the Neural Network Does This? Understanding LSTMs by Measuring and Dissecting Neurons

no code implementations • IJCNLP 2019 • Ji Xin, Jimmy Lin, Yao-Liang Yu

Memory neurons of long short-term memory (LSTM) networks encode and process information in powerful yet mysterious ways.

Convergence of Gradient Methods on Bilinear Zero-Sum Games

1 code implementation • ICLR 2020 • Guojun Zhang, Yao-Liang Yu

Min-max formulations have attracted great attention in the ML community due to the rise of deep generative models and adversarial methods, while understanding the dynamics of gradient algorithms for solving such formulations has remained a grand challenge.
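
A two-line experiment conveys why these dynamics are subtle (a minimal sketch on the scalar bilinear game $f(x, y) = xy$, not the paper's analysis; variable names ours):

```python
# On the bilinear game f(x, y) = x * y, simultaneous gradient descent-ascent
# provably diverges: each step multiplies x**2 + y**2 by exactly 1 + eta**2.
x, y, eta = 1.0, 1.0, 0.1
for t in range(100):
    x, y = x - eta * y, y + eta * x   # simultaneous GDA update
print(x**2 + y**2)                    # grows like (1 + eta**2)**t -- divergence
```

Since the squared norm is multiplied by $1 + \eta^2$ at every step, plain simultaneous gradient descent-ascent spirals outward for every step size, which is why modified dynamics are needed on such games.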

Understanding Adversarial Robustness: The Trade-off between Minimum and Average Margin

no code implementations • 26 Jul 2019 • Kaiwen Wu, Yao-Liang Yu

Deep models, while being extremely versatile and accurate, are vulnerable to adversarial attacks: slight perturbations that are imperceptible to humans can completely flip the prediction of deep models.

Adversarial Robustness

Tails of Lipschitz Triangular Flows

no code implementations • ICML 2020 • Priyank Jaini, Ivan Kobyzev, Yao-Liang Yu, Marcus Brubaker

We investigate the ability of popular flow-based methods to capture the tail properties of a target density by studying how the increasing triangular maps used in these methods act on a tractable source density.

Distributional Reinforcement Learning for Efficient Exploration

no code implementations • 13 May 2019 • Borislav Mavrin, Shangtong Zhang, Hengshuai Yao, Linglong Kong, Kaiwen Wu, Yao-Liang Yu

In distributional reinforcement learning (RL), the estimated distribution of the value function models both the parametric and intrinsic uncertainties.

Atari Games • Distributional Reinforcement Learning +4

Sum-of-Squares Polynomial Flow

2 code implementations • 7 May 2019 • Priyank Jaini, Kira A. Selby, Yao-Liang Yu

The triangular map is a recent construct in probability theory that allows one to transform any source probability density function into any target density function.

Density Estimation
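
To fix ideas, here is the standard increasing-triangular-map construction (the paper's sum-of-squares parametrization of the components is not reproduced): component $j$ depends only on the first $j$ coordinates, so the Jacobian is triangular and the change of variables reduces to a product of diagonal terms.

```latex
T(x) = \big( T_1(x_1),\ T_2(x_1, x_2),\ \dots,\ T_d(x_1, \dots, x_d) \big),
\qquad \frac{\partial T_j}{\partial x_j} > 0,
\qquad
p_{\mathrm{target}}\big(T(x)\big)\, \prod_{j=1}^{d} \frac{\partial T_j}{\partial x_j}(x)
\;=\; p_{\mathrm{source}}(x).
```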

Deep Homogeneous Mixture Models: Representation, Separation, and Approximation

no code implementations • NeurIPS 2018 • Priyank Jaini, Pascal Poupart, Yao-Liang Yu

At their core, many unsupervised learning models provide a compact representation of homogeneous density mixtures, but their similarities and differences are not always clearly understood.

Density Estimation

Robust Multiple Kernel k-means Clustering using Min-Max Optimization

1 code implementation • 6 Mar 2018 • Seojin Bang, Yao-Liang Yu, Wei Wu

To address this problem and inspired by recent works in adversarial learning, we propose a multiple kernel clustering method with the min-max framework that aims to be robust to such adversarial perturbation.

Clustering • Disease Prediction +1

Bregman Divergence for Stochastic Variance Reduction: Saddle-Point and Adversarial Prediction

no code implementations • NeurIPS 2017 • Zhan Shi, Xinhua Zhang, Yao-Liang Yu

Adversarial machines, where a learner competes against an adversary, have regained much recent interest in machine learning.
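
For reference, the divergence named in the title (standard definition; how the paper couples it with variance reduction is not reproduced here):

```latex
D_\varphi(x, y) \;=\; \varphi(x) - \varphi(y) - \langle \nabla\varphi(y),\, x - y \rangle \;\ge\; 0
\quad \text{for convex, differentiable } \varphi;
\qquad \varphi = \tfrac{1}{2}\lVert \cdot \rVert^2 \ \Rightarrow\ D_\varphi(x, y) = \tfrac{1}{2}\lVert x - y \rVert^2 .
```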

Provably noise-robust, regularised $k$-means clustering

no code implementations • 30 Nov 2017 • Shrinu Kushagra, Yao-Liang Yu, Shai Ben-David

We focus on the $k$-means objective and prove that the regularised version of $k$-means is NP-hard even for $k=1$.

Clustering
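
For context, the unregularised objective in question is (the paper's specific regulariser is not reproduced here):

```latex
\min_{c_1, \dots, c_k} \; \sum_{i=1}^{n} \min_{1 \le j \le k} \lVert x_i - c_j \rVert^2 .
```

For $k = 1$ this plain objective is solved exactly by the sample mean, so the hardness claim is genuinely a statement about the regularised variant.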

Learning Latent Space Models with Angular Constraints

no code implementations • ICML 2017 • Pengtao Xie, Yuntian Deng, Yi Zhou, Abhimanu Kumar, Yao-Liang Yu, James Zou, Eric P. Xing

The large model capacity of latent space models (LSMs) enables them to achieve great performance on various applications, but it also renders them prone to overfitting.

Diversity

Convex-constrained Sparse Additive Modeling and Its Extensions

no code implementations • 1 May 2017 • Junming Yin, Yao-Liang Yu

Sparse additive modeling is a class of effective methods for performing high-dimensional nonparametric regression.

Additive models • regression
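
Concretely, a sparse additive model fits a sum of univariate component functions, most of which vanish (standard formulation, stated for orientation):

```latex
f(x) \;=\; \sum_{j=1}^{p} f_j(x_j),
\qquad \text{with } f_j \equiv 0 \text{ for most } j,
```

which sidesteps the curse of dimensionality that a fully nonparametric fit of $f$ would face.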

Dropout with Expectation-linear Regularization

no code implementations • 26 Sep 2016 • Xuezhe Ma, Yingkai Gao, Zhiting Hu, Yao-Liang Yu, Yuntian Deng, Eduard Hovy

Algorithmically, we show that our proposed measure of the inference gap can be used to regularize the standard dropout training objective, resulting in an \emph{explicit} control of the gap.

Image Classification

Closed-Form Training of Mahalanobis Distance for Supervised Clustering

no code implementations • CVPR 2016 • Marc T. Law, Yao-Liang Yu, Matthieu Cord, Eric P. Xing

Clustering is the task of grouping a set of objects so that objects in the same cluster are more similar to each other than to those in other clusters.

Clustering • Metric Learning +1

Additive Approximations in High Dimensional Nonparametric Regression via the SALSA

2 code implementations • 31 Jan 2016 • Kirthevasan Kandasamy, Yao-Liang Yu

Between non-additive models, which often have large variance, and first-order additive models, which have large bias, there has been little work exploiting the trade-off in the middle via additive models of intermediate order.

Additive models • regression +1
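
The trade-off can be written down directly (standard notation, not quoted from the paper): order-$d$ additive models allow component functions over coordinate subsets of size at most $d$, interpolating between the two extremes.

```latex
f(x) \;=\; \sum_{S \subseteq \{1, \dots, p\},\ |S| \le d} f_S(x_S),
\qquad d = 1 \ \text{(first-order additive)}
\ \longleftrightarrow\
d = p \ \text{(fully non-additive)}.
```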

Distributed Machine Learning via Sufficient Factor Broadcasting

no code implementations • 26 Nov 2015 • Pengtao Xie, Jin Kyu Kim, Yi Zhou, Qirong Ho, Abhimanu Kumar, Yao-Liang Yu, Eric Xing

Matrix-parametrized models, including multiclass logistic regression and sparse coding, are used in machine learning (ML) applications ranging from computer vision to computational biology.

BIG-bench Machine Learning
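
The observation that makes broadcasting cheap, as we understand the sufficient-factor idea (illustrated for multiclass logistic regression; notation ours): each per-example stochastic gradient of the weight matrix is a rank-one outer product, so workers can exchange two short vectors instead of a dense matrix.

```latex
\nabla_W \ell(W; x_i, y_i) \;=\; u_i v_i^\top,
\qquad u_i = \operatorname{softmax}(W x_i) - y_i \in \mathbb{R}^{J},
\quad v_i = x_i \in \mathbb{R}^{D},
```

so communicating the pair $(u_i, v_i)$ costs $O(J + D)$ rather than the $O(JD)$ of the full gradient matrix.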

Generalized Conditional Gradient for Sparse Estimation

no code implementations • 17 Oct 2014 • Yao-Liang Yu, Xinhua Zhang, Dale Schuurmans

Structured sparsity is an important modeling tool that expands the applicability of convex formulations for data analysis; however, it also creates significant challenges for efficient algorithm design.

Dictionary Learning Matrix Completion +1
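
A sketch of the update the title refers to, as commonly stated for generalized conditional gradient (our paraphrase, not the paper's exact algorithm): for $\min_x f(x) + g(x)$ with smooth $f$, the projection step of proximal methods is replaced by a cheaper linear-oracle-style subproblem.

```latex
s_t \;\in\; \operatorname*{argmin}_{s} \ \langle \nabla f(x_t),\, s \rangle + g(s),
\qquad x_{t+1} \;=\; (1 - \gamma_t)\, x_t + \gamma_t\, s_t .
```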

Distributed Machine Learning via Sufficient Factor Broadcasting

no code implementations • 19 Sep 2014 • Pengtao Xie, Jin Kyu Kim, Yi Zhou, Qirong Ho, Abhimanu Kumar, Yao-Liang Yu, Eric Xing

Matrix-parametrized models, including multiclass logistic regression and sparse coding, are used in machine learning (ML) applications ranging from computer vision to computational biology.

BIG-bench Machine Learning

Petuum: A New Platform for Distributed Machine Learning on Big Data

no code implementations • 30 Dec 2013 • Eric P. Xing, Qirong Ho, Wei Dai, Jin Kyu Kim, Jinliang Wei, Seunghak Lee, Xun Zheng, Pengtao Xie, Abhimanu Kumar, Yao-Liang Yu

What is a systematic way to efficiently apply a wide spectrum of advanced ML programs to industrial-scale problems, using Big Models (up to hundreds of billions of parameters) on Big Data (up to terabytes or petabytes)?

BIG-bench Machine Learning • Scheduling

Polar Operators for Structured Sparse Estimation

no code implementations • NeurIPS 2013 • Xinhua Zhang, Yao-Liang Yu, Dale Schuurmans

Structured sparse estimation has become an important technique in many areas of data analysis.

Better Approximation and Faster Algorithm Using the Proximal Average

no code implementations • NeurIPS 2013 • Yao-Liang Yu

It is a common practice to approximate "complicated" functions with more friendly ones.

On Decomposing the Proximal Map

no code implementations • NeurIPS 2013 • Yao-Liang Yu

The proximal map is the key step in gradient-type algorithms, which have become prevalent in large-scale high-dimensional problems.
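
For reference, the map itself (standard definition) together with the decomposition question the title poses; when the identity on the right holds is the paper's subject:

```latex
\operatorname{prox}_f(y) \;=\; \operatorname*{argmin}_{x} \; f(x) + \tfrac{1}{2}\lVert x - y \rVert^2,
\qquad \text{when does } \operatorname{prox}_{f+g} = \operatorname{prox}_f \circ \operatorname{prox}_g \ \text{hold?}
```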

A Polynomial-time Form of Robust Regression

no code implementations • NeurIPS 2012 • Yao-Liang Yu, Özlem Aslan, Dale Schuurmans

Despite the variety of robust regression methods that have been developed, current regression formulations are either NP-hard, or allow unbounded response to even a single leverage point.

regression

Convex Multi-view Subspace Learning

no code implementations • NeurIPS 2012 • Martha White, Xinhua Zhang, Dale Schuurmans, Yao-Liang Yu

Subspace learning seeks a low dimensional representation of data that enables accurate reconstruction.
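
The classic single-view instance of this objective is PCA-style low-rank reconstruction (the paper's convex multi-view formulation generalizes it):

```latex
\min_{U \in \mathbb{R}^{d \times k},\ V \in \mathbb{R}^{k \times n}} \; \lVert X - UV \rVert_F^2,
\qquad k \ll d .
```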

Relaxed Clipping: A Global Training Method for Robust Regression and Classification

no code implementations • NeurIPS 2010 • Min Yang, Linli Xu, Martha White, Dale Schuurmans, Yao-Liang Yu

We present a generic procedure that can be applied to standard loss functions and demonstrate improved robustness in regression and classification problems.

Classification • General Classification +1

A General Projection Property for Distribution Families

no code implementations • NeurIPS 2009 • Yao-Liang Yu, Yuxi Li, Dale Schuurmans, Csaba Szepesvári

We prove that linear projections between distribution families with fixed first and second moments are surjective, regardless of dimension.
