Search Results for author: Jialei Wang

Found 23 papers, 1 paper with code

Graph Oracle Models, Lower Bounds, and Gaps for Parallel Stochastic Optimization

no code implementations NeurIPS 2018 Blake Woodworth, Jialei Wang, Adam Smith, Brendan McMahan, Nathan Srebro

We suggest a general oracle-based framework that captures different parallel stochastic optimization settings described by a dependency graph, and derive generic lower bounds in terms of this graph.

Stochastic Optimization

Distributed Stochastic Multi-Task Learning with Graph Regularization

no code implementations 11 Feb 2018 Weiran Wang, Jialei Wang, Mladen Kolar, Nathan Srebro

We propose methods for distributed graph-based multi-task learning that are based on weighted averaging of messages from other machines.

Multi-Task Learning
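The "weighted averaging of messages" idea can be sketched in a few lines. This is a generic decentralized-update sketch under assumed inputs (the mixing matrix `W`, learning rate, and per-machine gradients are illustrative), not the paper's exact algorithm:

```python
import numpy as np

def decentralized_round(W, params, grads, lr=0.1):
    """One round of graph-based distributed multi-task learning (sketch).

    Each machine takes a local gradient step on its own task, then replaces
    its parameters with a weighted average of its neighbors' messages.

    W      : (m, m) row-stochastic mixing matrix derived from the graph
    params : (m, d) array, one parameter vector per machine
    grads  : (m, d) array of local task gradients
    """
    local = params - lr * grads   # local update on each machine
    return W @ local              # weighted averaging of neighbors' messages
```

With zero gradients and a doubly stochastic `W`, repeated rounds pull the machines toward consensus, which is the graph-regularization effect the snippet alludes to.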

Gradient Sparsification for Communication-Efficient Distributed Optimization

no code implementations NeurIPS 2018 Jianqiao Wangni, Jialei Wang, Ji Liu, Tong Zhang

Modern large scale machine learning applications require stochastic optimization algorithms to be implemented on distributed computational architectures.

Distributed Optimization
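The technique named in the title amounts to randomly dropping gradient coordinates before communication while keeping the estimate unbiased. A minimal sketch of one standard magnitude-proportional scheme (the exact sampling probabilities in the paper may differ; `keep_frac` is an illustrative knob):

```python
import numpy as np

def sparsify_gradient(g, keep_frac, rng):
    """Unbiased gradient sparsification (illustrative sketch).

    Coordinate i survives with probability p_i (proportional to |g_i|,
    capped at 1) and is rescaled by 1/p_i, so E[output] = g while only
    about keep_frac of the coordinates need to be communicated.
    """
    p = np.minimum(1.0, keep_frac * g.size * np.abs(g) / np.abs(g).sum())
    mask = rng.random(g.size) < p
    out = np.zeros_like(g)
    out[mask] = g[mask] / p[mask]   # rescaling keeps the estimator unbiased
    return out
```

The rescaling is what lets downstream convergence analyses go through: the sparsified gradient is a noisier but unbiased substitute for the dense one.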

Improved Optimization of Finite Sums with Minibatch Stochastic Variance Reduced Proximal Iterations

no code implementations 21 Jun 2017 Jialei Wang, Tong Zhang

We present novel minibatch stochastic optimization methods for empirical risk minimization problems; the methods efficiently leverage variance-reduced first-order and sub-sampled higher-order information to accelerate convergence.

Stochastic Optimization

Exploiting Strong Convexity from Data with Primal-Dual First-Order Algorithms

no code implementations ICML 2017 Jialei Wang, Lin Xiao

We consider empirical risk minimization of linear predictors with convex loss functions.

Efficient coordinate-wise leading eigenvector computation

no code implementations 25 Feb 2017 Jialei Wang, Weiran Wang, Dan Garber, Nathan Srebro

We develop and analyze efficient "coordinate-wise" methods for finding the leading eigenvector, where each step involves only a vector-vector product.
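One way to realize such an update, sketched in the spirit of coordinate-wise power iteration (an illustrative rule, not necessarily the exact scheme analyzed in the paper): maintain the product z = Av, and move each coordinate toward z_i divided by the current Rayleigh quotient, so every step costs only vector-vector products.

```python
import numpy as np

def coordinate_leading_eigvec(A, sweeps=200, seed=0):
    """Leading-eigenvector sketch with coordinate-wise updates.

    Maintains z = A @ v so that each coordinate update costs O(n):
    one inner product for the Rayleigh quotient and one axpy to refresh z.
    (Illustrative update rule for a symmetric PSD matrix A.)
    """
    n = A.shape[0]
    rng = np.random.default_rng(seed)
    v = rng.standard_normal(n)
    z = A @ v                                # maintained product z = A v
    for _ in range(sweeps):
        for i in range(n):
            rho = (v @ z) / (v @ v)          # Rayleigh quotient estimate
            delta = z[i] / rho - v[i]        # move v_i toward (A v)_i / rho
            v[i] += delta
            z += delta * A[:, i]             # keep z = A v up to date
    return v / np.linalg.norm(v)
```

At the leading eigenvector, (Av)_i / rho = v_i for every i, so the iteration is stationary there; a spectral gap makes the per-sweep contraction geometric, much like full power iteration.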

Stochastic Canonical Correlation Analysis

no code implementations 21 Feb 2017 Chao Gao, Dan Garber, Nathan Srebro, Jialei Wang, Weiran Wang

We study the sample complexity of canonical correlation analysis (CCA), i.e., the number of samples needed to estimate the population canonical correlation and directions up to arbitrarily small error.

Stochastic Optimization

Memory and Communication Efficient Distributed Stochastic Optimization with Minibatch-Prox

no code implementations 21 Feb 2017 Jialei Wang, Weiran Wang, Nathan Srebro

We present and analyze an approach for distributed stochastic optimization which is statistically optimal and achieves near-linear speedups (up to logarithmic factors).

Stochastic Optimization

Rate Optimal Estimation and Confidence Intervals for High-dimensional Regression with Missing Covariates

no code implementations 9 Feb 2017 Yining Wang, Jialei Wang, Sivaraman Balakrishnan, Aarti Singh

We consider the problems of estimation and of constructing component-wise confidence intervals in a sparse high-dimensional linear regression model when some covariates of the design matrix are missing completely at random.

Sketching Meets Random Projection in the Dual: A Provable Recovery Algorithm for Big and High-dimensional Data

no code implementations 10 Oct 2016 Jialei Wang, Jason D. Lee, Mehrdad Mahdavi, Mladen Kolar, Nathan Srebro

Sketching techniques have become popular for scaling up machine learning algorithms by reducing the sample size or dimensionality of massive data sets, while still maintaining the statistical power of big data.

Warm Starting Bayesian Optimization

no code implementations 11 Aug 2016 Matthias Poloczek, Jialei Wang, Peter I. Frazier

We develop a framework for warm-starting Bayesian optimization that reduces the solution time required to solve an optimization problem that is one of a sequence of related problems.

Efficient Distributed Learning with Sparsity

no code implementations ICML 2017 Jialei Wang, Mladen Kolar, Nathan Srebro, Tong Zhang

We propose a novel, efficient approach for distributed sparse learning in high-dimensions, where observations are randomly partitioned across machines.

General Classification
Sparse Learning

A General Distributed Dual Coordinate Optimization Framework for Regularized Loss Minimization

no code implementations 13 Apr 2016 Shun Zheng, Jialei Wang, Fen Xia, Wei Xu, Tong Zhang

In modern large-scale machine learning applications, the training data are often partitioned and stored on multiple machines.

Efficient Globally Convergent Stochastic Optimization for Canonical Correlation Analysis

no code implementations NeurIPS 2016 Weiran Wang, Jialei Wang, Dan Garber, Nathan Srebro

We study the stochastic optimization of canonical correlation analysis (CCA), whose objective is nonconvex and does not decouple over training samples.

Stochastic Optimization

Distributed Multi-Task Learning with Shared Representation

no code implementations 7 Mar 2016 Jialei Wang, Mladen Kolar, Nathan Srebro

We study the problem of distributed multi-task learning with shared representation, where each machine aims to learn a separate but related task in an unknown shared low-dimensional subspace, i.e., when the predictor matrix has low rank.

Multi-Task Learning

Multi-Information Source Optimization

no code implementations NeurIPS 2017 Matthias Poloczek, Jialei Wang, Peter I. Frazier

We consider Bayesian optimization of an expensive-to-evaluate black-box objective function, where we also have access to cheaper approximations of the objective.

Parallel Bayesian Global Optimization of Expensive Functions

no code implementations 16 Feb 2016 Jialei Wang, Scott C. Clark, Eric Liu, Peter I. Frazier

We show that the resulting one-step Bayes-optimal algorithm for parallel global optimization finds high-quality solutions with fewer evaluations than a heuristic based on approximately maximizing the q-EI.

Global Optimization
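The q-EI mentioned above is the expected improvement from evaluating a batch of q points jointly; it has no convenient closed form for larger q, so a Monte Carlo estimate over the GP posterior is the usual fallback. A minimal sketch, assuming the posterior mean and covariance at the batch come from an already-fitted GP (minimization convention):

```python
import numpy as np

def q_ei_monte_carlo(mean, cov, best, n_samples=20000, seed=0):
    """Monte Carlo estimate of the q-EI of a batch of q candidate points.

    mean : (q,) GP posterior mean at the batch
    cov  : (q, q) GP posterior covariance over the batch
    best : best (lowest) objective value observed so far
    """
    rng = np.random.default_rng(seed)
    y = rng.multivariate_normal(mean, cov, size=n_samples)   # joint draws
    improvement = np.maximum(best - y.min(axis=1), 0.0)      # batch improvement
    return improvement.mean()
```

For q = 1 this reduces to ordinary expected improvement; batch Bayesian optimization picks the q points that jointly maximize this quantity.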

Reducing Runtime by Recycling Samples

no code implementations 5 Feb 2016 Jialei Wang, Hai Wang, Nathan Srebro

Contrary to the situation with stochastic gradient descent, we argue that when using stochastic methods with variance reduction, such as SDCA, SAG or SVRG, as well as their variants, it could be beneficial to reuse previously used samples instead of fresh samples, even when fresh samples are available.

Distributed Multitask Learning

no code implementations 2 Oct 2015 Jialei Wang, Mladen Kolar, Nathan Srebro

We present a communication-efficient estimator based on the debiased lasso and show that it is comparable with the optimal centralized method.

Multi-Task Learning

Bayesian optimization for materials design

1 code implementation 3 Jun 2015 Peter I. Frazier, Jialei Wang

We introduce Bayesian optimization, a technique developed for optimizing time-consuming engineering simulations and for fitting machine learning models on large datasets.

Inference for Sparse Conditional Precision Matrices

no code implementations 24 Dec 2014 Jialei Wang, Mladen Kolar

Given i.i.d. observations of a random vector $(X, Z)$, where $X$ is a high-dimensional vector and $Z$ is a low-dimensional index variable, we study the problem of estimating the conditional inverse covariance matrix $\Omega(z) = (E[(X-E[X \mid Z])(X-E[X \mid Z])^T \mid Z=z])^{-1}$ under the assumption that the set of non-zero elements is small and does not depend on the index variable.
