Search Results for author: Shusen Wang

Found 24 papers, 3 papers with code

Privacy-Preserving Distributed SVD via Federated Power

no code implementations • 1 Mar 2021 • Xiao Guo, Xiang Li, Xiangyu Chang, Shusen Wang, Zhihua Zhang

The low communication and computation power of such devices and the potential privacy breaches of users' sensitive data make computing the SVD challenging.

Federated Learning

Communication-Efficient Distributed SVD via Local Power Iterations

no code implementations • 19 Feb 2020 • Xiang Li, Shusen Wang, Kun Chen, Zhihua Zhang

As a practical surrogate of OPT, sign-fixing, which uses a diagonal matrix with $\pm 1$ entries as weights, has lower computational complexity and better stability in experiments.

Distributed Computing
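
A minimal sketch of the idea described above, with hypothetical names and a simplified sign-fixing step (here interpreted as aligning each worker's basis to a reference via diagonal $\pm 1$ weights before averaging); this illustrates the technique, not the paper's exact algorithm:

```python
import numpy as np

def local_power_svd(blocks, k, local_iters=4, rounds=10, seed=0):
    """Distributed power iterations with periodic averaging.

    Each worker holds a row block A_i of the data matrix; the goal is
    the top-k right singular subspace of the stacked matrix. Workers
    run a few local power iterations, then their bases are sign-aligned
    and averaged (one communication round).
    """
    d = blocks[0].shape[1]
    rng = np.random.default_rng(seed)
    Z, _ = np.linalg.qr(rng.standard_normal((d, k)))
    for _ in range(rounds):
        local_bases = []
        for A in blocks:
            Y = Z
            for _ in range(local_iters):
                Y, _ = np.linalg.qr(A.T @ (A @ Y))   # local power step
            local_bases.append(Y)
        # Sign-fixing: diagonal +/-1 weights align each local basis
        # with the first worker's basis before averaging.
        ref = local_bases[0]
        agg = np.zeros_like(ref)
        for Y in local_bases:
            signs = np.sign(np.sum(Y * ref, axis=0))
            signs[signs == 0] = 1.0
            agg += Y * signs
        Z, _ = np.linalg.qr(agg)
    return Z  # approximate top-k right singular vectors
```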

Fast Generalized Matrix Regression with Applications in Machine Learning

no code implementations • 27 Dec 2019 • Haishan Ye, Shusen Wang, Zhihua Zhang, Tong Zhang

Fast matrix algorithms have become fundamental tools of machine learning in the big-data era.

Graph Message Passing with Cross-location Attentions for Long-term ILI Prediction

no code implementations • 21 Dec 2019 • Songgaojun Deng, Shusen Wang, Huzefa Rangwala, Lijing Wang, Yue Ning

Forecasting influenza-like illness (ILI) is of prime importance to epidemiologists and health-care providers.

Time Series

Communication-Efficient Local Decentralized SGD Methods

no code implementations • 21 Oct 2019 • Xiang Li, Wenhao Yang, Shusen Wang, Zhihua Zhang

Recently, the technique of local updates has become a powerful tool for improving communication efficiency via periodic communication in centralized settings.

Distributed Computing
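
As an illustration of local updates combined with decentralized communication, here is a hedged sketch (all names are hypothetical; the paper's methods and step-size schedules differ): workers take several local SGD steps, then average with neighbors through a doubly stochastic mixing matrix instead of a central server.

```python
import numpy as np

def local_decentralized_sgd(grad_fn, datasets, mix, w0,
                            lr=0.05, rounds=50, local_steps=5):
    """Local SGD with gossip averaging: `mix` is a doubly stochastic
    matrix encoding the communication graph, and grad_fn(w, data)
    returns a stochastic gradient at w."""
    n = len(datasets)
    models = np.stack([w0.copy() for _ in range(n)])   # one row per worker
    for _ in range(rounds):
        for i in range(n):
            for _ in range(local_steps):               # local updates
                models[i] -= lr * grad_fn(models[i], datasets[i])
        models = mix @ models                           # periodic gossip round
    return models.mean(axis=0)
```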

Simple and Almost Assumption-Free Out-of-Sample Bound for Random Feature Mapping

no code implementations • 24 Sep 2019 • Shusen Wang

Our theories are based on weak yet valid assumptions.

Matrix Sketching for Secure Collaborative Machine Learning

no code implementations • 24 Sep 2019 • Mengjiao Zhang, Shusen Wang

Collaborative learning allows participants to jointly train a model without data sharing.

On the Convergence of FedAvg on Non-IID Data

1 code implementation • ICLR 2020 • Xiang Li, Kaixuan Huang, Wenhao Yang, Shusen Wang, Zhihua Zhang

In this paper, we analyze the convergence of FedAvg on non-iid data and establish a convergence rate of $\mathcal{O}(\frac{1}{T})$ for strongly convex and smooth problems, where $T$ is the number of SGD iterations.

Edge-computing • Federated Learning
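
For reference, a minimal sketch of the FedAvg procedure the paper analyzes (equal client weights and uniform client sampling assumed for simplicity; `grad_fn` is a hypothetical stochastic-gradient oracle):

```python
import numpy as np

def fedavg(grad_fn, clients, w0, lr=0.1, rounds=100,
           local_steps=5, sample_frac=0.5, seed=0):
    """Each round: sample a fraction of clients, run local SGD steps
    from the current global model, and average the returned models."""
    rng = np.random.default_rng(seed)
    w = w0.copy()
    m = max(1, int(sample_frac * len(clients)))
    for _ in range(rounds):
        chosen = rng.choice(len(clients), size=m, replace=False)
        updates = []
        for i in chosen:
            wi = w.copy()
            for _ in range(local_steps):      # local SGD steps
                wi -= lr * grad_fn(wi, clients[i])
            updates.append(wi)
        w = np.mean(updates, axis=0)          # server-side averaging
    return w
```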

Do Subsampled Newton Methods Work for High-Dimensional Data?

no code implementations • 13 Feb 2019 • Xiang Li, Shusen Wang, Zhihua Zhang

Subsampled Newton methods approximate Hessian matrices through subsampling techniques, alleviating the cost of forming Hessian matrices while retaining sufficient curvature information.

Distributed Optimization
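
To make the idea concrete, a sketch of one subsampled Newton step for regularized logistic regression (uniform row sampling shown here; the paper's analysis covers more refined schemes, and the names are illustrative):

```python
import numpy as np

def subsampled_newton_step(X, y, w, reg=1e-3, sample=256, seed=0):
    """Full gradient, subsampled Hessian: the Hessian is formed from a
    uniform row subsample, so its cost scales with the sample size
    rather than with n."""
    n, d = X.shape
    p = 1.0 / (1.0 + np.exp(-X @ w))              # sigmoid predictions
    grad = X.T @ (p - y) / n + reg * w            # exact gradient
    rng = np.random.default_rng(seed)
    idx = rng.choice(n, size=min(sample, n), replace=False)
    Xs, ps = X[idx], p[idx]
    D = ps * (1.0 - ps)                           # logistic curvature weights
    H = (Xs * D[:, None]).T @ Xs / len(idx) + reg * np.eye(d)
    return w - np.linalg.solve(H, grad)           # approximate Newton step
```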

OverSketch: Approximate Matrix Multiplication for the Cloud

1 code implementation • 6 Nov 2018 • Vipul Gupta, Shusen Wang, Thomas Courtade, Kannan Ramchandran

We propose OverSketch, an approximate algorithm for distributed matrix multiplication in serverless computing.

Distributed, Parallel, and Cluster Computing • Information Theory
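
The core of sketched matrix multiplication fits in a few lines: $AB$ is approximated by $(AS)(S^\top B)$ for a random sketching matrix $S$. OverSketch additionally replicates sketched blocks across serverless workers for straggler resilience; that layer is omitted in this simplified sketch.

```python
import numpy as np

def sketched_matmul(A, B, sketch_dim, seed=0):
    """Approximate A @ B via a count-sketch-style matrix S with one
    random +/-1 entry per row, so that E[S @ S.T] = I and the product
    (A @ S)(S.T @ B) is an unbiased estimate of A @ B."""
    n = A.shape[1]
    rng = np.random.default_rng(seed)
    cols = rng.integers(0, sketch_dim, size=n)     # random bucket per row
    signs = rng.choice([-1.0, 1.0], size=n)        # random signs
    S = np.zeros((n, sketch_dim))
    S[np.arange(n), cols] = signs
    return (A @ S) @ (S.T @ B)
```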

Error Estimation for Randomized Least-Squares Algorithms via the Bootstrap

no code implementations • ICML 2018 • Miles E. Lopes, Shusen Wang, Michael W. Mahoney

As a more practical alternative, we propose a bootstrap method to compute a posteriori error estimates for randomized LS algorithms.
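
A hedged sketch of the bootstrap idea (names are illustrative; the paper's estimator and error metric are more refined): given a sketched problem and its solution, resample rows of the sketch with replacement, re-solve, and read the error level off the spread of the resampled solutions.

```python
import numpy as np

def bootstrap_ls_error(Xs, ys, x_tilde, B=100, q=0.95, seed=0):
    """A posteriori error estimate for a randomized LS solution x_tilde
    computed from the sketched problem (Xs, ys)."""
    rng = np.random.default_rng(seed)
    m = Xs.shape[0]
    errs = []
    for _ in range(B):
        idx = rng.integers(0, m, size=m)           # bootstrap resample
        xb, *_ = np.linalg.lstsq(Xs[idx], ys[idx], rcond=None)
        errs.append(np.max(np.abs(xb - x_tilde)))  # max-coordinate error
    return np.quantile(errs, q)                    # estimated error level
```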

Efficient Data-Driven Geologic Feature Detection from Pre-stack Seismic Measurements using Randomized Machine-Learning Algorithm

no code implementations • 11 Oct 2017 • Youzuo Lin, Shusen Wang, Jayaraman Thiagarajan, George Guthrie, David Coblentz

We employ a data reduction technique in combination with the conventional kernel ridge regression method to improve the computational efficiency and reduce memory usage.
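
As a generic illustration of this kind of data reduction (not the paper's exact pipeline; landmark count, kernel, and parameters here are placeholders), Nyström-style kernel ridge regression replaces the full $n \times n$ kernel system with a small problem over $m$ landmark points:

```python
import numpy as np

def nystrom_krr(X, y, m, gamma=1.0, reg=1e-3, seed=0):
    """Reduced kernel ridge regression: sample m landmarks, build the
    kernel blocks K_nm and K_mm, and solve the m x m ridge system
    alpha = (K_nm^T K_nm + reg * K_mm)^{-1} K_nm^T y."""
    rng = np.random.default_rng(seed)
    landmarks = X[rng.choice(X.shape[0], size=m, replace=False)]

    def rbf(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * d2)

    Kmm = rbf(landmarks, landmarks)
    Knm = rbf(X, landmarks)
    alpha = np.linalg.solve(Knm.T @ Knm + reg * Kmm, Knm.T @ y)
    return lambda X_test: rbf(X_test, landmarks) @ alpha  # predictor
```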

GIANT: Globally Improved Approximate Newton Method for Distributed Optimization

no code implementations • NeurIPS 2018 • Shusen Wang, Farbod Roosta-Khorasani, Peng Xu, Michael W. Mahoney

For distributed computing environments, we consider the empirical risk minimization problem and propose a distributed, communication-efficient Newton-type optimization method.

Distributed Computing • Distributed Optimization
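
A minimal sketch of a GIANT-style iteration for ridge regression (simplified: the paper uses conjugate-gradient solves and covers general smooth losses): aggregate the exact global gradient, have each worker solve a Newton system with its local Hessian, and average the local directions.

```python
import numpy as np

def giant_step(blocks_X, blocks_y, w, reg=1e-3):
    """One communication-efficient approximate Newton step: two rounds
    of communication, one for the gradient and one for the averaged
    local Newton directions."""
    n = sum(X.shape[0] for X in blocks_X)
    d = w.shape[0]
    # Round 1: aggregate the exact global gradient.
    grad = sum(X.T @ (X @ w - y) for X, y in zip(blocks_X, blocks_y)) / n
    grad += reg * w
    # Round 2: average locally computed approximate Newton directions.
    directions = []
    for X in blocks_X:
        H_local = X.T @ X / X.shape[0] + reg * np.eye(d)
        directions.append(np.linalg.solve(H_local, grad))
    return w - np.mean(directions, axis=0)
```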

A Bootstrap Method for Error Estimation in Randomized Matrix Multiplication

no code implementations • 6 Aug 2017 • Miles E. Lopes, Shusen Wang, Michael W. Mahoney

In recent years, randomized methods for numerical linear algebra have received growing interest as a general approach to large-scale problems.

Dimensionality Reduction

Scalable Kernel K-Means Clustering with Nyström Approximation: Relative-Error Bounds

no code implementations • 9 Jun 2017 • Shusen Wang, Alex Gittens, Michael W. Mahoney

This work analyzes the application of this paradigm to kernel $k$-means clustering, and shows that applying the linear $k$-means clustering algorithm to $\frac{k}{\epsilon} (1 + o(1))$ features constructed using a so-called rank-restricted Nyström approximation results in cluster assignments that satisfy a $1 + \epsilon$ approximation ratio in terms of the kernel $k$-means cost function, relative to the guarantee provided by the same algorithm without the use of the Nyström method.
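
The feature construction behind this result can be sketched as follows (simplified: the theory calls for roughly $\frac{k}{\epsilon}$ features, while this illustration keeps just the top $k$; inputs are the kernel blocks between the data and a set of landmarks):

```python
import numpy as np

def nystrom_kmeans_features(K_nm, K_mm, k, tol=1e-12):
    """Rank-restricted Nyström features: factor K_mm, keep the top-k
    eigenpairs, and map each point to K_nm @ V_k diag(vals_k)^{-1/2}.
    Running ordinary (linear) k-means on these rows approximates
    kernel k-means on the full kernel matrix."""
    vals, vecs = np.linalg.eigh(K_mm)
    top = np.argsort(vals)[::-1][:k]                    # rank restriction
    inv_sqrt = vecs[:, top] / np.sqrt(np.maximum(vals[top], tol))
    return K_nm @ inv_sqrt   # feed these features to linear k-means
```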

A Practical Guide to Randomized Matrix Computations with MATLAB Implementations

1 code implementation • 28 May 2015 • Shusen Wang

In recent years, a number of randomized algorithms have been devised to make matrix computations more scalable.

Towards More Efficient SPSD Matrix Approximation and CUR Matrix Decomposition

no code implementations • 29 Mar 2015 • Shusen Wang, Zhihua Zhang, Tong Zhang

The Nystr\"om method is a special instance of our fast model and is approximation to the prototype model.

SPSD Matrix Approximation via Column Selection: Theories, Algorithms, and Extensions

no code implementations • 22 Jun 2014 • Shusen Wang, Luo Luo, Zhihua Zhang

In this paper we conduct in-depth studies of an SPSD matrix approximation model and establish strong relative-error bounds.

Efficient Algorithms and Error Analysis for the Modified Nyström Method

no code implementations • 1 Apr 2014 • Shusen Wang, Zhihua Zhang

Recently, a variant of the Nyström method called the modified Nyström method has demonstrated significant improvement over the standard Nyström method in approximation accuracy, both theoretically and empirically.
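
The contrast between the two variants fits in a few lines (a sketch over a given index set; how `idx` is chosen is the subject of the paper): the standard method uses the middle matrix $W^{\dagger}$, while the modified method uses $C^{\dagger} K (C^{\dagger})^\top$, which is more accurate but costlier to form.

```python
import numpy as np

def nystrom_variants(K, idx):
    """Standard vs. modified Nyström approximation of an SPSD matrix K
    from a set of sampled column indices idx."""
    C = K[:, idx]                      # sampled columns
    W = K[np.ix_(idx, idx)]            # intersection block
    standard = C @ np.linalg.pinv(W) @ C.T
    Cp = np.linalg.pinv(C)
    modified = C @ (Cp @ K @ Cp.T) @ C.T   # optimal middle matrix in F-norm
    return standard, modified
```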

Sharpened Error Bounds for Random Sampling Based $\ell_2$ Regression

no code implementations • 30 Mar 2014 • Shusen Wang

Given a data matrix $X \in \mathbb{R}^{n\times d}$ and a response vector $y \in \mathbb{R}^{n}$ with $n > d$, solving the least squares regression (LSR) problem costs $O(n d^2)$ time and $O(n d)$ space.
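
A minimal sketch of the sampling approach (uniform sampling shown here; the leverage-score sampling analyzed in the paper gives the sharper bounds): solve a small $m \times d$ subproblem in $O(m d^2)$ time instead of $O(n d^2)$.

```python
import numpy as np

def sampled_least_squares(X, y, m, seed=0):
    """Random-sampling LSR: draw m << n rows and solve the reduced
    problem; the solution approximates the full least-squares fit."""
    rng = np.random.default_rng(seed)
    idx = rng.choice(X.shape[0], size=m, replace=False)
    w, *_ = np.linalg.lstsq(X[idx], y[idx], rcond=None)
    return w
```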

Improving CUR Matrix Decomposition and the Nyström Approximation via Adaptive Sampling

no code implementations • 18 Mar 2013 • Shusen Wang, Zhihua Zhang

The CUR matrix decomposition and the Nyström approximation are two important low-rank matrix approximation techniques.

A Scalable CUR Matrix Decomposition Algorithm: Lower Time Complexity and Tighter Bound

no code implementations • NeurIPS 2012 • Shusen Wang, Zhihua Zhang

The CUR matrix decomposition is an important extension of the Nyström approximation to general matrices.
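
For reference, the generic CUR construction (index selection, the focus of the paper, is left abstract here): pick actual columns $C$ and rows $R$ of $A$, and set $U = C^{\dagger} A R^{\dagger}$, the optimal middle matrix in Frobenius norm for fixed $C$ and $R$.

```python
import numpy as np

def cur_decomposition(A, row_idx, col_idx):
    """CUR approximation built from actual rows and columns of A,
    so the factors inherit properties such as sparsity from A."""
    C = A[:, col_idx]
    R = A[row_idx, :]
    U = np.linalg.pinv(C) @ A @ np.linalg.pinv(R)
    return C, U, R   # A is approximated by C @ U @ R
```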
