Search Results for author: Shanshan Wu

Found 10 papers, 7 with code

Profit: Benchmarking Personalization and Robustness Trade-off in Federated Prompt Tuning

no code implementations 6 Oct 2023 Liam Collins, Shanshan Wu, Sewoong Oh, Khe Chai Sim

In many applications of federated learning (FL), clients desire models that are personalized using their local data, yet are also robust in the sense that they retain general global knowledge.

Benchmarking · Federated Learning +2

Federated Reconstruction: Partially Local Federated Learning

3 code implementations NeurIPS 2021 Karan Singhal, Hakim Sidahmed, Zachary Garrett, Shanshan Wu, Keith Rush, Sushant Prakash

We also describe the successful deployment of this approach at scale for federated collaborative filtering in a mobile keyboard application.

Collaborative Filtering · Federated Learning +1
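
The partially local recipe is easy to sketch: in the toy federated matrix-factorization round below, the item matrix V is the global model, while each client's user embedding stays on-device and is reconstructed from scratch every round instead of ever being uploaded. All sizes, the learning rate, and the round count are illustrative choices, not the deployed system's.

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy partially local FL round for collaborative filtering (illustrative
# sketch, not the deployed system): the item matrix V is shared globally,
# while each client's user embedding is private and is reconstructed from
# local data every round rather than ever being sent to the server.
n_clients, n_items, r = 20, 30, 4
ratings = rng.normal(size=(n_clients, r)) @ rng.normal(size=(r, n_items))

V = rng.normal(size=(r, n_items))              # global parameters (server-side)

def residual(V):
    """Total squared reconstruction error with per-client embeddings refit."""
    res = 0.0
    for c in range(n_clients):
        u = np.linalg.lstsq(V.T, ratings[c], rcond=None)[0]
        res += np.linalg.norm(u @ V - ratings[c]) ** 2
    return res

err0 = residual(V)
for _ in range(500):                           # federated rounds
    grad = np.zeros_like(V)
    for c in range(n_clients):
        # local step 1: reconstruct the private user embedding with V frozen
        u = np.linalg.lstsq(V.T, ratings[c], rcond=None)[0]
        # local step 2: gradient w.r.t. the global part with u fixed;
        # only this update ever leaves the device
        grad += np.outer(u, u @ V - ratings[c])
    V -= 0.1 * grad / n_clients                # server aggregates client updates
err1 = residual(V)
```

Only the update to V crosses the network; the user representation never does, which is what makes it private to the device.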

Learning Distributions Generated by One-Layer ReLU Networks

1 code implementation NeurIPS 2019 Shanshan Wu, Alexandros G. Dimakis, Sujay Sanghavi

We give a simple algorithm to estimate the parameters (i.e., the weight matrix and bias vector of the ReLU neural network) up to an error $\epsilon||W||_F$ using $\tilde{O}(1/\epsilon^2)$ samples and $\tilde{O}(d^2/\epsilon^2)$ time (log factors are ignored for simplicity).
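
One ingredient of this setting can be illustrated directly (a sketch of the marginal step only, not the paper's full algorithm, which also recovers the directions of the rows of $W$): with Gaussian input, each output coordinate of $\mathrm{ReLU}(Wx + b)$ is a rectified Gaussian, so $||w_i||$ and $b_i$ follow from the probability of observing zero and the sample mean. The true parameter values below are illustrative.

```python
import math
import numpy as np

rng = np.random.default_rng(3)

# One output coordinate of y = ReLU(w.x + b) with x ~ N(0, I): the
# pre-activation z = w.x + b is N(b, sigma^2) with sigma = ||w||, so the
# observed coordinate is a rectified Gaussian. Method-of-moments recovery
# of (sigma, b) from samples of y (true values are illustrative choices):
sigma_true, b_true = 2.0, -0.5
y = np.maximum(rng.normal(b_true, sigma_true, size=200_000), 0.0)

Phi = lambda t: 0.5 * (1.0 + math.erf(t / math.sqrt(2.0)))   # Gaussian CDF
phi = lambda t: math.exp(-0.5 * t * t) / math.sqrt(2.0 * math.pi)

# P(y = 0) = Phi(-b/sigma)  =>  recover alpha = b/sigma by bisection.
p0 = np.mean(y == 0.0)
lo, hi = -10.0, 10.0
for _ in range(80):
    mid = 0.5 * (lo + hi)
    if Phi(-mid) > p0:
        lo = mid
    else:
        hi = mid
alpha = 0.5 * (lo + hi)

# E[y] = sigma * (phi(alpha) + alpha * Phi(alpha))  =>  recover sigma, then b.
sigma_hat = float(np.mean(y)) / (phi(alpha) + alpha * Phi(alpha))
b_hat = alpha * sigma_hat
```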

Sparse Logistic Regression Learns All Discrete Pairwise Graphical Models

1 code implementation NeurIPS 2019 Shanshan Wu, Sujay Sanghavi, Alexandros G. Dimakis

We show that this algorithm can recover any arbitrary discrete pairwise graphical model, and also characterize its sample complexity as a function of model width, alphabet size, edge parameter accuracy, and the number of variables.
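
The per-node procedure is simple to sketch. Assuming a toy chain Ising model and hand-picked regularization and threshold values (all illustrative), fit an $\ell_1$-regularized logistic regression of each variable against all the others and keep the neighbors with large weights:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy chain Ising model on 4 spins with edges (0,1), (1,2), (2,3) and
# coupling 0.8; model size and constants are illustrative choices.
d, theta = 4, 0.8
edges = {(0, 1), (1, 2), (2, 3)}

# Exact sampling: enumerate all 2^d configurations, draw from the Gibbs law.
configs = np.array([[1 if (s >> i) & 1 else -1 for i in range(d)]
                    for s in range(2 ** d)])
energy = np.zeros(len(configs))
for (i, j) in edges:
    energy += theta * configs[:, i] * configs[:, j]
p = np.exp(energy)
p /= p.sum()
X = configs[rng.choice(len(configs), size=4000, p=p)]

def l1_logreg(A, y, lam=0.01, lr=0.5, iters=500):
    """l1-regularized logistic regression via proximal gradient (ISTA)."""
    n, k = A.shape
    w, b = np.zeros(k), 0.0
    for _ in range(iters):
        z = 1.0 / (1.0 + np.exp(-(A @ w + b)))
        w -= lr * (A.T @ (z - y) / n)
        b -= lr * np.mean(z - y)
        w = np.sign(w) * np.maximum(np.abs(w) - lr * lam, 0.0)  # soft-threshold
    return w

# Per-node regression: predict spin i from the other spins; large weights
# indicate neighbors in the graph.
recovered = set()
for i in range(d):
    others = [j for j in range(d) if j != i]
    w = l1_logreg(X[:, others], (X[:, i] + 1) // 2)
    for j, wj in zip(others, w):
        if abs(wj) > 0.3:
            recovered.add((min(i, j), max(i, j)))

print(sorted(recovered))
```

With enough samples relative to the coupling strength, the recovered edge set matches the true chain.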


Learning a Compressed Sensing Measurement Matrix via Gradient Unrolling

1 code implementation 26 Jun 2018 Shanshan Wu, Alexandros G. Dimakis, Sujay Sanghavi, Felix X. Yu, Daniel Holtmann-Rice, Dmitry Storcheus, Afshin Rostamizadeh, Sanjiv Kumar

Our experiments show that there is indeed additional structure beyond sparsity in the real datasets; our method is able to discover it and exploit it to create excellent reconstructions with fewer measurements (by a factor of 1.1-3x) compared to the previous state-of-the-art methods.

Extreme Multi-Label Classification · Multi-Label Learning +1
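
The "structure beyond sparsity" point can be illustrated without the paper's machinery. Below, a crude data-adapted measurement matrix (top PCA directions, a much simpler stand-in for the learned, unrolled autoencoder) beats a generic Gaussian one on sparse vectors whose support is confined to a few coordinates; the sizes and the PCA stand-in are my own illustrative choices, not the paper's method.

```python
import numpy as np

rng = np.random.default_rng(5)

# Data with structure beyond plain sparsity: sparse vectors whose support
# always lies in the first 8 of 50 coordinates.
n, d, m = 1000, 50, 8
X = np.zeros((n, d))
X[:, :8] = rng.normal(size=(n, 8)) * (rng.random((n, 8)) < 0.4)

def recon_err(A, X):
    # least-squares decoding x_hat = pinv(A) (A x); crude, but enough to compare
    X_hat = X @ A.T @ np.linalg.pinv(A.T)
    return np.linalg.norm(X - X_hat) / np.linalg.norm(X)

A_rand = rng.normal(size=(m, d))                 # generic Gaussian measurements
_, _, Vt = np.linalg.svd(X, full_matrices=False)
A_pca = Vt[:m]                                   # measurements adapted to the data

err_rand, err_pca = recon_err(A_rand, X), recon_err(A_pca, X)
```

At the same measurement budget m, the data-adapted matrix reconstructs nearly perfectly while the random one loses most of the signal, because the adapted rows concentrate on the subspace the data actually occupies.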

Leveraging Sparsity for Efficient Submodular Data Summarization

no code implementations NeurIPS 2016 Erik M. Lindgren, Shanshan Wu, Alexandros G. Dimakis

The facility location problem is widely used for summarizing large datasets and has additional applications in sensor placement, image retrieval, and clustering.

Clustering · Data Summarization +2
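
The facility-location objective is monotone submodular, so the classic greedy algorithm carries a $(1 - 1/e)$ approximation guarantee. A minimal sketch on synthetic similarities (the data and the similarity function are illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy similarity matrix: sim[i, j] = similarity of candidate facility i
# to data point j (inverse-distance similarity, an illustrative choice).
n_candidates, n_points = 30, 200
pts = rng.normal(size=(n_points, 2))
cand = pts[:n_candidates]
sim = 1.0 / (1.0 + np.linalg.norm(cand[:, None, :] - pts[None, :, :], axis=2))

def greedy_facility_location(sim, k):
    """Standard greedy for the monotone submodular facility-location objective
    f(S) = sum_j max_{i in S} sim[i, j]; returns k chosen facilities and f(S)."""
    best = np.zeros(sim.shape[1])          # current max similarity per data point
    chosen = []
    for _ in range(k):
        # marginal gain of each candidate: how much it raises per-point maxima
        gains = np.maximum(sim, best).sum(axis=1) - best.sum()
        i = int(np.argmax(gains))
        chosen.append(i)
        best = np.maximum(best, sim[i])
    return chosen, best.sum()

chosen, value = greedy_facility_location(sim, k=5)
```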

Single Pass PCA of Matrix Products

1 code implementation NeurIPS 2016 Shanshan Wu, Srinadh Bhojanapalli, Sujay Sanghavi, Alexandros G. Dimakis

In this paper we present a new algorithm for computing a low-rank approximation of the product $A^TB$ by taking only a single pass of the two matrices $A$ and $B$.
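
A hedged sketch of the single-pass idea, using a generic two-sided randomized sketch (in the style of standard sketching algorithms, not necessarily the paper's exact method): accumulate $Y = (A^TB)\Omega$ and $W = \Psi(A^TB)$ in one streaming pass over the rows of $A$ and $B$, then combine them into a low-rank factorization.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic low-rank-plus-noise setting: A (n x d1), B (n x d2), target
# M = A^T B of effective rank r. All sizes are illustrative choices.
n, d1, d2, r = 2000, 50, 40, 5
U = rng.normal(size=(n, r))
A = U @ rng.normal(size=(r, d1)) + 0.01 * rng.normal(size=(n, d1))
B = U @ rng.normal(size=(r, d2)) + 0.01 * rng.normal(size=(n, d2))

def single_pass_lowrank_atb(A, B, k, chunk=200, oversample=10):
    """Low-rank approximation of A^T B (rank at most the sketch size) from
    one streaming pass over the rows, via two random sketches."""
    d1, d2 = A.shape[1], B.shape[1]
    ell = k + oversample
    Omega = rng.normal(size=(d2, k + oversample // 2))   # right sketch
    Psi = rng.normal(size=(ell, d1))                     # left sketch
    Y = np.zeros((d1, Omega.shape[1]))
    W = np.zeros((ell, d2))
    for s in range(0, A.shape[0], chunk):                # the single pass
        Ac, Bc = A[s:s + chunk], B[s:s + chunk]
        Y += Ac.T @ (Bc @ Omega)                         # Y = (A^T B) Omega
        W += (Ac @ Psi.T).T @ Bc                         # W = Psi (A^T B)
    Q, _ = np.linalg.qr(Y)
    return Q @ (np.linalg.pinv(Psi @ Q) @ W)

M = A.T @ B                                  # formed only to measure the error
M_hat = single_pass_lowrank_atb(A, B, k=r)
err = np.linalg.norm(M - M_hat) / np.linalg.norm(M)
```

The sketches Y and W occupy only O((d1 + d2) x sketch size) memory, so the d1 x d2 product is never materialized during the pass.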
