Search Results for author: Yue Niu

Found 9 papers, 2 papers with code

Edge Private Graph Neural Networks with Singular Value Perturbation

no code implementations • 16 Mar 2024 • Tingting Tang, Yue Niu, Salman Avestimehr, Murali Annavaram

Eclipse adds noise to the low-rank singular values instead of the entire graph, thereby preserving graph privacy while retaining enough of the graph structure to maintain model utility.
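A minimal sketch of the singular-value-perturbation idea, assuming a dense adjacency matrix; the rank `k` and noise scale `sigma` are illustrative placeholders, not the paper's calibrated privacy parameters.

```python
import numpy as np

def eclipse_perturb(adj: np.ndarray, k: int = 8, sigma: float = 0.1) -> np.ndarray:
    # Low-rank SVD of the graph adjacency matrix.
    u, s, vt = np.linalg.svd(adj, full_matrices=False)
    # Perturb only the top-k singular values, not the entire graph.
    s_noisy = s[:k] + np.random.normal(scale=sigma, size=k)
    # Rebuild a rank-k adjacency that keeps the coarse graph structure.
    return (u[:, :k] * s_noisy) @ vt[:k, :]

adj = np.random.rand(16, 16)
adj = (adj + adj.T) / 2          # symmetric toy graph
priv_adj = eclipse_perturb(adj)
```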

Privacy Preserving

Ethos: Rectifying Language Models in Orthogonal Parameter Space

no code implementations • 13 Mar 2024 • Lei Gao, Yue Niu, Tingting Tang, Salman Avestimehr, Murali Annavaram

Evaluations show that Ethos is more effective at removing undesired knowledge while maintaining overall model performance than current task arithmetic methods.
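For context, a minimal sketch of the task-arithmetic baseline the evaluation compares against: undesired knowledge is forgotten by subtracting a scaled "task vector". Ethos itself additionally operates in an orthogonal (SVD-based) parameter space, which this sketch does not reproduce; all values are toy.

```python
import numpy as np

def negate_task(theta_base, theta_ft, alpha: float = 1.0):
    task_vector = theta_ft - theta_base      # direction encoding the undesired knowledge
    return theta_base - alpha * task_vector  # subtract it to "forget"

theta_base = np.random.randn(4, 4)                    # pretrained weights (toy)
theta_ft = theta_base + 0.1 * np.random.randn(4, 4)   # fine-tuned on undesired data
theta_edited = negate_task(theta_base, theta_ft)
```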

Memorization

ATP: Enabling Fast LLM Serving via Attention on Top Principal Keys

no code implementations • 1 Mar 2024 • Yue Niu, Saurav Prakash, Salman Avestimehr

In particular, ATP barely loses accuracy with only $1/2$ of the principal keys, and incurs only around a $2\%$ accuracy drop with $1/4$ of the principal keys.
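A minimal sketch of attention computed against the top principal components of the key matrix; `r = d // 2` mirrors the "$1/2$ principal keys" setting. This illustrates the idea only, not the paper's serving kernel.

```python
import numpy as np

def atp_attention(q, k, v, r):
    # Principal directions of the key matrix via SVD.
    _, _, vt = np.linalg.svd(k, full_matrices=False)
    p = vt[:r].T                                 # (d, r) projection onto top-r components
    scores = (q @ p) @ (k @ p).T / np.sqrt(r)    # attention scores in the reduced space
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))
    return (w / w.sum(axis=-1, keepdims=True)) @ v

n, d = 32, 16
q, k, v = (np.random.randn(n, d) for _ in range(3))
out = atp_attention(q, k, v, r=d // 2)           # attend with 1/2 of the principal keys
```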

All Rivers Run to the Sea: Private Learning with Asymmetric Flows

no code implementations • 5 Dec 2023 • Yue Niu, Ramy E. Ali, Saurav Prakash, Salman Avestimehr

The main part flows into a small model while the residuals are offloaded to a large model.
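A minimal sketch of that split, assuming a plain SVD decomposition: a low-rank "main" part stays with the small (private) model and the residual is offloaded. The rank `k` is illustrative and the paper's privacy accounting is omitted.

```python
import numpy as np

def asymmetric_split(x: np.ndarray, k: int = 4):
    u, s, vt = np.linalg.svd(x, full_matrices=False)
    main = (u[:, :k] * s[:k]) @ vt[:k, :]   # information-rich low-rank part (small model)
    residual = x - main                     # residuals, offloaded to the large model
    return main, residual

x = np.random.randn(32, 16)
main, residual = asymmetric_split(x)
assert np.allclose(main + residual, x)      # the two flows recombine exactly
```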

Quantization

mL-BFGS: A Momentum-based L-BFGS for Distributed Large-Scale Neural Network Optimization

no code implementations • 25 Jul 2023 • Yue Niu, Zalan Fabian, Sunwoo Lee, Mahdi Soltanolkotabi, Salman Avestimehr

Quasi-Newton methods still face significant challenges in training large-scale neural networks due to the added compute cost of Hessian-related computations and instability in stochastic training.

Stochastic Optimization

Federated Learning of Large Models at the Edge via Principal Sub-Model Training

1 code implementation • 28 Aug 2022 • Yue Niu, Saurav Prakash, Souvik Kundu, Sunwoo Lee, Salman Avestimehr

However, the heterogeneous-client setting requires some clients to train the full model, which is not aligned with the resource-constrained setting, while the latter approaches break FL's privacy promises when sharing intermediate representations or labels with the server.

Federated Learning

Lottery Aware Sparsity Hunting: Enabling Federated Learning on Resource-Limited Edge

1 code implementation • 27 Aug 2022 • Sara Babakniya, Souvik Kundu, Saurav Prakash, Yue Niu, Salman Avestimehr

A possible solution to this problem is to use off-the-shelf sparse learning algorithms at the clients to meet their resource budgets.
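A minimal sketch of that off-the-shelf approach: each client keeps only the largest-magnitude weights that fit its own density budget. The budget value is illustrative; the paper's lottery-aware method builds on top of this.

```python
import numpy as np

def sparsify(w: np.ndarray, density: float) -> np.ndarray:
    # Keep only the k largest-magnitude weights allowed by the client's budget.
    k = max(1, int(density * w.size))
    thresh = np.sort(np.abs(w), axis=None)[-k]
    return w * (np.abs(w) >= thresh)

w = np.random.randn(128, 64)
w_client = sparsify(w, density=0.1)   # a client with a 10% parameter budget
```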

Federated Learning · Model Compression +1

SLIM-QN: A Stochastic, Light, Momentumized Quasi-Newton Optimizer for Deep Neural Networks

no code implementations • 29 Sep 2021 • Yue Niu, Zalan Fabian, Sunwoo Lee, Mahdi Soltanolkotabi, Salman Avestimehr

SLIM-QN addresses two key barriers in existing second-order methods for large-scale DNNs: 1) the high computational cost of obtaining the Hessian matrix and its inverse in every iteration (e.g., KFAC); 2) convergence instability due to stochastic training (e.g., L-BFGS).
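A minimal sketch in the spirit of SLIM-QN's two remedies, under assumed hyperparameters: curvature pairs are built from momentum-smoothed gradients (for stability), and the Hessian approximation is refreshed only every T steps (to amortize cost). This is a toy reconstruction, not the paper's algorithm.

```python
import numpy as np
from collections import deque

def two_loop(grad, pairs):
    # Standard L-BFGS two-loop recursion: approximates H^{-1} @ grad.
    q, alphas = grad.copy(), []
    for s, y in reversed(pairs):
        a = (s @ q) / (y @ s)
        alphas.append(a)
        q -= a * y
    if pairs:
        s, y = pairs[-1]
        q *= (s @ y) / (y @ y)                # initial Hessian scaling
    for (s, y), a in zip(pairs, reversed(alphas)):
        q += (a - (y @ q) / (y @ s)) * s
    return q

def train(theta, grad_fn, steps=100, lr=0.1, beta=0.9, T=10, m=5):
    pairs, g_mom = deque(maxlen=m), np.zeros_like(theta)
    th_prev = g_prev = None
    for t in range(steps):
        g_mom = beta * g_mom + (1 - beta) * grad_fn(theta)   # momentum smoothing
        if t % T == 0:                        # refresh curvature only every T steps
            if th_prev is not None:
                s, y = theta - th_prev, g_mom - g_prev
                if s @ y > 1e-10:             # curvature check for stability
                    pairs.append((s, y))
            th_prev, g_prev = theta.copy(), g_mom.copy()
        theta = theta - lr * two_loop(g_mom, list(pairs))
    return theta

theta = train(np.ones(4), grad_fn=lambda th: 2 * th)   # minimize ||theta||^2
```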

Second-order methods

SPEC2: SPECtral SParsE CNN Accelerator on FPGAs

no code implementations • 16 Oct 2019 • Yue Niu, Hanqing Zeng, Ajitesh Srivastava, Kartik Lakhotia, Rajgopal Kannan, Yanzhi Wang, Viktor Prasanna

On the other hand, weight pruning techniques address the redundancy in model parameters by converting dense convolutional kernels into sparse ones.
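A minimal sketch of spectral-domain sparsification in the spirit of SPEC2: the kernel's FFT spectrum is pruned by magnitude, and (circular) convolution becomes a pointwise multiply. Sizes and the 75% pruning threshold are illustrative; the FPGA dataflow is not modeled.

```python
import numpy as np

x = np.random.randn(32, 32)                  # input feature map
k = np.random.randn(3, 3)                    # dense convolutional kernel

K = np.fft.rfft2(k, s=x.shape)                         # kernel spectrum, zero-padded
K *= np.abs(K) >= np.percentile(np.abs(K), 75)         # prune 75% of spectral weights
y = np.fft.irfft2(np.fft.rfft2(x) * K, s=x.shape)      # sparse spectral (circular) convolution
print(f"spectral nnz: {np.count_nonzero(K) / K.size:.0%}")
```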
