Search Results for author: Jiaming Yang

Found 10 papers, 3 papers with code

Precision Neural Network Quantization via Learnable Adaptive Modules

no code implementations • 24 Apr 2025 • Wenqiang Zhou, Zhendong Yu, Xinyu Liu, Jiaming Yang, Rong Xiao, Tao Wang, Chenwei Tang, Jiancheng Lv

Quantization Aware Training (QAT) is a neural network quantization technique that reduces model size and improves operational efficiency while effectively maintaining model performance.

Computational Efficiency • Quantization
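
For orientation, a minimal NumPy sketch of the quantize-dequantize ("fake quantization") step that QAT simulates in the forward pass; this is the generic baseline operation, not the learnable adaptive modules this paper proposes.

    import numpy as np

    def fake_quantize(w, num_bits=8):
        # Quantize-dequantize: simulate low-precision integer arithmetic in float.
        qmin, qmax = 0, 2 ** num_bits - 1
        scale = (w.max() - w.min()) / (qmax - qmin)      # quantization step size
        zero_point = np.round(qmin - w.min() / scale)    # integer offset
        q = np.clip(np.round(w / scale + zero_point), qmin, qmax)
        return (q - zero_point) * scale                  # back to float

    w = np.random.randn(4, 4).astype(np.float32)
    print(np.abs(w - fake_quantize(w)).max())  # error on the order of one step size

During training, a straight-through estimator is typically used to pass gradients through the rounding so the weights remain trainable.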

Randomized Kaczmarz Methods with Beyond-Krylov Convergence

1 code implementation • 20 Jan 2025 • Michał Dereziński, Deanna Needell, Elizaveta Rebrova, Jiaming Yang

In this paper, we introduce Kaczmarz++, an accelerated randomized block Kaczmarz algorithm that exploits outlying singular values in the input to attain a fast Krylov-style convergence.

subspace methods
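
For reference, a minimal NumPy sketch of the classical randomized block Kaczmarz iteration that Kaczmarz++ builds on and accelerates; the paper's adaptive block selection, acceleration, and regularization are not reproduced here.

    import numpy as np

    def randomized_block_kaczmarz(A, b, block_size=10, iters=500, seed=0):
        # At each step, project the iterate onto the solution set of a
        # randomly chosen block of rows of the (consistent) system Ax = b.
        rng = np.random.default_rng(seed)
        m, n = A.shape
        x = np.zeros(n)
        for _ in range(iters):
            rows = rng.choice(m, size=block_size, replace=False)
            A_blk, b_blk = A[rows], b[rows]
            # minimum-norm solve = orthogonal projection onto {x : A_blk x = b_blk}
            x += np.linalg.lstsq(A_blk, b_blk - A_blk @ x, rcond=None)[0]
        return x

    rng = np.random.default_rng(1)
    A = rng.standard_normal((200, 50))
    x_true = rng.standard_normal(50)
    print(np.linalg.norm(randomized_block_kaczmarz(A, A @ x_true) - x_true))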

Have ASkotch: A Neat Solution for Large-scale Kernel Ridge Regression

no code implementations • 14 Jul 2024 • Pratik Rathore, Zachary Frangella, Jiaming Yang, Michał Dereziński, Madeleine Udell

ASkotch outperforms state-of-the-art KRR solvers on a testbed of 23 large-scale regression and classification tasks derived from a wide range of application domains, demonstrating the superiority of full KRR over inducing points KRR.

Computational chemistry • Point Processes • +1

Faster Linear Systems and Matrix Norm Approximation via Multi-level Sketched Preconditioning

no code implementations • 9 May 2024 • Michał Dereziński, Christopher Musco, Jiaming Yang

Our methods are based on constructing a low-rank Nyström approximation to $A$ using sparse random matrix sketching.
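
A generic Nyström approximation built from a random test matrix, sketched here with a dense Gaussian sketch for simplicity; the paper's sparse sketching and multi-level preconditioning are not reproduced.

    import numpy as np

    def nystrom_approx(A, rank, seed=0):
        # Nystrom approximation of a PSD matrix: A ~ (A S) (S^T A S)^+ (A S)^T
        rng = np.random.default_rng(seed)
        S = rng.standard_normal((A.shape[0], rank))   # random test matrix
        Y = A @ S                                     # n x rank
        return Y @ np.linalg.pinv(S.T @ Y) @ Y.T

    rng = np.random.default_rng(2)
    Q, _ = np.linalg.qr(rng.standard_normal((300, 300)))
    A = Q @ np.diag(1.0 / np.arange(1, 301) ** 2) @ Q.T   # PSD, fast spectral decay
    print(np.linalg.norm(A - nystrom_approx(A, rank=30), 2))  # small approximation error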

HERTA: A High-Efficiency and Rigorous Training Algorithm for Unfolded Graph Neural Networks

no code implementations • 26 Mar 2024 • Yongyi Yang, Jiaming Yang, Wei Hu, Michał Dereziński

In this paper, we propose HERTA: a High-Efficiency and Rigorous Training Algorithm for Unfolded GNNs that accelerates the whole training process, achieving a nearly-linear time worst-case training guarantee.

Solving Dense Linear Systems Faster Than via Preconditioning

no code implementations • 14 Dec 2023 • Michał Dereziński, Jiaming Yang

We give a stochastic optimization algorithm that solves a dense $n\times n$ real-valued linear system $Ax=b$, returning $\tilde x$ such that $\|A\tilde x-b\|\leq \epsilon\|b\|$ in time: $$\tilde O((n^2+nk^{\omega-1})\log 1/\epsilon),$$ where $k$ is the number of singular values of $A$ larger than $O(1)$ times its smallest positive singular value, $\omega < 2.372$ is the matrix multiplication exponent, and $\tilde O$ hides a poly-logarithmic in $n$ factor.

Stochastic Optimization
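
Two limiting cases make the stated bound concrete (a direct substitution into the runtime above, not an additional claim from the paper): with $k = O(1)$ the cost is $\tilde O(n^2\log 1/\epsilon)$, on the order of reading the matrix, while with $k = n$ it becomes $\tilde O(n^{\omega}\log 1/\epsilon)$, matching fast matrix multiplication.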

Federated Adversarial Learning: A Framework with Convergence Analysis

no code implementations • 7 Aug 2022 • Xiaoxiao Li, Zhao Song, Jiaming Yang

Unlike the convergence analysis in classical centralized training, which relies on the gradient direction, convergence in FAL is significantly harder to analyze for three reasons: 1) the complexity of min-max optimization, 2) the model not updating in the gradient direction due to multiple local updates on the client side before aggregation, and 3) inter-client heterogeneity.

Federated Learning

Pixelated Butterfly: Simple and Efficient Sparse training for Neural Network Models

1 code implementation • ICLR 2022 • Tri Dao, Beidi Chen, Kaizhao Liang, Jiaming Yang, Zhao Song, Atri Rudra, Christopher Ré

To address this, our main insight is to optimize over a continuous superset of sparse matrices with a fixed structure known as products of butterfly matrices.

Language Modeling • Language Modelling
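
A minimal NumPy construction of a product of butterfly factors, the fixed sparsity structure the abstract refers to; this is the generic butterfly pattern, not the paper's pixelated/flat variant, and the matrices are kept dense purely for illustration.

    import numpy as np

    def butterfly_factor(n, stride, rng):
        # Within each block of size 2*stride, entries i and i+stride mix
        # through a random 2x2 block; each factor has only 2n nonzeros.
        F = np.zeros((n, n))
        for start in range(0, n, 2 * stride):
            for j in range(stride):
                i0, i1 = start + j, start + j + stride
                F[np.ix_([i0, i1], [i0, i1])] = rng.standard_normal((2, 2))
        return F

    def butterfly_matrix(n, rng):
        # Product of log2(n) factors: O(n log n) parameters, dense support.
        B, stride = np.eye(n), 1
        while stride < n:
            B = butterfly_factor(n, stride, rng) @ B
            stride *= 2
        return B

    rng = np.random.default_rng(0)
    B = butterfly_matrix(16, rng)
    print(B.shape, np.count_nonzero(butterfly_factor(16, 1, rng)))  # (16, 16) 32

Each factor has $2n$ nonzeros, so $\log_2 n$ factors carry $O(n\log n)$ parameters while their product still connects every input coordinate to every output coordinate.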

Provable Federated Adversarial Learning via Min-max Optimization

no code implementations • 29 Sep 2021 • Xiaoxiao Li, Zhao Song, Jiaming Yang

Unlike the convergence analysis in centralized training, which relies on the gradient direction, convergence in FAL is significantly harder to analyze for two reasons: 1) the complexity of min-max optimization, and 2) the model not updating in the gradient direction due to multiple local updates on the client side before aggregation.

Federated Learning
