Search Results for author: Shunhua Jiang

Found 5 papers, 2 papers with code

The Complexity of Dynamic Least-Squares Regression

1 code implementation • 1 Jan 2022 • Shunhua Jiang, Binghui Peng, Omri Weinstein

We settle the complexity of dynamic least-squares regression (LSR), where rows and labels $(\mathbf{A}^{(t)}, \mathbf{b}^{(t)})$ can be adaptively inserted and/or deleted, and the goal is to efficiently maintain an $\epsilon$-approximate solution to $\min_{\mathbf{x}^{(t)}} \| \mathbf{A}^{(t)} \mathbf{x}^{(t)} - \mathbf{b}^{(t)} \|_2$ for all $t\in [T]$.

regression
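
For context, a naive baseline for this problem maintains the normal equations A^T A and A^T b and re-solves them after each update, paying O(d^2) per insertion/deletion and O(d^3) per exact solve. The Python sketch below (assuming NumPy; it is not the paper's data structure, which targets much faster epsilon-approximate maintenance) illustrates that baseline interface.

import numpy as np

class DynamicLSR:
    # Naive dynamic least-squares baseline via the normal equations.
    def __init__(self, d, reg=1e-8):
        self.d = d
        self.reg = reg                   # tiny ridge term for numerical stability
        self.AtA = np.zeros((d, d))      # running A^T A
        self.Atb = np.zeros(d)           # running A^T b

    def insert(self, a, b):
        # Add a row a (shape (d,)) with label b.
        self.AtA += np.outer(a, a)
        self.Atb += b * a

    def delete(self, a, b):
        # Remove a previously inserted row/label pair.
        self.AtA -= np.outer(a, a)
        self.Atb -= b * a

    def solve(self):
        # Return argmin_x ||A x - b||_2 for the currently stored rows.
        return np.linalg.solve(self.AtA + self.reg * np.eye(self.d), self.Atb)

# Usage: stream in rows and query the current solution at any time t.
lsr = DynamicLSR(d=3)
rng = np.random.default_rng(0)
for _ in range(10):
    lsr.insert(rng.normal(size=3), rng.normal())
x_t = lsr.solve()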

Fast Graph Neural Tangent Kernel via Kronecker Sketching

no code implementations • 4 Dec 2021 • Shunhua Jiang, Yunze Man, Zhao Song, Zheng Yu, Danyang Zhuo

Given a kernel matrix of $n$ graphs, using sketching in solving kernel regression can reduce the running time to $o(n^3)$.

regression
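
For intuition about why sketching helps: exact kernel ridge regression solves an n x n linear system in O(n^3) time, while a low-rank sketch of the kernel matrix brings this down to roughly O(n m^2) for a sketch size m much smaller than n. The snippet below (assuming NumPy, and using a plain Nystrom column sample plus the Woodbury identity as a generic stand-in, not the paper's Kronecker sketching) shows the idea.

import numpy as np

def sketched_krr(K, y, lam, m, seed=0):
    # Approximate alpha = (K + lam*I)^{-1} y using m landmark columns of K.
    n = K.shape[0]
    rng = np.random.default_rng(seed)
    S = rng.choice(n, size=m, replace=False)   # landmark (sketch) indices
    C = K[:, S]                                # n x m column slice of the kernel
    W = K[np.ix_(S, S)]                        # m x m landmark block, K ~ C W^{-1} C^T
    # Woodbury identity: (lam*I + C W^{-1} C^T)^{-1} y
    #   = (y - C (lam*W + C^T C)^{-1} C^T y) / lam, an m x m solve instead of n x n.
    M = lam * W + C.T @ C
    return (y - C @ np.linalg.solve(M, C.T @ y)) / lam

# Usage with a toy RBF kernel matrix over n = 500 points.
rng = np.random.default_rng(1)
X = rng.normal(size=(500, 8))
sqdist = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
K = np.exp(-0.5 * sqdist)
alpha = sketched_krr(K, y=rng.normal(size=500), lam=1e-2, m=50)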

Solving SDP Faster: A Robust IPM Framework and Efficient Implementation

no code implementations • 20 Jan 2021 • Baihe Huang, Shunhua Jiang, Zhao Song, Runzhou Tao

This paper introduces a new robust interior point method analysis for semidefinite programming (SDP).

Optimization and Control • Data Structures and Algorithms

Graph Neural Network Acceleration via Matrix Dimension Reduction

no code implementations • 1 Jan 2021 • Shunhua Jiang, Yunze Man, Zhao Song, Danyang Zhuo

Theoretically, we present two techniques to speed up GNTK training while preserving the generalization error: (1) we use a novel matrix decoupling method to reduce matrix dimensions during kernel solving.

Dimensionality Reduction

Learning Gradient Descent: Better Generalization and Longer Horizons

2 code implementations • ICML 2017 • Kaifeng Lv, Shunhua Jiang, Jian Li

Training deep neural networks is a highly nontrivial task, involving carefully selecting appropriate training algorithms, scheduling step sizes and tuning other hyperparameters.

Scheduling
