Search Results for author: Junhyung Lyle Kim

Found 8 papers, 1 paper with code

On the Error-Propagation of Inexact Deflation for Principal Component Analysis

no code implementations • 6 Oct 2023 • Fangshuo Liao, Junhyung Lyle Kim, Cruz Barnum, Anastasios Kyrillidis

Principal Component Analysis (PCA) is a popular tool in data analysis, especially when the data is high-dimensional.
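
The title refers to deflation-based PCA, where principal components are extracted one at a time and each extracted direction is removed from the covariance matrix before computing the next one; an error in one inexact eigenvector therefore propagates to all later components. Below is a minimal sketch of Hotelling deflation with an inexact power-method step; the function names, iteration counts, and toy data are illustrative and not taken from the paper.

```python
import numpy as np

def power_method(A, num_iters=50, rng=None):
    """Approximate the top eigenvector of a symmetric PSD matrix A.
    Truncating num_iters makes this step 'inexact'."""
    rng = np.random.default_rng(rng)
    v = rng.standard_normal(A.shape[0])
    v /= np.linalg.norm(v)
    for _ in range(num_iters):
        v = A @ v
        v /= np.linalg.norm(v)
    return v

def deflation_pca(A, k, num_iters=50):
    """Extract k principal directions by repeated deflation.
    Any error in component i contaminates the deflated matrix, and hence component i+1."""
    A = A.copy()
    components = []
    for _ in range(k):
        v = power_method(A, num_iters=num_iters)
        lam = v @ A @ v
        A = A - lam * np.outer(v, v)   # Hotelling deflation
        components.append(v)
    return np.array(components)

# Toy example: covariance of a random data matrix
X = np.random.default_rng(0).standard_normal((200, 10))
cov = X.T @ X / X.shape[0]
V = deflation_pca(cov, k=3, num_iters=20)
```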

Adaptive Federated Learning with Auto-Tuned Clients

no code implementations • 19 Jun 2023 • Junhyung Lyle Kim, Mohammad Taha Toghani, César A. Uribe, Anastasios Kyrillidis

Federated learning (FL) is a distributed machine learning framework in which the global model of a central server is trained via multiple collaborative steps by participating clients, without those clients sharing their data.

Federated Learning
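
For context, a bare-bones FedAvg-style round in the setting described above is sketched here: each client takes a few local gradient steps on its own data and the server averages the resulting models. This is a generic illustration, not the paper's method; the paper's contribution, auto-tuning each client's step size, is not reproduced, and the fixed `lr` below is a placeholder assumption.

```python
import numpy as np

def local_update(w, X, y, lr=0.1, local_steps=5):
    """A client's local training: a few gradient steps on its private
    least-squares loss, starting from the server's global model w."""
    w = w.copy()
    for _ in range(local_steps):
        grad = X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

def federated_round(w_global, client_data, lr=0.1):
    """One communication round: clients train locally, server averages."""
    local_models = [local_update(w_global, X, y, lr=lr) for X, y in client_data]
    return np.mean(local_models, axis=0)

# Toy setup: 4 clients with private data; the server never touches (X, y) directly.
rng = np.random.default_rng(0)
w_true = rng.standard_normal(5)
clients = []
for _ in range(4):
    X = rng.standard_normal((50, 5))
    clients.append((X, X @ w_true + 0.01 * rng.standard_normal(50)))

w = np.zeros(5)
for _ in range(20):
    w = federated_round(w, clients)
```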

When is Momentum Extragradient Optimal? A Polynomial-Based Analysis

no code implementations • 9 Nov 2022 • Junhyung Lyle Kim, Gauthier Gidel, Anastasios Kyrillidis, Fabian Pedregosa

The extragradient method has gained popularity due to its robust convergence properties for differentiable games.
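
On a simple bilinear game min_x max_y xᵀAy, the plain extragradient update first takes a gradient "look-ahead" step and then a step using the gradient evaluated at the look-ahead point. The sketch below adds a heavy-ball-style momentum term on top; the specific combination, step size, and momentum values are assumptions for illustration, not necessarily the exact update analyzed in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))
x, y = rng.standard_normal(3), rng.standard_normal(3)
x_prev, y_prev = x.copy(), y.copy()
eta, beta = 0.1, 0.1  # step size and momentum (illustrative values)

for _ in range(200):
    # Extrapolation ("look-ahead") step at the current point.
    gx, gy = A @ y, -A.T @ x              # descent direction in x, ascent in y
    x_half, y_half = x - eta * gx, y - eta * gy
    # Update step uses gradients at the extrapolated point, plus momentum.
    gx_half, gy_half = A @ y_half, -A.T @ x_half
    x_new = x - eta * gx_half + beta * (x - x_prev)
    y_new = y - eta * gy_half + beta * (y - y_prev)
    x_prev, y_prev, x, y = x, y, x_new, y_new

# Distance of the iterates from the game's equilibrium at the origin.
print(np.linalg.norm(x), np.linalg.norm(y))
```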

Local Stochastic Factored Gradient Descent for Distributed Quantum State Tomography

no code implementations • 22 Mar 2022 • Junhyung Lyle Kim, Mohammad Taha Toghani, César A. Uribe, Anastasios Kyrillidis

We propose a distributed Quantum State Tomography (QST) protocol, named Local Stochastic Factored Gradient Descent (Local SFGD), to learn the low-rank factor of a density matrix over a set of local machines.

Quantum State Tomography
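
A rough sketch of the idea, under assumed notation: the density matrix is parameterized by a low-rank factor U (ρ = U U†), each machine runs a few gradient steps on its own local measurements, and the factors are periodically averaged in the usual local-SGD pattern. This is a simplified, noiseless illustration, not the paper's exact protocol.

```python
import numpy as np

def local_grad(U, ops, y):
    """Per-machine gradient (averaged over its measurements) of
    sum_i (Re tr(A_i U U^H) - y_i)^2 with respect to U, for Hermitian A_i."""
    G = np.zeros_like(U)
    for A, yi in zip(ops, y):
        r = np.real(np.trace(A @ U @ U.conj().T)) - yi
        G += 4 * r * (A @ U)
    return G / len(y)

def local_sfgd(machines, dim, rank, lr=0.05, rounds=50, local_steps=5):
    """Each machine takes local_steps gradient steps on its own data,
    then the low-rank factors are averaged across machines."""
    rng = np.random.default_rng(0)
    U = rng.standard_normal((dim, rank)) + 1j * rng.standard_normal((dim, rank))
    for _ in range(rounds):
        locals_ = []
        for ops, y in machines:          # machines = list of (operators, measurements)
            Uk = U.copy()
            for _ in range(local_steps):
                Uk -= lr * local_grad(Uk, ops, y)
            locals_.append(Uk)
        U = sum(locals_) / len(locals_)
    return U @ U.conj().T                # reconstructed (unnormalized) density matrix
```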

Convergence and Stability of the Stochastic Proximal Point Algorithm with Momentum

no code implementations • 11 Nov 2021 • Junhyung Lyle Kim, Panos Toulis, Anastasios Kyrillidis

Stochastic gradient descent with momentum (SGDM) is the dominant algorithm in many optimization scenarios, including convex optimization instances and non-convex neural network training.
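
For least-squares losses the stochastic proximal point step has a closed form, which makes the contrast with SGDM easy to see: instead of stepping along the sampled gradient at the current point, each update solves a small regularized subproblem exactly. The sketch below combines that proximal step with a heavy-ball momentum extrapolation; treat the precise combination and parameter values as illustrative assumptions rather than the paper's stated update.

```python
import numpy as np

def sppa_momentum(A, b, eta=0.5, beta=0.3, epochs=10, seed=0):
    """Stochastic proximal point with heavy-ball momentum on
    f(x) = (1/2n) sum_i (a_i^T x - b_i)^2.  For a single sample, the prox step
    prox_{eta f_i}(z) = z - eta (a_i^T z - b_i) / (1 + eta ||a_i||^2) * a_i
    is available in closed form."""
    rng = np.random.default_rng(seed)
    n, d = A.shape
    x = np.zeros(d)
    x_prev = x.copy()
    for _ in range(epochs):
        for i in rng.permutation(n):
            a, bi = A[i], b[i]
            z = x + beta * (x - x_prev)                        # momentum extrapolation
            x_prev = x
            x = z - eta * (a @ z - bi) / (1 + eta * (a @ a)) * a   # exact prox step
    return x

rng = np.random.default_rng(1)
A = rng.standard_normal((100, 5))
x_star = rng.standard_normal(5)
b = A @ x_star
print(np.linalg.norm(sppa_momentum(A, b) - x_star))   # distance to the solution
```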

How much pre-training is enough to discover a good subnetwork?

no code implementations • 31 Jul 2021 • Cameron R. Wolfe, Fangshuo Liao, Qihan Wang, Junhyung Lyle Kim, Anastasios Kyrillidis

Aiming to mathematically analyze how much dense-network pre-training is needed for a pruned network to perform well, we discover a simple theoretical bound on the number of gradient descent pre-training iterations for a two-layer, fully-connected network, beyond which pruning via greedy forward selection [61] yields a subnetwork that achieves good training error.

Network Pruning
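
Greedy forward selection builds the pruned subnetwork neuron by neuron: starting from an empty set, it repeatedly adds the hidden unit of the pre-trained network that most reduces training loss, and the quoted bound concerns how much pre-training this procedure needs to succeed. A minimal sketch for a two-layer ReLU network is below; the least-squares refit of the output weights is a simplifying assumption made here, not necessarily how [61] combines selected units.

```python
import numpy as np

def greedy_forward_selection(W1, X, y, budget):
    """Greedily pick `budget` hidden units of a pre-trained two-layer ReLU
    network (first-layer weights W1), refitting output weights at each step."""
    H = np.maximum(X @ W1, 0.0)            # hidden activations of the dense net
    selected = []
    for _ in range(budget):
        best_j, best_loss = None, np.inf
        for j in range(W1.shape[1]):
            if j in selected:
                continue
            cols = selected + [j]
            w_out, *_ = np.linalg.lstsq(H[:, cols], y, rcond=None)
            loss = np.mean((H[:, cols] @ w_out - y) ** 2)
            if loss < best_loss:
                best_j, best_loss = j, loss
        selected.append(best_j)
    return selected

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 8))
W1 = rng.standard_normal((8, 32))           # stands in for pre-trained weights
y = np.maximum(X @ W1[:, :4], 0.0).sum(axis=1)
print(greedy_forward_selection(W1, X, y, budget=4))
```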

Momentum-inspired Low-Rank Coordinate Descent for Diagonally Constrained SDPs

no code implementations • 16 Jun 2021 • Junhyung Lyle Kim, Jose Antonio Lara Benitez, Mohammad Taha Toghani, Cameron Wolfe, Zhiwei Zhang, Anastasios Kyrillidis

We present a novel, practical, and provable approach for solving diagonally constrained semi-definite programming (SDP) problems at scale using accelerated non-convex programming.
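
Diagonally constrained SDPs (e.g., the Max-Cut relaxation: minimize ⟨C, X⟩ subject to diag(X) = 1, X ⪰ 0) admit a Burer–Monteiro factorization X = V Vᵀ with unit-norm rows, on which block-coordinate descent re-optimizes one row of V at a time in closed form. The sketch below shows that plain row-wise update without the momentum acceleration the paper adds; it is a generic illustration under these assumptions.

```python
import numpy as np

def low_rank_coordinate_descent(C, rank, sweeps=100, seed=0):
    """Minimize <C, V V^T> subject to unit-norm rows of V (so diag(V V^T) = 1)
    by cyclically re-optimizing one row at a time in closed form."""
    rng = np.random.default_rng(seed)
    n = C.shape[0]
    V = rng.standard_normal((n, rank))
    V /= np.linalg.norm(V, axis=1, keepdims=True)
    for _ in range(sweeps):
        for i in range(n):
            # Interaction of row i with all other rows (diagonal term excluded).
            g = C[i] @ V - C[i, i] * V[i]
            norm = np.linalg.norm(g)
            if norm > 0:
                V[i] = -g / norm          # closed-form minimizer over the unit sphere
    return V @ V.T

rng = np.random.default_rng(1)
W = rng.standard_normal((20, 20))
C = (W + W.T) / 2
X = low_rank_coordinate_descent(C, rank=5)
print(np.allclose(np.diag(X), 1.0))       # diagonal constraint holds by construction
```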

Fast quantum state reconstruction via accelerated non-convex programming

1 code implementation • 14 Apr 2021 • Junhyung Lyle Kim, George Kollias, Amir Kalev, Ken X. Wei, Anastasios Kyrillidis

Despite being a non-convex method, MiFGD converges provably close to the true density matrix at an accelerated linear rate, in the absence of experimental and statistical noise, and under common assumptions.
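
MiFGD parameterizes the density matrix as ρ = U U† and runs gradient descent on the factor U with a momentum term. A compact sketch of such an update loop is below (same factored parameterization as the Local SFGD sketch above, but centralized and with momentum); the step size, momentum value, and absorbed constants are placeholders, and the code implementation linked from this entry should be preferred over this sketch.

```python
import numpy as np

def mifgd_sketch(ops, y, dim, rank, eta=0.05, mu=0.5, iters=300, seed=0):
    """Momentum-inspired factored gradient descent: rho = U U^H, with an
    extrapolation point Z at every iteration (constants absorbed into eta)."""
    rng = np.random.default_rng(seed)
    U = rng.standard_normal((dim, rank)) + 1j * rng.standard_normal((dim, rank))
    U_prev = U.copy()
    for _ in range(iters):
        Z = U + mu * (U - U_prev)                  # momentum extrapolation
        grad = np.zeros_like(U)
        for A, yi in zip(ops, y):                  # Hermitian operators and measurements
            r = np.real(np.trace(A @ Z @ Z.conj().T)) - yi
            grad += r * (A @ Z)
        U_prev, U = U, Z - eta * grad / len(y)     # gradient step at the extrapolated point
    rho = U @ U.conj().T
    return rho / np.real(np.trace(rho))            # normalize to unit trace
```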
