Search Results for author: Jiacheng Zhuo

Found 10 papers, 3 papers with code

Improving Computational Complexity in Statistical Models with Second-Order Information

no code implementations • 9 Feb 2022 • Tongzheng Ren, Jiacheng Zhuo, Sujay Sanghavi, Nhat Ho

This computational complexity is cheaper than that of the fixed step-size gradient descent algorithm, which requires $\mathcal{O}(n^{\tau})$ operations, for some $\tau > 1$, to reach the same statistical radius.
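
A toy sketch of the contrast the snippet draws, not the paper's algorithm: a Newton-type update exploits second-order information and reaches the minimizer of a quadratic in one step, while fixed step-size gradient descent needs many iterations. The quadratic objective, dimensions, and step size below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(5, 5))
A = A @ A.T + 5 * np.eye(5)       # well-conditioned positive definite matrix
b = rng.normal(size=5)

def grad(x):                      # gradient of f(x) = 0.5 x^T A x - b^T x
    return A @ x - b

# First-order: fixed step-size gradient descent, many cheap iterations.
x_gd = np.zeros(5)
for _ in range(100):
    x_gd -= 0.01 * grad(x_gd)

# Second-order: one Newton step (the Hessian of f is A) lands on the exact
# minimizer of this quadratic.
x_newton = -np.linalg.solve(A, grad(np.zeros(5)))

x_star = np.linalg.solve(A, b)
print(np.linalg.norm(x_gd - x_star), np.linalg.norm(x_newton - x_star))
```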

On the computational and statistical complexity of over-parameterized matrix sensing

no code implementations • 27 Jan 2021 • Jiacheng Zhuo, Jeongyeol Kwon, Nhat Ho, Constantine Caramanis

We consider solving the low rank matrix sensing problem with the Factorized Gradient Descent (FGD) method when the true rank is unknown and over-specified, a setting we refer to as over-parameterized matrix sensing.
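
A minimal numpy sketch of the over-parameterized setting the snippet describes: factorized gradient descent on a matrix sensing loss with the fitted rank larger than the true rank. The rank-1 ground truth, Gaussian sensing matrices, step size, and initialization are illustrative assumptions, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(0)
d, k, n = 10, 3, 400                             # true rank 1, fitted rank k = 3

u = rng.normal(size=d); u /= np.linalg.norm(u)
M_star = np.outer(u, u)                          # rank-1 ground truth
A = rng.normal(size=(n, d, d))                   # Gaussian sensing matrices
y = np.einsum('nij,ij->n', A, M_star)            # measurements <A_i, M*>

# Factorized Gradient Descent on f(U) = (1/2n) sum_i (<A_i, U U^T> - y_i)^2,
# with the rank over-specified (k > true rank).
U = 0.1 * rng.normal(size=(d, k))
eta = 0.1
for _ in range(1000):
    resid = np.einsum('nij,ij->n', A, U @ U.T) - y
    G = np.einsum('n,nij->ij', resid, A) / n     # gradient w.r.t. U U^T
    U -= eta * (G + G.T) @ U                     # chain rule through U U^T
print(np.linalg.norm(U @ U.T - M_star))
```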

Predicting What You Already Know Helps: Provable Self-Supervised Learning

no code implementations • NeurIPS 2021 • Jason D. Lee, Qi Lei, Nikunj Saunshi, Jiacheng Zhuo

Self-supervised representation learning solves auxiliary prediction tasks (known as pretext tasks) without requiring labeled data, in order to learn useful semantic representations.

Representation Learning • Self-Supervised Learning
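
A toy numpy sketch of the kind of setup the snippet describes, not the paper's construction: two views share a latent class, the pretext task regresses one view from the other without labels, and the learned predictor is reused as a representation for a downstream linear probe with few labels. All data generation choices are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 2000, 20
y = rng.integers(0, 2, size=n)                # latent downstream label
mu = rng.normal(size=(2, d))                  # class means shared by both views
x1 = mu[y] + 0.5 * rng.normal(size=(n, d))    # view 1
x2 = mu[y] + 0.5 * rng.normal(size=(n, d))    # view 2

# Pretext task: least-squares regression of x2 on x1 (no labels used).
W, *_ = np.linalg.lstsq(x1, x2, rcond=None)
psi = x1 @ W                                  # learned representation psi(x1)

# Downstream probe: fit a linear classifier on psi with only a few labels.
m = 20                                        # tiny labeled set
w, *_ = np.linalg.lstsq(psi[:m], 2.0 * y[:m] - 1.0, rcond=None)
acc = np.mean((psi[m:] @ w > 0) == (y[m:] == 1))
print(f"probe accuracy on representation: {acc:.2f}")
```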

Communication-Efficient Asynchronous Stochastic Frank-Wolfe over Nuclear-norm Balls

no code implementations • 17 Oct 2019 • Jiacheng Zhuo, Qi Lei, Alexandros G. Dimakis, Constantine Caramanis

Large-scale machine learning training, specifically for nuclear-norm constrained problems on distributed systems, suffers from two challenges: the synchronization slowdown due to straggling workers, and high communication costs.

BIG-bench Machine Learning
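
A single-machine sketch of the Frank-Wolfe primitive such methods build on: over a nuclear-norm ball, the linear minimization oracle returns a rank-1 matrix from the top singular pair of the gradient, so each update can be communicated as two vectors rather than a full matrix. The quadratic objective here is an illustrative stand-in, not the paper's asynchronous algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)
d1, d2, tau = 30, 20, 5.0
M = rng.normal(size=(d1, 2)) @ rng.normal(size=(2, d2))   # low-rank target
M *= 0.9 * tau / np.linalg.norm(M, ord='nuc')             # keep it feasible

# Frank-Wolfe on f(X) = 0.5 * ||X - M||_F^2 over {X : ||X||_* <= tau}.
X = np.zeros((d1, d2))
for t in range(300):
    G = X - M                                   # gradient of f at X
    U, s, Vt = np.linalg.svd(G)                 # LMO needs only the top pair
    S = -tau * np.outer(U[:, 0], Vt[0])         # rank-1 extreme point of ball
    gamma = 2.0 / (t + 2)                       # standard FW step size
    X = (1 - gamma) * X + gamma * S             # rank-1 update: two vectors
print(np.linalg.norm(X - M) / np.linalg.norm(M))
```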

Primal-Dual Block Frank-Wolfe

1 code implementation • 6 Jun 2019 • Qi Lei, Jiacheng Zhuo, Constantine Caramanis, Inderjit S. Dhillon, Alexandros G. Dimakis

We propose a variant of the Frank-Wolfe algorithm for solving a class of sparse/low-rank optimization problems.

General Classification • Multi-class Classification • +1
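
For intuition, a minimal classic Frank-Wolfe loop over the $\ell_1$ ball, where the linear oracle touches a single coordinate per step and iterates stay sparse; the paper's variant extends this primitive with block and primal-dual updates. The least-squares instance, radius, and iteration count are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, tau = 100, 500, 5.0
A = rng.normal(size=(n, d))
x_true = np.zeros(d); x_true[:5] = 1.0          # sparse ground truth
b = A @ x_true

# Classic Frank-Wolfe for least squares over {x : ||x||_1 <= tau}.
x = np.zeros(d)
for t in range(500):
    g = A.T @ (A @ x - b) / n                   # least-squares gradient
    i = np.argmax(np.abs(g))                    # LMO: one coordinate
    s = np.zeros(d)
    s[i] = -tau * np.sign(g[i])                 # vertex of the l1 ball
    gamma = 2.0 / (t + 2)
    x = (1 - gamma) * x + gamma * s
print(np.linalg.norm(A @ x - b) / np.linalg.norm(b))
```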

Fast Stochastic Variance Reduced Gradient Method with Momentum Acceleration for Machine Learning

no code implementations • 23 Mar 2017 • Fanhua Shang, Yuanyuan Liu, James Cheng, Jiacheng Zhuo

Recently, research on accelerated stochastic gradient descent methods (e.g., SVRG) has made exciting progress (e.g., linear convergence for strongly convex problems).

BIG-bench Machine Learning • regression
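
A compact sketch combining SVRG-style variance reduction with a heavy-ball momentum term, the general idea named in the title; it is not the paper's exact update rule or parameter choices, and the least-squares problem and step sizes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 500, 20
A = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
b = A @ w_true + 0.01 * rng.normal(size=n)

def grad_i(w, i):
    """Per-example least-squares gradient."""
    return (A[i] @ w - b[i]) * A[i]

# SVRG outer/inner loops with a heavy-ball momentum term.
w, v = np.zeros(d), np.zeros(d)
eta, beta = 0.02, 0.5
for _ in range(30):                              # outer epochs
    w_snap = w.copy()
    full_grad = A.T @ (A @ w_snap - b) / n       # full gradient at snapshot
    for _ in range(n):                           # inner stochastic steps
        i = rng.integers(n)
        g = grad_i(w, i) - grad_i(w_snap, i) + full_grad   # variance reduced
        v = beta * v - eta * g                   # momentum accumulation
        w = w + v
print(np.linalg.norm(w - w_true))
```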
