no code implementations • 30 Sep 2022 • Tianxiang Gao, Hongyang Gao
We show that global convergence is guaranteed, even if only the implicit layer is trained.
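The claim can be made concrete with a toy experiment. Below is a minimal sketch (my own illustration, not the paper's exact setup): a fixed-point model $z^* = \phi(Wz^* + Ux)$ in which only the implicit-layer weight $W$ is trained while the input map $U$ and readout $v$ stay frozen, with the fixed point approximated by unrolled iteration rather than an implicit solver.

```python
import torch

torch.manual_seed(0)
n, d, h = 64, 8, 16                               # samples, input dim, hidden dim
X, y = torch.randn(n, d), torch.randn(n)

U = torch.randn(h, d) / d ** 0.5                  # frozen input map
v = torch.randn(h) / h ** 0.5                     # frozen readout
W = 0.1 * torch.randn(h, h)                       # implicit layer: the ONLY trained weight
W.requires_grad_()

def forward(X, n_iter=30):
    z = torch.zeros(X.shape[0], h)
    for _ in range(n_iter):                       # fixed-point iteration z <- phi(Wz + Ux)
        z = torch.tanh(z @ W.T + X @ U.T)
    return z @ v

opt = torch.optim.SGD([W], lr=0.05)               # only W is optimized
for step in range(200):
    opt.zero_grad()
    loss = ((forward(X) - y) ** 2).mean()
    loss.backward()
    opt.step()
print(f"final training loss: {loss.item():.4f}")
```

Initializing $W$ at small norm keeps the iteration a contraction, so the unrolled loop is a reasonable stand-in for the true equilibrium.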
no code implementations • 16 May 2022 • Tianxiang Gao, Hongyang Gao
Implicit deep learning has recently become popular in the machine learning community because implicit models achieve performance competitive with state-of-the-art deep networks while using significantly less memory and computation.
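The memory savings come from the forward pass being a fixed-point solve whose iterates need not be stored for backpropagation. The sketch below illustrates one common approximation (a single differentiable step attached after a no-grad solve); it is an illustration of the mechanism, not the exact method of these papers.

```python
import torch

torch.manual_seed(0)
h = 32
W = 0.1 * torch.randn(h, h, requires_grad=True)

def implicit_layer(x, n_iter=50):
    z = torch.zeros_like(x)
    with torch.no_grad():                  # solve: iterates are never taped, so memory stays constant
        for _ in range(n_iter):
            z = torch.tanh(z @ W.T + x)
    return torch.tanh(z @ W.T + x)         # one differentiable step supplies gradients for W

x = torch.randn(4, h)
implicit_layer(x).sum().backward()         # backprop touches only the attached step
print(W.grad.shape)                        # torch.Size([32, 32])
```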
no code implementations • ICLR 2022 • Tianxiang Gao, Hailiang Liu, Jia Liu, Hridesh Rajan, Hongyang Gao
Implicit deep learning has received increasing attention recently because it generalizes the recursive prediction rules of many commonly used neural network architectures.
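For example, a weight-tied feedforward network is exactly a truncated fixed-point iteration, so the implicit model $z^* = \phi(Wz^* + Ux)$ recovers it in the infinite-depth limit. A small numerical check, assuming a contraction (small $\|W\|$):

```python
import numpy as np

rng = np.random.default_rng(0)
h, d = 16, 8
W = 0.1 * rng.standard_normal((h, h))      # small spectral norm -> contraction -> unique z*
U = rng.standard_normal((h, d))
x = rng.standard_normal(d)

def weight_tied_net(L):
    z = np.zeros(h)
    for _ in range(L):                     # L weight-tied layers sharing W and U
        z = np.tanh(W @ z + U @ x)
    return z

z_star = weight_tied_net(200)              # deep enough to stand in for the equilibrium
for L in (1, 5, 50):
    print(f"L={L:3d}  ||z_L - z*|| = {np.linalg.norm(weight_tied_net(L) - z_star):.2e}")
```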
no code implementations • 4 May 2021 • Xiaocong Du, Bhargav Bhushanam, Jiecao Yu, Dhruv Choudhary, Tianxiang Gao, Sherman Wong, Louis Feng, Jongsoo Park, Yu Cao, Arun Kejariwal
Our method leverages structured sparsification to reduce computational cost without hurting model capacity at the end of offline training, so that a full-size model is available in the recurring training stage to learn new data in real time.
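As a rough illustration (my own sketch, not the paper's recipe), structured sparsification can be implemented by masking whole rows of a weight matrix during offline training while keeping the dense parameters around for later:

```python
import torch

torch.manual_seed(0)
h_in, h_out = 64, 64
W = torch.nn.Parameter(torch.randn(h_out, h_in) / h_in ** 0.5)

# Structured mask: keep the half of the output rows with the largest L2 norm.
keep = W.detach().norm(dim=1).argsort(descending=True)[: h_out // 2]
mask = torch.zeros(h_out, 1)
mask[keep] = 1.0

x = torch.randn(8, h_in)
sparse_out = x @ (mask * W).T              # cheap path used during offline training
full_out = x @ W.T                         # dense weights survive: full capacity later
print(sparse_out.shape, full_out.shape)
```

In practice the savings come from kernels that skip the masked rows entirely; the mask here only shows the bookkeeping that keeps the full-size model recoverable.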
no code implementations • 15 Jan 2020 • Tianxiang Gao, Songtao Lu, Jia Liu, Chris Chu
Further, we show that the proposed method needs $O(n\epsilon^{-2})$ iterations to reach an $\epsilon$-stationary point, where $n$ is the number of blocks of coordinates.
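To make the complexity statement concrete, here is a generic cyclic block coordinate descent loop with an $\epsilon$-stationarity stopping rule (an illustration on a simple quadratic, not the paper's algorithm):

```python
import numpy as np

rng = np.random.default_rng(0)
n_blocks, block_dim = 4, 5
dim = n_blocks * block_dim
A = rng.standard_normal((dim, dim))
A = A.T @ A / dim + 0.1 * np.eye(dim)      # smooth test objective f(x) = 0.5 x^T A x

x = rng.standard_normal(dim)
eps, lr, updates = 1e-3, 0.1, 0
while np.linalg.norm(A @ x) > eps:         # stop at an epsilon-stationary point
    for b in range(n_blocks):              # one pass = n block updates
        sl = slice(b * block_dim, (b + 1) * block_dim)
        x[sl] -= lr * (A @ x)[sl]          # gradient step on one block of coordinates
        updates += 1
print(f"epsilon-stationary after {updates} block updates")
```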
no code implementations • 16 Dec 2019 • Tianxiang Gao, Songtao Lu, Jia Liu, Chris Chu
In signal processing and data analytics applications, there is a wide class of non-convex problems whose objective function does not satisfy the common global Lipschitz-continuous-gradient assumption (e.g., the nonnegative matrix factorization (NMF) problem).
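NMF illustrates the point: for $f(W,H) = \frac{1}{2}\|X - WH\|_F^2$, the gradient $\nabla_W f = (WH - X)H^\top$ has Lipschitz constant $\|HH^\top\|$, which grows without bound as $H$ grows, so no global constant exists even though each block is smooth. A quick numerical check:

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, r = 10, 12, 3
X = rng.random((m, n))

def grad_W(W, H):                          # gradient of 0.5 * ||X - W H||_F^2 in W
    return (W @ H - X) @ H.T

for scale in (1.0, 10.0, 100.0):
    H = scale * rng.random((r, n))
    W1, W2 = rng.random((m, r)), rng.random((m, r))
    ratio = np.linalg.norm(grad_W(W1, H) - grad_W(W2, H)) / np.linalg.norm(W1 - W2)
    print(f"scale(H)={scale:6.1f}  Lipschitz ratio of grad_W ~ {ratio:10.2f}")
```

The ratio grows roughly like the square of the scale, confirming that only block-wise (not global) Lipschitz constants exist.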
no code implementations • 4 Feb 2019 • Yang Li, Tianxiang Gao, Junier B. Oliva
In this work, we propose to learn a generative model using both learned features (through a latent space) and memories (through neighbors).
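A hedged sketch of the idea (the architecture and names below are my own illustration, not the paper's model): generation combines a latent code with "memories" retrieved as nearest neighbors in the learned latent space.

```python
import torch

torch.manual_seed(0)
d, z_dim, n_mem = 32, 8, 100
memory_x = torch.randn(n_mem, d)           # stored training examples ("memories")

enc = torch.nn.Linear(d, z_dim)            # learned feature map into the latent space
dec = torch.nn.Linear(z_dim + d, d)        # decoder conditions on latent AND memories

with torch.no_grad():
    memory_z = enc(memory_x)               # index the memories in latent space

def generate(z, k=5):
    dist = (memory_z - z).norm(dim=1)      # retrieve the k nearest memories to z
    nbrs = memory_x[dist.topk(k, largest=False).indices].mean(0)
    return dec(torch.cat([z, nbrs]))       # combine latent features with memories

print(generate(torch.randn(z_dim)).shape)  # torch.Size([32])
```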
no code implementations • 25 Feb 2018 • Tianxiang Gao, Chris Chu
We propose a novel distributed algorithm, called distributed incremental block coordinate descent (DID), to solve the problem.
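For orientation, here is the plain (non-distributed) block coordinate descent skeleton for NMF that such an algorithm builds on; the incremental and distributed bookkeeping that defines DID itself is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, r = 20, 30, 4
X = rng.random((m, n))
W, H = rng.random((m, r)), rng.random((r, n))

for it in range(100):
    # Block 1: projected gradient step on W with H fixed (step 1/L, L >= ||H H^T||).
    W = np.maximum(W - (W @ H - X) @ H.T / np.linalg.norm(H @ H.T), 0)
    # Block 2: projected gradient step on H with W fixed.
    H = np.maximum(H - W.T @ (W @ H - X) / np.linalg.norm(W.T @ W), 0)

print(f"relative error: {np.linalg.norm(X - W @ H) / np.linalg.norm(X):.3f}")
```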
no code implementations • 30 Mar 2016 • Tianxiang Gao, Vladimir Jojic
The degrees of freedom in deep networks are dramatically smaller than the number of parameters; on some real datasets the gap is several orders of magnitude.
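Degrees of freedom can be estimated by perturbing the targets and measuring how much the refitted predictions track the perturbation, $\mathrm{df} \approx \frac{1}{\sigma^2}\sum_i \mathrm{cov}(\hat{y}_i, y_i)$. The sketch below (my own illustration, using ridge regression so a closed form is available for comparison) shows the Monte Carlo estimator:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, lam, sigma = 200, 10, 1.0, 0.1
X = rng.standard_normal((n, p))
y = X @ rng.standard_normal(p) + sigma * rng.standard_normal(n)

def fit_predict(y):                        # ridge regression as the "network"
    beta = np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)
    return X @ beta

base, reps, df_mc = fit_predict(y), 200, 0.0
for _ in range(reps):                      # df ~ E[eps^T (f(y + eps) - f(y))] / sigma^2
    e = sigma * rng.standard_normal(n)
    df_mc += e @ (fit_predict(y + e) - base) / sigma ** 2
df_mc /= reps

H = X @ np.linalg.solve(X.T @ X + lam * np.eye(p), X.T)
print(f"Monte Carlo df: {df_mc:.2f}   closed form trace(H): {np.trace(H):.2f}")
```

For a deep network the closed form is unavailable, but the same perturb-and-refit estimate still applies.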