no code implementations • 19 Apr 2021 • Shiyu Duan, Spencer Chang, Jose C. Principe
We call this statistic "sufficiently-labeled data" and prove its sufficiency and efficiency for finding the optimal hidden representations, on which competent classifier heads can be trained using as few as a single randomly chosen, fully labeled example per class.
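As a hedged illustration of the second stage described above, the sketch below fits a linear classifier head on a frozen encoder using exactly one labeled example per class; the encoder, data, and all names here are hypothetical stand-ins, not the paper's implementation.

```python
import torch
import torch.nn as nn

# Hypothetical frozen encoder standing in for hidden layers already
# learned from sufficiently-labeled (same-class / different-class) data.
encoder = nn.Sequential(nn.Linear(784, 128), nn.ReLU(), nn.Linear(128, 64))
for p in encoder.parameters():
    p.requires_grad_(False)

num_classes = 10
x = torch.randn(num_classes, 784)   # one fully labeled example per class
y = torch.arange(num_classes)

head = nn.Linear(64, num_classes)
opt = torch.optim.SGD(head.parameters(), lr=0.1)
for _ in range(100):
    opt.zero_grad()
    loss = nn.functional.cross_entropy(head(encoder(x)), y)
    loss.backward()
    opt.step()
```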
no code implementations • 9 Jan 2021 • Shiyu Duan, Jose C. Principe
This tutorial paper surveys provably optimal alternatives to end-to-end backpropagation (E2EBP) -- the de facto standard for training deep architectures.
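One family of such alternatives is modular, layer-wise training with local objectives instead of a single end-to-end gradient. The sketch below is a minimal illustration of that general idea under assumed pairwise targets; it is not the paper's exact procedure, and every name in it is made up for the example.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

modules = [nn.Sequential(nn.Linear(784, 256), nn.ReLU()),
           nn.Sequential(nn.Linear(256, 64), nn.ReLU())]

def pair_loss(z, same):
    # Local objective: align representations of same-class pairs,
    # separate different-class pairs (inputs are interleaved pairs).
    z = F.normalize(z, dim=1)
    sim = (z[0::2] * z[1::2]).sum(dim=1)      # cosine similarity per pair
    target = same.float() * 2 - 1             # +1 same, -1 different
    return ((sim - target) ** 2).mean()

x = torch.randn(32, 784)                      # 16 interleaved pairs
same = torch.randint(0, 2, (16,)).bool()

inp = x
for m in modules:                             # train one module at a time;
    opt = torch.optim.Adam(m.parameters())    # no end-to-end gradient flow
    for _ in range(50):
        opt.zero_grad()
        pair_loss(m(inp), same).backward()
        opt.step()
    inp = m(inp).detach()                     # freeze, then feed forward
```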
1 code implementation • 24 May 2020 • Shiyu Duan, Huaijin Chen, Jinwei Gu
Moreover, they focus mostly on rate-distortion and tend to underperform in perceptual quality, especially in the low-bitrate regime, and often disregard the performance of downstream computer vision algorithms, a fast-growing group of consumers of compressed images alongside human viewers.
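A hedged sketch of the kind of multi-term objective this critique points toward: jointly weighting rate, distortion, a perceptual score, and a downstream-task loss. The term names and weights below are illustrative assumptions, not values from the paper.

```python
def compression_objective(rate, distortion, perceptual, task_loss,
                          w_d=1.0, w_p=0.1, w_t=0.1):
    # Trade off bitrate against pixel fidelity, perceptual quality,
    # and downstream computer-vision performance in one scalar loss.
    return rate + w_d * distortion + w_p * perceptual + w_t * task_loss

# e.g. loss = compression_objective(bits_per_pixel, mse, lpips_score,
#                                   detector_loss)   # all hypothetical
```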
1 code implementation • 12 May 2020 • Shiyu Duan, Shujian Yu, Jose Principe
By redefining the conventional notion of a layer, we present an alternative view of finitely wide, fully trainable deep neural networks as stacked linear models in feature spaces, leading to a kernel machine interpretation.
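In symbols (our notation, assumed for illustration): a unit acting on the nonlinear output of the previous layer is a linear model in the feature space induced by that nonlinearity, and it admits a kernel expansion whenever its weight vector lies in the span of mapped training points, $w = \sum_j \alpha_j \phi(x_j)$:

$$
f(x) = \langle w, \phi(x) \rangle + b = \sum_{j} \alpha_j\, k(x_j, x) + b,
\qquad k(x, x') = \langle \phi(x), \phi(x') \rangle,
$$

where $\phi$ denotes the composition of the earlier layers with the preceding nonlinearity.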
no code implementations • ICLR 2019 • Shiyu Duan, Shujian Yu, Yun-Mei Chen, Jose Principe
Moreover, unlike backpropagation, which turns models into black boxes, the optimal hidden representation enjoys an intuitive geometric interpretation, making the dynamics of learning in a deep kernel network simple to understand.
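One way to make such a geometric picture concrete (our paraphrase under assumed notation, not a statement taken from the paper): an optimal hidden map $\phi^\star$ collapses each class to a single point while pushing distinct classes maximally far apart,

$$
\phi^\star(x) = \phi^\star(x') \iff y = y', \qquad
\lVert \phi^\star(x) - \phi^\star(x') \rVert \ \text{maximal whenever } y \neq y',
$$

so training progress can be read off directly as within-class collapse and between-class separation.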
1 code implementation • ICLR 2019 • Shiyu Duan, Shujian Yu, Yun-Mei Chen, Jose Principe
With this method, we obtain a counterpart of any given NN that is powered by kernel machines instead of neurons.
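As a hedged sketch of what such a counterpart could look like, the snippet below swaps a linear layer's inner products for Gaussian-kernel evaluations against learnable centers, making each output unit a kernel machine; the kernel choice and all names are assumptions for illustration, not the paper's construction.

```python
import torch

def kernel_layer(x, centers, alpha, sigma=1.0):
    # Each output unit is a kernel machine: a weighted sum of Gaussian
    # kernel evaluations against learnable centers, instead of a
    # neuron's inner product with a weight vector.
    k = torch.exp(-torch.cdist(x, centers) ** 2 / (2 * sigma ** 2))
    return k @ alpha                               # (batch, out_features)

x = torch.randn(8, 16)
centers = torch.randn(32, 16, requires_grad=True)  # 32 kernel centers
alpha = torch.randn(32, 4, requires_grad=True)     # mixing weights
out = kernel_layer(x, centers, alpha)              # stands in for Linear(16, 4)
```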