Search Results for author: Shiyu Duan

Found 6 papers, 3 papers with code

Labels, Information, and Computation: Efficient Learning Using Sufficient Labels

no code implementations19 Apr 2021 Shiyu Duan, Spencer Chang, Jose C. Principe

We call this statistic "sufficiently-labeled data" and prove its sufficiency and efficiency for finding the optimal hidden representations, on which competent classifier heads can be trained using as few as a single randomly-chosen fully-labeled example per class.

Privacy Preserving
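The "sufficiently-labeled data" idea above — learning a representation from pairwise same-class/different-class labels, then fitting a classifier head with one fully-labeled example per class — can be sketched roughly as follows. This is a toy illustration under assumptions of my own (a linear map, a contrastive-style margin loss, Gaussian toy data), not the paper's actual method.

```python
# Toy sketch: learn a representation from pairwise "same class?" labels only,
# then fit a nearest-anchor head with one fully-labeled example per class.
# The linear map and margin loss are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

# Toy data: two Gaussian classes in 2-D.
X = np.vstack([rng.normal(-2, 1, (50, 2)), rng.normal(2, 1, (50, 2))])
y = np.array([0] * 50 + [1] * 50)

# "Sufficiently-labeled" pairs: (i, j, same) with same = 1 iff y[i] == y[j].
idx = rng.integers(0, 100, (200, 2))
pairs = [(i, j, int(y[i] == y[j])) for i, j in idx]

# Representation z = x @ W, trained so same-class pairs move closer and
# different-class pairs are pushed beyond a margin.
W = rng.normal(0, 0.1, (2, 2))
lr, margin = 0.01, 2.0
for _ in range(100):
    for i, j, same in pairs:
        d = X[i] @ W - X[j] @ W
        dist = np.linalg.norm(d) + 1e-9
        if same:
            grad = np.outer(X[i] - X[j], d)          # pull together
        elif dist < margin:
            grad = -np.outer(X[i] - X[j], d) / dist  # push apart
        else:
            continue
        W -= lr * grad

# Classifier head: a single fully-labeled anchor per class, nearest wins.
anchors = {c: X[y == c][0] @ W for c in (0, 1)}
pred = np.array([min(anchors, key=lambda c: np.linalg.norm(x @ W - anchors[c]))
                 for x in X])
acc = float(np.mean(pred == y))
```

On this well-separated toy problem the pairwise signal alone suffices to shape the representation; the fully-labeled anchors are only needed to name the clusters.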

Training Deep Architectures Without End-to-End Backpropagation: A Survey on the Provably Optimal Methods

no code implementations9 Jan 2021 Shiyu Duan, Jose C. Principe

This tutorial paper surveys provably optimal alternatives to end-to-end backpropagation (E2EBP) -- the de facto standard for training deep architectures.

JPD-SE: High-Level Semantics for Joint Perception-Distortion Enhancement in Image Compression

1 code implementation24 May 2020 Shiyu Duan, Huaijin Chen, Jinwei Gu

Moreover, they focus mostly on rate-distortion, tend to underperform in perceptual quality, especially in the low-bitrate regime, and often disregard the performance of downstream computer vision algorithms, a fast-growing group of consumers of compressed images alongside human viewers.

Image Compression

Modularizing Deep Learning via Pairwise Learning With Kernels

1 code implementation12 May 2020 Shiyu Duan, Shujian Yu, Jose Principe

By redefining the conventional notions of layers, we present an alternative view on finitely wide, fully trainable deep neural networks as stacked linear models in feature spaces, leading to a kernel machine interpretation.

Binary Classification, General Classification, +1
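The "stacked linear models in feature spaces" view above can be sketched as layers that are each a kernel machine: a linear model over RBF features of the layer's input. The two-layer setup, the ridge-regression training per layer, and the layer-wise target below are illustrative assumptions, not the paper's exact construction.

```python
# Sketch: each "layer" is a kernel machine, i.e. a linear model in an
# RBF-induced feature space. Architecture and per-layer ridge training
# are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)

def rbf(X, centers, gamma):
    # Kernel features: k(x, c) = exp(-gamma * ||x - c||^2) per center c.
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

# Toy regression target.
X = rng.uniform(-3, 3, (200, 1))
y = np.sin(X[:, 0])

# Layer 1: kernel machine from inputs to a 2-D hidden representation,
# fit by ridge regression toward a hand-picked layer-wise target
# (a stand-in for whatever local training signal is used).
C1 = X[rng.choice(200, 20, replace=False)]
K1 = rbf(X, C1, gamma=0.5)
H_target = np.column_stack([y, np.cos(X[:, 0])])
A1 = np.linalg.solve(K1.T @ K1 + 1e-3 * np.eye(20), K1.T @ H_target)
H = K1 @ A1  # hidden representation: a linear model in feature space

# Layer 2: another kernel machine stacked on the hidden representation.
C2 = H[rng.choice(200, 20, replace=False)]
K2 = rbf(H, C2, gamma=2.0)
a2 = np.linalg.solve(K2.T @ K2 + 1e-3 * np.eye(20), K2.T @ y)
pred = K2 @ a2

mse = float(np.mean((pred - y) ** 2))
```

Note that nothing here is trained end to end: each layer's weights are obtained in closed form from its own input and target, which is the structural point of the kernel-machine reading of deep networks.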

Learning Backpropagation-Free Deep Architectures with Kernels

no code implementations ICLR 2019 Shiyu Duan, Shujian Yu, Yun-Mei Chen, Jose Principe

Moreover, unlike backpropagation, which turns models into black boxes, the optimal hidden representation enjoys an intuitive geometric interpretation, making the dynamics of learning in a deep kernel network simple to understand.

On Kernel Method-Based Connectionist Models and Supervised Deep Learning Without Backpropagation

1 code implementation ICLR 2019 Shiyu Duan, Shujian Yu, Yun-Mei Chen, Jose Principe

With this method, we obtain a counterpart of any given NN that is powered by kernel machines instead of neurons.
