Search Results for author: Albert No

Found 7 papers, 6 papers with code

Fully Quantized Always-on Face Detector Considering Mobile Image Sensors

no code implementations • 2 Nov 2023 • Haechang Lee, Wongi Jeong, Dongil Ryu, Hyunwoo Je, Albert No, Kijeong Kim, Se Young Chun

In this study, we aim to bridge the gap by exploring extremely low-bit lightweight face detectors, focusing on the always-on face detection scenario for mobile image sensor applications.

Face Detection

PyNET-QxQ: An Efficient PyNET Variant for QxQ Bayer Pattern Demosaicing in CMOS Image Sensors

1 code implementation • 8 Mar 2022 • Minhyeok Cho, Haechang Lee, Hyunwoo Je, Kijeong Kim, Dongil Ryu, Albert No

Additionally, modern mobile cameras employ non-Bayer color filter arrays (CFA) such as Quad Bayer, Nona Bayer, and QxQ Bayer to enhance image quality, yet most existing deep learning-based ISP (or demosaicing) models focus primarily on standard Bayer CFAs.

Demosaicking • Knowledge Distillation
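The abstract above contrasts non-Bayer color filter arrays such as Quad Bayer with the standard Bayer CFA most demosaicing models assume. As a minimal sketch (not from the paper's code), a Quad Bayer mosaic groups each 2x2 pixel block under one color, with the blocks themselves laid out in the classic RGGB pattern:

```python
import numpy as np

def quad_bayer_mask(h, w):
    """Build a Quad Bayer CFA channel map (0=R, 1=G, 2=B).

    Illustrative helper: each 2x2 pixel group shares one color, and
    the groups follow the block-level RGGB layout, so the pattern
    repeats every 4x4 pixels.
    """
    rggb = np.array([[0, 1], [1, 2]])           # block-level RGGB layout
    tile = np.kron(rggb, np.ones((2, 2), int))  # expand each color to a 2x2 group
    reps = (-(-h // 4), -(-w // 4))             # ceil-divide to cover h x w
    return np.tile(tile, reps)[:h, :w]

mask = quad_bayer_mask(8, 8)
```

Quad Bayer sensors can bin each 2x2 same-color group in low light, which is why mobile sensors favor it; QxQ extends the same idea to larger groups.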

Neural Tangent Kernel Analysis of Deep Narrow Neural Networks

1 code implementation • 7 Feb 2022 • Jongmin Lee, Joo Young Choi, Ernest K. Ryu, Albert No

The tremendous recent progress in analyzing the training dynamics of overparameterized neural networks has primarily focused on wide networks and therefore does not sufficiently address the role of depth in deep learning.

Prune Your Model Before Distill It

1 code implementation • 30 Sep 2021 • Jinhyuk Park, Albert No

Recent results suggest that the student-friendly teacher is more appropriate to distill since it provides more transferable knowledge.

Knowledge Distillation • Neural Network Compression
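The entry above is about making the teacher more "student-friendly" by pruning it before distillation; the distillation objective itself can stay standard. A minimal numpy sketch of the classic soft-label distillation loss (names and defaults are our own, not from the paper's code):

```python
import numpy as np

def kd_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.9):
    """Soft-label knowledge distillation loss (Hinton-style sketch).

    Blends cross-entropy against the teacher's temperature-softened
    outputs with cross-entropy against the hard labels. T and alpha
    are illustrative hyperparameters.
    """
    def softmax(z):
        z = z - z.max(axis=1, keepdims=True)   # stabilize exponentials
        e = np.exp(z)
        return e / e.sum(axis=1, keepdims=True)

    p_teacher = softmax(teacher_logits / T)
    log_p_student = np.log(softmax(student_logits / T))
    # Cross-entropy vs the teacher's soft targets (equals the usual KL
    # term up to a constant); T*T rescales gradients as in Hinton et al.
    soft = -(p_teacher * log_p_student).sum(axis=1).mean() * (T * T)
    n = len(labels)
    hard = -np.log(softmax(student_logits)[np.arange(n), labels]).mean()
    return alpha * soft + (1 - alpha) * hard
```

Pruning the teacher first changes only `teacher_logits` here: a sparser teacher tends to emit soft targets the smaller student can actually match.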

An Information-Theoretic Justification for Model Pruning

1 code implementation • 16 Feb 2021 • Berivan Isik, Tsachy Weissman, Albert No

We study the neural network (NN) compression problem, viewing the tension between the compression ratio and NN performance through the lens of rate-distortion theory.

Data Compression • Model Compression
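The rate-distortion tension the abstract mentions can be made concrete with plain magnitude pruning: keeping fewer weights raises the compression ratio (rate) while increasing the distortion of the weight tensor. A hypothetical sketch under that framing, not the paper's actual method:

```python
import numpy as np

def prune_to_ratio(weights, keep_ratio=0.1):
    """Keep only the largest-magnitude weights; report the MSE distortion.

    Illustrative only: keep_ratio plays the role of the "rate" knob,
    and the returned distortion is the mean squared error between the
    original and pruned tensors.
    """
    flat = np.abs(weights).ravel()
    k = max(1, int(round(keep_ratio * flat.size)))
    threshold = np.sort(flat)[-k]               # k-th largest magnitude
    mask = np.abs(weights) >= threshold
    pruned = weights * mask
    distortion = np.mean((weights - pruned) ** 2)
    return pruned, distortion
```

Sweeping `keep_ratio` from 1.0 down to 0 traces out an empirical rate-distortion curve for the weight tensor, which is the kind of trade-off the paper studies theoretically.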

WGAN with an Infinitely Wide Generator Has No Spurious Stationary Points

1 code implementation • 15 Feb 2021 • Albert No, Taeho Yoon, Sehyun Kwon, Ernest K. Ryu

Generative adversarial networks (GANs) are a widely used class of deep generative models, but their minimax training dynamics are not well understood.
