no code implementations • 18 May 2023 • Juyoung Yun, Byungkon Kang, Francois Rameau, Zhoulai Fu
Contrary to literature that credits the success of noise-tolerant neural networks to regularization effects, our study, supported by a series of rigorous experiments, provides a quantitative explanation of why standalone IEEE 16-bit floating-point neural networks can perform on par with 32-bit and mixed-precision networks in various image classification tasks.
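The claim above is that a network run entirely in IEEE float16 stays close to its float32 counterpart. A minimal NumPy sketch of that comparison, using a tiny illustrative MLP (the weights and sizes are assumptions, not the paper's models):

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny two-layer MLP with small illustrative weights.
W1 = rng.standard_normal((8, 4)).astype(np.float32) * 0.5
W2 = rng.standard_normal((4, 3)).astype(np.float32) * 0.5
x = rng.standard_normal((1, 8)).astype(np.float32)

def forward(x, W1, W2, dtype):
    # Cast inputs and weights to the target precision before computing.
    x, W1, W2 = x.astype(dtype), W1.astype(dtype), W2.astype(dtype)
    h = np.maximum(x @ W1, 0)           # ReLU hidden layer
    return (h @ W2).astype(np.float32)  # compare in a common precision

out32 = forward(x, W1, W2, np.float32)
out16 = forward(x, W1, W2, np.float16)

# Maximum elementwise gap between the two precisions.
print(np.max(np.abs(out32 - out16)))
```

On a toy forward pass like this the gap is on the order of float16's rounding error; the paper's contribution is explaining why this closeness persists for full training on image classification.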
no code implementations • 30 Jan 2023 • Juyoung Yun, Byungkon Kang, Zhoulai Fu
Lowering the precision of neural networks from the prevalent 32-bit precision has long been considered harmful to performance, despite the gain in space and time.
no code implementations • 14 Apr 2022 • Jihoon Ryoo, Byungkon Kang, Dongyeob Lee, Seunghyeon Kim, YoungHo Kim
To do so, it applies a five-step pipeline: object detection, foreground subtraction, K-means clustering, percentage estimation, and counting.
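The clustering and percentage-estimation steps of the pipeline can be sketched as follows. This is an illustrative stand-in, not the paper's implementation: the "foreground" pixels are synthetic, and a plain Lloyd's k-means replaces whatever variant the authors use.

```python
import numpy as np

rng = np.random.default_rng(1)

def kmeans(points, k, iters=20):
    # Plain Lloyd's algorithm, standing in for the pipeline's clustering step.
    centers = points[rng.choice(len(points), k, replace=False)].copy()
    for _ in range(iters):
        dists = np.linalg.norm(points[:, None] - centers[None], axis=-1)
        labels = dists.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = points[labels == j].mean(axis=0)
    return labels

# Synthetic foreground pixel colors: two well-separated RGB groups.
group_a = rng.normal([200, 50, 50], 5, size=(300, 3))
group_b = rng.normal([50, 50, 200], 5, size=(700, 3))
pixels = np.vstack([group_a, group_b])

labels = kmeans(pixels, k=2)
# Percentage estimation: share of foreground pixels in each color cluster.
percentages = np.bincount(labels, minlength=2) / len(pixels) * 100
print(sorted(percentages))
```

With clusters this well separated, the recovered percentages match the true 30/70 split of the synthetic pixels.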
1 code implementation • 6 Mar 2019 • Namhyuk Ahn, Byungkon Kang, Kyung-Ah Sohn
Recent progress in deep learning-based models has improved photo-realistic (or perceptual) single-image super-resolution significantly.
no code implementations • 28 May 2018 • Namhyuk Ahn, Byungkon Kang, Kyung-Ah Sohn
Image distortion classification and detection is an important task in many applications.
3 code implementations • ECCV 2018 • Namhyuk Ahn, Byungkon Kang, Kyung-Ah Sohn
In recent years, deep learning methods have been successfully applied to single-image super-resolution tasks.
Ranked #17 on Image Super-Resolution on BSD100 (2x upscaling)
no code implementations • NeurIPS 2013 • Byungkon Kang
In addition, we show that this framework can be extended to sampling from cardinality-constrained DPPs.
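A cardinality-constrained DPP (a k-DPP) assigns each size-k subset S of the ground set a probability proportional to det(L_S). A brute-force sketch of exact k-DPP sampling on a tiny ground set, purely for illustration: the kernel here is random, and real samplers avoid the exponential subset enumeration via eigendecompositions.

```python
import itertools
import numpy as np

rng = np.random.default_rng(2)

# A small PSD kernel L over a 6-item ground set (illustrative only).
B = rng.standard_normal((6, 6))
L = B @ B.T

def sample_k_dpp(L, k, rng):
    # Brute-force exact sampler: P(S) is proportional to det(L_S)
    # over all subsets with |S| = k. Exponential in the ground-set size.
    subsets = list(itertools.combinations(range(L.shape[0]), k))
    weights = np.array([np.linalg.det(L[np.ix_(s, s)]) for s in subsets])
    probs = weights / weights.sum()
    return subsets[rng.choice(len(subsets), p=probs)]

print(sample_k_dpp(L, 3, rng))
```

Because L is positive semidefinite, every principal minor det(L_S) is nonnegative, so the weights form a valid distribution.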