Search Results for author: Juyoung Yun

Found 9 papers, 0 papers with code

Robust Neural Pruning with Gradient Sampling Optimization for Residual Neural Networks

no code implementations26 Dec 2023 Juyoung Yun

In this study, we explore an innovative approach to neural network optimization, focusing on the application of gradient sampling techniques, similar to those used in StochGradAdam, during the pruning process.
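
The snippet doesn't spell out the mechanism, so the following is a minimal sketch, assuming "gradient sampling" means randomly masking a fraction of gradient entries before each update and that pruning is plain magnitude pruning; the function names, keep probability, and sparsity level are illustrative, not the paper's settings.

```python
import torch

def sample_gradients(model, keep_prob=0.8):
    # Randomly zero a fraction of each gradient tensor before the optimizer step.
    for p in model.parameters():
        if p.grad is not None:
            mask = torch.rand_like(p.grad) < keep_prob
            p.grad.mul_(mask)

def magnitude_prune(model, sparsity=0.5):
    # Zero the smallest-magnitude entries of each weight matrix/kernel.
    with torch.no_grad():
        for p in model.parameters():
            if p.dim() > 1:  # skip biases and 1-D parameters
                k = int(p.numel() * sparsity)
                if k > 0:
                    threshold = p.abs().flatten().kthvalue(k).values
                    p.mul_((p.abs() > threshold).to(p.dtype))
```

In a training loop, sample_gradients would run between loss.backward() and optimizer.step(), with magnitude_prune applied periodically to impose the target sparsity.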

Continuous 16-bit Training: Accelerating 32-bit Pre-Trained Neural Networks

no code implementations30 Nov 2023 Juyoung Yun

In the field of deep learning, the prevalence of models initially trained with 32-bit precision is a testament to the robustness and accuracy of that format.
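
As a rough illustration of the title's idea, the sketch below converts a 32-bit pretrained model to IEEE 16-bit and continues training entirely in that precision; the model choice, learning rate, and use of plain SGD are placeholder assumptions.

```python
import torch
import torchvision

# Load a 32-bit pretrained network, then continue training purely in float16.
model = torchvision.models.resnet18(weights="IMAGENET1K_V1").half().cuda()
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)

def train_step(images, labels):
    images, labels = images.half().cuda(), labels.cuda()
    loss = torch.nn.functional.cross_entropy(model(images), labels)
    optimizer.zero_grad()
    loss.backward()      # gradients and optimizer state stay in float16
    optimizer.step()
    return loss.item()
```

Unlike mixed-precision training, nothing here is kept in 32 bits, which matches the standalone 16-bit setting the title suggests.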

StochGradAdam: Accelerating Neural Networks Training with Stochastic Gradient Sampling

no code implementations25 Oct 2023 Juyoung Yun

In the rapidly advancing domain of deep learning optimization, this paper unveils the StochGradAdam optimizer, a novel adaptation of the well-regarded Adam algorithm.

Image Classification
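
The abstract describes Adam augmented with stochastic gradient sampling. A minimal sketch of that idea follows, with the masking rule and keep_prob assumed rather than taken from the paper: a random mask keeps only a fraction of gradient entries each step before the usual Adam moment updates.

```python
import torch

class SampledAdam(torch.optim.Adam):
    def __init__(self, params, lr=1e-3, keep_prob=0.8, **kwargs):
        super().__init__(params, lr=lr, **kwargs)
        self.keep_prob = keep_prob

    @torch.no_grad()
    def step(self, closure=None):
        # Drop a random subset of gradient entries, then do a standard Adam step.
        for group in self.param_groups:
            for p in group["params"]:
                if p.grad is not None:
                    mask = torch.rand_like(p.grad) < self.keep_prob
                    p.grad.mul_(mask)
        return super().step(closure)
```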

Linear Oscillation: A Novel Activation Function for Vision Transformer

no code implementations25 Aug 2023 Juyoung Yun

This concept of "controlled confusion" within network activations is posited to foster more robust learning, particularly in contexts that necessitate discerning subtle patterns.

Attribute
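
The snippet doesn't give the activation's formula. Purely as an illustration, an oscillatory activation with a linear envelope, f(x) = x · sin(αx + β), captures the flavor of "controlled confusion"; the exact form and its parameterization are assumptions, not the paper's definition.

```python
import torch
import torch.nn as nn

class LinearOscillation(nn.Module):
    # Assumed form: f(x) = x * sin(alpha * x + beta), with learnable alpha, beta.
    def __init__(self, alpha=1.0, beta=0.0):
        super().__init__()
        self.alpha = nn.Parameter(torch.tensor(alpha))
        self.beta = nn.Parameter(torch.tensor(beta))

    def forward(self, x):
        # A linear envelope modulated by a sinusoid: near zero the response is
        # roughly linear, while the oscillation perturbs larger activations.
        return x * torch.sin(self.alpha * x + self.beta)
```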

Stable Adam Optimization for 16-bit Neural Networks Training

no code implementations30 Jul 2023 Juyoung Yun

This not only disrupts the learning process but also poses significant challenges in deploying dependable models in real-world applications.
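
A concrete instance of the instability the abstract alludes to: Adam's default eps = 1e-8 underflows to zero in IEEE 16-bit (the smallest positive subnormal is about 6e-8), so the update's denominator sqrt(v) + eps can vanish and blow up to NaN. Raising eps to a representable value is one simple mitigation; the paper's actual fix may differ.

```python
import torch

# Adam's default eps is not representable in float16: it flushes to zero.
print(torch.tensor(1e-8, dtype=torch.float16))  # tensor(0., dtype=torch.float16)
print(torch.tensor(1e-3, dtype=torch.float16))  # nonzero, safely representable

model16 = torch.nn.Linear(10, 10).half()
optimizer = torch.optim.Adam(model16.parameters(), lr=1e-3, eps=1e-3)
```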

G-NM: A Group of Numerical Time Series Prediction Models

no code implementations20 Jun 2023 Juyoung Yun

By exploiting the potential of G-NM, we strive to advance the state of the art in large-scale time series forecasting models.

Time Series · Time Series Forecasting · +1

Comparative Study: Standalone IEEE 16-bit Floating-Point for Image Classification

no code implementations18 May 2023 Juyoung Yun, Byungkon Kang, Francois Rameau, Zhoulai Fu

Contrary to literature that credits the success of noise-tolerant neural networks to regularization effects, our study, supported by a series of rigorous experiments, provides a quantitative explanation of why standalone IEEE 16-bit floating-point neural networks can perform on par with 32-bit and mixed-precision networks in various image classification tasks.

Image Classification
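
To reproduce the comparison in spirit, one can evaluate the same classifier under each precision and compare accuracies; the helper below is a sketch in which the model and data loader are placeholders.

```python
import torch

def accuracy(model, loader, dtype):
    # Evaluate a classifier with all parameters and inputs cast to `dtype`.
    model = model.to(dtype=dtype).cuda().eval()
    correct = total = 0
    with torch.no_grad():
        for images, labels in loader:
            logits = model(images.to(dtype=dtype).cuda())
            correct += (logits.argmax(1).cpu() == labels).sum().item()
            total += labels.numel()
    return correct / total

# acc32 = accuracy(model, test_loader, torch.float32)
# acc16 = accuracy(model, test_loader, torch.float16)
```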

The Hidden Power of Pure 16-bit Floating-Point Neural Networks

no code implementations30 Jan 2023 Juyoung Yun, Byungkon Kang, Zhoulai Fu

Lowering the precision of neural networks from the prevalent 32-bit format has long been considered harmful to performance, despite the savings in memory and computation time.

Predictive Modeling of Coronal Hole Areas Using Long Short-Term Memory Networks

no code implementations17 Jan 2023 Juyoung Yun

In the era of space exploration, the implications of space weather have become increasingly evident.

Time Series · Time Series Analysis
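
The title describes LSTM forecasting of coronal hole areas; a minimal univariate one-step-ahead sketch follows, with the window length, hidden size, and synthetic data as illustrative assumptions.

```python
import torch
import torch.nn as nn

class AreaLSTM(nn.Module):
    def __init__(self, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):             # x: (batch, window, 1) past area values
        out, _ = self.lstm(x)
        return self.head(out[:, -1])  # one-step-ahead area prediction

model = AreaLSTM()
window = torch.randn(8, 30, 1)        # batch of 30-step windows (synthetic)
next_area = model(window)             # shape (8, 1)
```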
