Search Results for author: Doo Seok Jeong

Found 4 papers, 2 papers with code

Markov chain Hebbian learning algorithm with ternary synaptic units

no code implementations • 23 Nov 2017 • Guhyun Kim, Vladimir Kornijcuk, Dohun Kim, Inho Kim, Jaewook Kim, Hyo Cheon Woo, Ji Hun Kim, Cheol Seong Hwang, Doo Seok Jeong

Despite remarkable progress in machine learning techniques, state-of-the-art algorithms often prevent machines from learning in real time (online learning), due in part to the computational complexity of parameter optimization.

Tasks: BIG-bench Machine Learning, Handwritten Digit Recognition, +1
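As a rough illustration of the idea named in the title, the sketch below applies a Hebbian-style update to weights restricted to the ternary set {-1, 0, +1}. The stochastic rounding step, learning rate, and layer sizes are assumptions for illustration; this is not the paper's exact Markov chain algorithm.

```python
import numpy as np

# Illustrative sketch: Hebbian update with ternary synaptic weights.
# NOT the paper's exact Markov chain algorithm; the stochastic rounding
# step and learning rate below are assumptions.

rng = np.random.default_rng(0)
n_in, n_out = 784, 10
W = rng.choice([-1, 0, 1], size=(n_out, n_in))   # ternary synaptic units
eta = 0.05                                        # learning rate (assumed)

def ternary_hebbian_step(W, x, y):
    """One online Hebbian update followed by stochastic ternarization."""
    hebb = np.outer(y, x)                         # pre/post correlation term
    W_cont = np.clip(W + eta * hebb, -1.0, 1.0)   # continuous intermediate
    low = np.floor(W_cont)                        # lower ternary level
    p_up = W_cont - low                           # probability of rounding up
    return (low + (rng.random(W.shape) < p_up)).astype(int)

x = rng.random(n_in)            # example input pattern
y = np.eye(n_out)[3]            # one-hot "post" activity
W = ternary_hebbian_step(W, x, y)
```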

Simplified calcium signaling cascade for synaptic plasticity

no code implementations • 26 Nov 2019 • Vladimir Kornijcuk, Dohun Kim, Guhyun Kim, Doo Seok Jeong

We propose a model for synaptic plasticity based on a calcium signaling cascade.
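For orientation only, here is a generic calcium-threshold plasticity rule of the kind this line of work builds on: calcium decays, spikes add influx, and the weight potentiates or depresses depending on which threshold the calcium level crosses. All thresholds, time constants, and amplitudes are assumed; this is not the specific simplified cascade proposed in the paper.

```python
# Generic calcium-threshold plasticity sketch (parameters assumed,
# not the paper's specific cascade).
dt = 1.0          # ms
tau_ca = 20.0     # calcium decay time constant (assumed)
theta_d = 1.0     # depression threshold (assumed)
theta_p = 1.3     # potentiation threshold (assumed)
gamma_d = 0.001   # depression rate (assumed)
gamma_p = 0.002   # potentiation rate (assumed)

def simulate(pre_spikes, post_spikes, T=200):
    """Evolve calcium and synaptic weight over T ms of spike trains."""
    ca, w = 0.0, 0.5
    for t in range(T):
        ca -= dt * ca / tau_ca                  # calcium decay
        ca += 0.8 * (t in pre_spikes)           # presynaptic influx (assumed amplitude)
        ca += 0.6 * (t in post_spikes)          # postsynaptic influx (assumed amplitude)
        if ca > theta_p:
            w += gamma_p * (1.0 - w)            # potentiation toward 1
        elif ca > theta_d:
            w -= gamma_d * w                    # depression toward 0
    return w

# Pre-before-post pairing tends to push calcium past the potentiation threshold.
w_final = simulate(pre_spikes={10, 60, 110}, post_spikes={15, 65, 115})
print(f"final weight: {w_final:.3f}")
```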

Training Spiking Neural Networks Using Lessons From Deep Learning

3 code implementations • 27 Sep 2021 • Jason K. Eshraghian, Max Ward, Emre Neftci, Xinxin Wang, Gregor Lenz, Girish Dwivedi, Mohammed Bennamoun, Doo Seok Jeong, Wei D. Lu

This paper serves as a tutorial and perspective showing how to apply the lessons learnt from several decades of research in deep learning, gradient descent, backpropagation and neuroscience to biologically plausible spiking neural networks.
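One of the central lessons discussed in this line of work is training spiking neurons with backpropagation by replacing the non-differentiable spike with a surrogate gradient. The plain PyTorch sketch below illustrates that idea with a leaky integrate-and-fire layer; the surrogate slope, decay factor, and reset scheme are assumptions, and this is not the API of the accompanying code releases.

```python
import torch
import torch.nn as nn

# Generic surrogate-gradient illustration (not the accompanying library's API).

class SpikeFn(torch.autograd.Function):
    """Heaviside spike in the forward pass, fast-sigmoid surrogate backward."""
    @staticmethod
    def forward(ctx, mem_minus_thr):
        ctx.save_for_backward(mem_minus_thr)
        return (mem_minus_thr > 0).float()

    @staticmethod
    def backward(ctx, grad_output):
        (x,) = ctx.saved_tensors
        surrogate = 1.0 / (1.0 + 10.0 * x.abs()) ** 2   # slope 10 is assumed
        return grad_output * surrogate

class LIFLayer(nn.Module):
    """Leaky integrate-and-fire neurons driven by a linear projection."""
    def __init__(self, n_in, n_out, beta=0.9, threshold=1.0):
        super().__init__()
        self.fc = nn.Linear(n_in, n_out)
        self.beta, self.threshold = beta, threshold

    def forward(self, x_seq):                      # x_seq: (time, batch, n_in)
        mem = torch.zeros(x_seq.shape[1], self.fc.out_features)
        spikes = []
        for x_t in x_seq:
            mem = self.beta * mem + self.fc(x_t)   # leaky integration
            spk = SpikeFn.apply(mem - self.threshold)
            mem = mem - spk * self.threshold       # soft reset after a spike
            spikes.append(spk)
        return torch.stack(spikes)                 # (time, batch, n_out)

# Spike counts can feed a standard loss; backprop through time works
# because of the surrogate gradient.
layer = LIFLayer(100, 10)
out = layer(torch.rand(25, 4, 100))               # 25 time steps, batch of 4
loss = out.sum(dim=0).mean()
loss.backward()
```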

CBP: Backpropagation with constraint on weight precision using a pseudo-Lagrange multiplier method

1 code implementation • NeurIPS 2021 • Guhyun Kim, Doo Seok Jeong

Backward propagation of errors (backpropagation) is a method to minimize objective functions (e.g., loss functions) of deep neural networks by identifying optimal sets of weights and biases.
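To make the title concrete, the sketch below shows one generic way to fold a weight-precision constraint into backpropagation: a penalty pulls each weight toward a set of discrete levels, scaled by a multiplier that ramps up during training. The constraint function, ternary levels, and multiplier schedule are assumptions for illustration, not the paper's exact CBP formulation.

```python
import torch
import torch.nn as nn

# Illustrative constrained-training sketch (levels, penalty, and multiplier
# schedule are assumed; not the paper's exact CBP method).

levels = torch.tensor([-1.0, 0.0, 1.0])     # target ternary levels (assumed)

def precision_penalty(w):
    """Squared distance of each weight to its nearest allowed level."""
    d = (w.unsqueeze(-1) - levels).abs().min(dim=-1).values
    return (d ** 2).sum()

model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 2))
opt = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.CrossEntropyLoss()
lam = 0.0                                    # multiplier on the constraint

x = torch.randn(128, 20)
y = torch.randint(0, 2, (128,))

for epoch in range(50):
    opt.zero_grad()
    constraint = sum(precision_penalty(p) for p in model.parameters())
    loss = loss_fn(model(x), y) + lam * constraint
    loss.backward()
    opt.step()
    lam += 1e-3 * constraint.item()          # ramp up the multiplier (assumed schedule)
```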
