Search Results for author: Seongsik Park

Found 19 papers, 2 papers with code

Near-Data Processing for Differentiable Machine Learning Models

no code implementations • 6 Oct 2016 • Hyeokjun Choe, Seil Lee, Hyunha Nam, Seongsik Park, Seijoon Kim, Eui-Young Chung, Sungroh Yoon

The second is the popularity of NAND flash-based solid-state drives (SSDs) containing multicore processors that can accommodate extra computation for data processing.

BIG-bench Machine Learning

An Efficient Approach to Boosting Performance of Deep Spiking Network Training

no code implementations • 8 Nov 2016 • Seongsik Park, Sang-gil Lee, Hyunha Nam, Sungroh Yoon

To eliminate this workaround, a new class of SNN named deep spiking networks (DSNs) has recently been proposed, which can be trained directly (without a mapping from conventional deep networks) by error backpropagation with stochastic gradient descent.
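
The direct training mentioned above relies on making spike generation differentiable. As a hedged illustration only (a generic surrogate-gradient sketch, not necessarily the DSN formulation in this paper), a minimal PyTorch example could look like this; all class and function names are ours:

```python
# Illustrative sketch: a spiking layer whose threshold non-linearity is given a
# surrogate derivative so the network can be trained directly with
# error backpropagation and SGD. Assumes PyTorch; names are assumptions.
import torch
import torch.nn as nn

class SpikeFn(torch.autograd.Function):
    @staticmethod
    def forward(ctx, membrane):
        ctx.save_for_backward(membrane)
        return (membrane > 0.0).float()            # fire when the potential crosses the threshold

    @staticmethod
    def backward(ctx, grad_out):
        membrane, = ctx.saved_tensors
        surrogate = 1.0 / (1.0 + torch.abs(membrane)) ** 2   # smooth stand-in for the spike derivative
        return grad_out * surrogate

class SpikingLinear(nn.Module):
    def __init__(self, in_dim, out_dim, threshold=1.0):
        super().__init__()
        self.fc = nn.Linear(in_dim, out_dim)
        self.threshold = threshold

    def forward(self, x):
        return SpikeFn.apply(self.fc(x) - self.threshold)

layer = SpikingLinear(784, 256)
spikes = layer(torch.rand(32, 784))                # binary spikes, still differentiable for training
```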

Quantized Memory-Augmented Neural Networks

no code implementations • 10 Nov 2017 • Seongsik Park, Seijoon Kim, Seil Lee, Ho Bae, Sungroh Yoon

In this paper, we identify memory addressing (specifically, content-based addressing) as the main reason for the performance degradation and propose a robust quantization method for MANNs to address the challenge.

Quantization
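
For context on the excerpt above: content-based addressing typically scores each memory slot by the cosine similarity between a query key and the slot contents, which makes the resulting attention weights sensitive to quantization error. The NumPy sketch below illustrates that operation together with a naive uniform quantizer; it is an illustration under our own assumptions, not the paper's proposed robust quantization method.

```python
# Sketch of content-based addressing in a memory-augmented network,
# plus a naive uniform quantizer applied to the memory and the key.
import numpy as np

def quantize(x, n_bits=4):
    """Uniform quantization to 2**n_bits levels over the tensor's value range."""
    lo, hi = x.min(), x.max()
    levels = 2 ** n_bits - 1
    return np.round((x - lo) / (hi - lo + 1e-8) * levels) / levels * (hi - lo) + lo

def content_addressing(memory, key):
    """Softmax over cosine similarity between the key and each memory slot."""
    sim = memory @ key / (np.linalg.norm(memory, axis=1) * np.linalg.norm(key) + 1e-8)
    w = np.exp(sim - sim.max())
    return w / w.sum()

memory = np.random.randn(128, 32)                  # 128 slots, 32-dim each
key = np.random.randn(32)
w_full = content_addressing(memory, key)
w_quant = content_addressing(quantize(memory), quantize(key))
print(np.abs(w_full - w_quant).max())              # drift in the addressing weights caused by quantization
```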

Fast and Efficient Information Transmission with Burst Spikes in Deep Spiking Neural Networks

no code implementations • 10 Sep 2018 • Seongsik Park, Seijoon Kim, Hyeokjun Choe, Sungroh Yoon

Spiking neural networks (SNNs) are considered one of the most promising artificial neural networks due to their energy-efficient computing capability.

Image Classification

Spiking-YOLO: Spiking Neural Network for Energy-Efficient Object Detection

no code implementations • 12 Mar 2019 • Seijoon Kim, Seongsik Park, Byunggook Na, Sungroh Yoon

Over the past decade, deep neural networks (DNNs) have demonstrated remarkable performance in a variety of applications.

Image Classification • Object Detection

T2FSNN: Deep Spiking Neural Networks with Time-to-first-spike Coding

no code implementations • 26 Mar 2020 • Seongsik Park, Seijoon Kim, Byunggook Na, Sungroh Yoon

Spiking neural networks (SNNs) have gained considerable interest due to their energy-efficient characteristics, yet the lack of a scalable training algorithm has restricted their applicability to practical machine learning problems.

Dual Pointer Network for Fast Extraction of Multiple Relations in a Sentence

no code implementations • 5 Mar 2021 • Seongsik Park, Harksoo Kim

The proposed model finds n-to-1 subject-object relations using a forward object decoder.

 Ranked #1 on Relation Extraction on ACE 2005 (Relation classification F1 metric)

Object • Relation

Noise-Robust Deep Spiking Neural Networks with Temporal Information

no code implementations • 22 Apr 2021 • Seongsik Park, Dongjin Lee, Sungroh Yoon

Spiking neural networks (SNNs) have emerged as energy-efficient neural networks with temporal information.

Training Energy-Efficient Deep Spiking Neural Networks with Time-to-First-Spike Coding

no code implementations • 4 Jun 2021 • Seongsik Park, Sungroh Yoon

With TTFS coding, each neuron generates one spike at most, which leads to a significant improvement in energy efficiency.
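
As a rough illustration of the time-to-first-spike (TTFS) coding described above (stronger inputs fire earlier, and each neuron emits at most one spike), a minimal sketch might look like the following; the time window and the linear mapping are our own assumptions, not the paper's exact scheme.

```python
# Minimal TTFS encoding sketch: larger activations map to earlier spike times,
# and each neuron fires at most once within the window.
import numpy as np

def ttfs_encode(activations, t_max=100):
    """Map activations in [0, 1] to first-spike times; inactive neurons never fire."""
    activations = np.clip(activations, 0.0, 1.0)
    spike_times = np.where(activations > 0.0,
                           np.round((1.0 - activations) * t_max),
                           np.inf)                  # np.inf marks "no spike"
    return spike_times

acts = np.array([0.9, 0.5, 0.1, 0.0])
print(ttfs_encode(acts))                            # -> [10. 50. 90. inf]
```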

Energy-efficient Knowledge Distillation for Spiking Neural Networks

no code implementations • 14 Jun 2021 • Dongjin Lee, Seongsik Park, Jongwan Kim, Wuhyeong Doh, Sungroh Yoon

On the MNIST dataset, our proposed student SNN achieves up to 0.09% higher accuracy and produces 65% fewer spikes compared to a student SNN trained with the conventional knowledge distillation method.

Knowledge Distillation • Model Compression

Improving Sentence-Level Relation Extraction through Curriculum Learning

no code implementations • 20 Jul 2021 • Seongsik Park, Harksoo Kim

Sentence-level relation extraction mainly aims to classify the relation between two entities in a sentence.

Relation • Relation Extraction

AutoSNN: Towards Energy-Efficient Spiking Neural Networks

1 code implementation • 30 Jan 2022 • Byunggook Na, Jisoo Mok, Seongsik Park, Dongjin Lee, Hyeokjun Choe, Sungroh Yoon

We investigate the design choices used in previous studies in terms of accuracy and number of spikes, and find that they are not best-suited for SNNs.

Neural Architecture Search

SimFLE: Simple Facial Landmark Encoding for Self-Supervised Facial Expression Recognition in the Wild

1 code implementation • 14 Mar 2023 • Jiyong Moon, Seongsik Park

One of the key issues in facial expression recognition in the wild (FER-W) is that curating large-scale labeled facial images is challenging due to the inherent complexity and ambiguity of facial images.

Face Alignment • Facial Expression Recognition

Gradient Scaling on Deep Spiking Neural Networks with Spike-Dependent Local Information

no code implementations • 1 Aug 2023 • Seongsik Park, Jeonghee Jo, Jongkil Park, YeonJoo Jeong, Jaewook Kim, Suyoun Lee, Joon Young Kwak, Inho Kim, Jong-Keuk Park, Kyeong Seok Lee, Gye Weon Hwang, Hyun Jae Jang

Deep spiking neural networks (SNNs) are promising because they combine the model capacity of deep neural network architectures with the energy efficiency of spike-based operations.

Image Classification

M2Former: Multi-Scale Patch Selection for Fine-Grained Visual Recognition

no code implementations • 4 Aug 2023 • Jiyong Moon, Junseok Lee, Yunju Lee, Seongsik Park

Therefore, we propose multi-scale patch selection (MSPS) to improve the multi-scale capabilities of existing ViT-based models.

Fine-Grained Visual Recognition • Object
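
To make the idea of patch selection concrete, a hedged sketch of keeping the top-k patch tokens at two scales is shown below; this is a generic illustration only and does not reproduce the MSPS module proposed in the paper (scoring rule, names, and sizes are assumptions).

```python
# Generic multi-scale patch selection sketch: score patch tokens at each scale
# and keep the top-k per scale before fusing them. Assumes PyTorch.
import torch

def select_patches(tokens, k):
    """tokens: (num_patches, dim). Keep the k tokens with the largest L2 norm."""
    scores = tokens.norm(dim=-1)
    idx = scores.topk(k).indices
    return tokens[idx]

fine = torch.randn(196, 768)     # e.g. 14x14 patch tokens from a fine scale
coarse = torch.randn(49, 768)    # e.g. 7x7 patch tokens from a coarse scale
selected = torch.cat([select_patches(fine, 24), select_patches(coarse, 12)], dim=0)
print(selected.shape)            # torch.Size([36, 768])
```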
