Search Results for author: Seongsik Park

Found 14 papers, 0 papers with code

AutoSNN: Towards Energy-Efficient Spiking Neural Networks

no code implementations · 30 Jan 2022 · Byunggook Na, Jisoo Mok, Seongsik Park, Dongjin Lee, Hyeokjun Choe, Sungroh Yoon

We investigate the design choices used in previous studies in terms of accuracy and the number of spikes, and find that they are not best-suited for SNNs.

Neural Architecture Search

Scalable Smartphone Cluster for Deep Learning

no code implementations · 23 Oct 2021 · Byunggook Na, Jaehee Jang, Seongsik Park, Seijoon Kim, Joonoo Kim, Moon Sik Jeong, Kwang Choon Kim, Seon Heo, Yoonsang Kim, Sungroh Yoon

We implemented large-batch synchronous training of DNNs based on Caffe, a deep learning library.
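The core mechanism behind large-batch synchronous training is averaging gradients across workers before a single shared update. A minimal sketch of that idea (illustrative only, not the authors' Caffe-based smartphone implementation):

```python
def synchronous_step(weights, worker_grads, lr=0.1):
    """One synchronous data-parallel SGD update: average the gradients
    computed by all workers on their local mini-batches, then apply a
    single step to the shared weights."""
    n = len(worker_grads)
    avg = [sum(g[i] for g in worker_grads) / n for i in range(len(weights))]
    return [w - lr * g for w, g in zip(weights, avg)]

# Toy example: 4 workers, each contributing a gradient for 3 weights.
w = synchronous_step([0.0, 0.0, 0.0],
                     [[1.0, 2.0, 3.0], [2.0, 4.0, 6.0],
                      [3.0, 6.0, 9.0], [4.0, 8.0, 12.0]])
# average gradient is [2.5, 5.0, 7.5], so one step moves w to -lr times that
```

Because every worker applies the same averaged gradient, the result is mathematically equivalent to one SGD step on the combined large batch.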

Improving Sentence-Level Relation Extraction through Curriculum Learning

no code implementations · 20 Jul 2021 · Seongsik Park, Harksoo Kim

Sentence-level relation extraction mainly aims to classify the relation between two entities in a sentence.

Relation Extraction

Energy-efficient Knowledge Distillation for Spiking Neural Networks

no code implementations · 14 Jun 2021 · Dongjin Lee, Seongsik Park, Jongwan Kim, Wuhyeong Doh, Sungroh Yoon

On the MNIST dataset, our proposed student SNN achieves up to 0.09% higher accuracy and produces 65% fewer spikes than a student SNN trained with the conventional knowledge distillation method.
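The "conventional knowledge distillation" baseline referred to here is typically a Hinton-style loss: cross-entropy with the hard label plus KL divergence to the teacher's temperature-softened outputs. A sketch of that baseline (the paper's energy-efficient spiking-specific loss may differ):

```python
import math

def softmax(logits, T=1.0):
    # Numerically stable temperature-scaled softmax.
    m = max(logits)
    exps = [math.exp((z - m) / T) for z in logits]
    s = sum(exps)
    return [e / s for e in exps]

def distillation_loss(student_logits, teacher_logits, label, T=4.0, alpha=0.5):
    """Conventional KD loss: alpha-weighted cross-entropy with the hard
    label plus KL divergence from the teacher's softened distribution,
    scaled by T^2 to keep gradient magnitudes comparable."""
    ce = -math.log(softmax(student_logits)[label])
    p_t = softmax(teacher_logits, T)
    p_s = softmax(student_logits, T)
    kl = sum(t * (math.log(t) - math.log(s)) for t, s in zip(p_t, p_s))
    return alpha * ce + (1 - alpha) * T * T * kl

loss = distillation_loss([2.0, 0.5, 0.1], [3.0, 0.2, 0.1], label=0)
```

When the student already matches the teacher exactly, the KL term vanishes and only the hard-label cross-entropy remains.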

Knowledge Distillation · Model Compression +1

Training Energy-Efficient Deep Spiking Neural Networks with Time-to-First-Spike Coding

no code implementations · 4 Jun 2021 · Seongsik Park, Sungroh Yoon

With TTFS coding, each neuron generates one spike at most, which leads to a significant improvement in energy efficiency.
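The single-spike property of TTFS coding can be illustrated with a tiny encoder in which stronger inputs fire earlier and zero inputs never fire. The linear mapping below is an illustrative choice, not necessarily the paper's exact encoding kernel:

```python
def ttfs_encode(intensities, t_max=10.0):
    """Time-to-first-spike coding sketch: each input neuron emits at
    most one spike, with larger intensities mapped to earlier spike
    times. Silent neurons get an infinite spike time (no spike at all),
    which is where the energy saving comes from."""
    return [(1.0 - x) * t_max if x > 0 else float("inf")
            for x in intensities]

times = ttfs_encode([1.0, 0.5, 0.0])  # -> [0.0, 5.0, inf]
```

Compared with rate coding, where a value of 0.5 might require many spikes over a window, here every neuron contributes at most one spike regardless of its value.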

Noise-Robust Deep Spiking Neural Networks with Temporal Information

no code implementations · 22 Apr 2021 · Seongsik Park, Dongjin Lee, Sungroh Yoon

Spiking neural networks (SNNs) have emerged as energy-efficient neural networks with temporal information.

Dual Pointer Network for Fast Extraction of Multiple Relations in a Sentence

no code implementations · 5 Mar 2021 · Seongsik Park, Harksoo Kim

The proposed model finds n-to-1 subject-object relations using a forward object decoder.

Ranked #1 on Relation Extraction on ACE 2005 (Relation classification F1 metric)

Relation Extraction

T2FSNN: Deep Spiking Neural Networks with Time-to-first-spike Coding

no code implementations · 26 Mar 2020 · Seongsik Park, Seijoon Kim, Byunggook Na, Sungroh Yoon

Spiking neural networks (SNNs) have gained considerable interest due to their energy-efficient characteristics, yet the lack of a scalable training algorithm has restricted their applicability to practical machine learning problems.

Spiking-YOLO: Spiking Neural Network for Energy-Efficient Object Detection

no code implementations · 12 Mar 2019 · Seijoon Kim, Seongsik Park, Byunggook Na, Sungroh Yoon

Over the past decade, deep neural networks (DNNs) have demonstrated remarkable performance in a variety of applications.

Image Classification · Real-Time Object Detection

Fast and Efficient Information Transmission with Burst Spikes in Deep Spiking Neural Networks

no code implementations · 10 Sep 2018 · Seongsik Park, Seijoon Kim, Hyeokjun Choe, Sungroh Yoon

Spiking neural networks (SNNs) are considered one of the most promising artificial neural networks due to their energy-efficient computing capability.

Image Classification

Quantized Memory-Augmented Neural Networks

no code implementations · 10 Nov 2017 · Seongsik Park, Seijoon Kim, Seil Lee, Ho Bae, Sungroh Yoon

In this paper, we identify memory addressing (specifically, content-based addressing) as the main reason for the performance degradation and propose a robust quantization method for MANNs to address the challenge.
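Content-based addressing in memory-augmented neural networks is a softmax over cosine similarities between a query key and each memory slot, which is why quantization error in the key or memory directly perturbs the address weights. A sketch with a plain uniform quantizer for illustration (the paper proposes a robust quantization method, not this naive one):

```python
import math

def cosine_address(key, memory):
    """Content-based addressing: softmax over the cosine similarity
    between the query key and every memory row."""
    def dot(a, b): return sum(x * y for x, y in zip(a, b))
    def norm(a): return math.sqrt(dot(a, a))
    sims = [dot(key, row) / (norm(key) * norm(row) + 1e-8) for row in memory]
    m = max(sims)
    exps = [math.exp(s - m) for s in sims]
    z = sum(exps)
    return [e / z for e in exps]

def uniform_quantize(x, bits=1):
    """Naive uniform quantizer (illustrative): low bit widths bunch the
    similarities together, degrading addressing sharpness."""
    lo, hi = min(x), max(x)
    levels = 2 ** bits - 1
    return [lo + round((v - lo) / (hi - lo + 1e-8) * levels) / levels * (hi - lo)
            for v in x]

memory = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.6, 0.6, 0.5]]
w_full = cosine_address([0.9, 0.2, 0.4], memory)
w_quant = cosine_address(uniform_quantize([0.9, 0.2, 0.4]), memory)
```

Comparing `w_full` and `w_quant` shows how quantizing the key shifts the address distribution even though both remain valid probability distributions.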

Quantization

An Efficient Approach to Boosting Performance of Deep Spiking Network Training

no code implementations · 8 Nov 2016 · Seongsik Park, Sang-gil Lee, Hyunha Nam, Sungroh Yoon

To eliminate this workaround, a new class of SNNs called deep spiking networks (DSNs) was recently proposed, which can be trained directly (without a mapping from conventional deep networks) by error backpropagation with stochastic gradient descent.
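Direct backpropagation through spiking units requires a forward pass with leaky integrate-and-fire dynamics and some differentiable stand-in for the hard threshold. The sketch below uses a rectangular surrogate gradient, one common choice; the DSN paper's exact gradient formulation may differ:

```python
def lif_forward(inputs, threshold=1.0, decay=0.9):
    """Leaky integrate-and-fire dynamics over time: the membrane
    potential leaks, accumulates input, and a spike is emitted (with a
    reset to zero) whenever the threshold is crossed."""
    v, spikes = 0.0, []
    for x in inputs:
        v = decay * v + x
        s = 1.0 if v >= threshold else 0.0
        v *= (1.0 - s)               # reset membrane on spike
        spikes.append(s)
    return spikes

def surrogate_grad(v, threshold=1.0, width=0.5):
    """Rectangular surrogate for the non-differentiable spike function:
    pass gradient only when the potential is near threshold, letting
    error backpropagation with SGD flow through spiking units."""
    return 1.0 if abs(v - threshold) < width else 0.0

spikes = lif_forward([0.5, 0.6, 0.2, 1.2])
```

The surrogate is used only on the backward pass; the forward pass still emits hard binary spikes.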

Near-Data Processing for Differentiable Machine Learning Models

no code implementations · 6 Oct 2016 · Hyeokjun Choe, Seil Lee, Hyunha Nam, Seongsik Park, Seijoon Kim, Eui-Young Chung, Sungroh Yoon

The second is the popularity of NAND flash-based solid-state drives (SSDs) containing multicore processors that can accommodate extra computation for data processing.
