Search Results for author: Runchun Wang

Found 11 papers, 2 papers with code

Single-bit-per-weight deep convolutional neural networks without batch-normalization layers for embedded systems

1 code implementation • 16 Jul 2019 • Mark D. McDonnell, Hesham Mostafa, Runchun Wang, Andre van Schaik

We found, following experiments with wide residual networks applied to the ImageNet, CIFAR-10 and CIFAR-100 image classification datasets, that BN layers do not consistently offer a significant advantage.

Ranked #94 on Image Classification on CIFAR-100 (using extra training data)

General Classification • Image Classification
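For orientation, the single-bit-per-weight idea can be sketched as sign-based weight binarization with a per-layer scale. The Python below is an illustrative assumption (including the choice of mean-absolute-value scaling), not the authors' released implementation.

```python
# A minimal sketch of single-bit-per-weight inference: weights are reduced to
# their sign times a per-layer scale. The scaling rule here (mean absolute
# weight) is a hypothetical stand-in, not the paper's exact scheme.
import numpy as np

def binarize_weights(w):
    """Map full-precision weights to +/- scale using only their sign."""
    scale = np.mean(np.abs(w))          # assumed per-layer scale
    return scale * np.sign(w)

rng = np.random.default_rng(0)
x = rng.standard_normal((4, 128))       # a batch of 4 feature vectors
w = rng.standard_normal((128, 10)) * 0.1

y_full = x @ w                          # full-precision layer output
y_1bit = x @ binarize_weights(w)        # single-bit-per-weight output

print("correlation:", np.corrcoef(y_full.ravel(), y_1bit.ravel())[0, 1])
```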

Large-Scale Neuromorphic Spiking Array Processors: A quest to mimic the brain

no code implementations • 23 May 2018 • Chetan Singh Thakur, Jamal Molin, Gert Cauwenberghs, Giacomo Indiveri, Kundan Kumar, Ning Qiao, Johannes Schemmel, Runchun Wang, Elisabetta Chicca, Jennifer Olson Hasler, Jae-sun Seo, Shimeng Yu, Yu Cao, André van Schaik, Ralph Etienne-Cummings

Neuromorphic engineering (NE) encompasses a diverse range of approaches to information processing that are inspired by neurobiological systems, and this feature distinguishes neuromorphic systems from conventional computing systems.

An FPGA-based Massively Parallel Neuromorphic Cortex Simulator

no code implementations • 8 Mar 2018 • Runchun Wang, Chetan Singh Thakur, Andre van Schaik

This paper presents a massively parallel and scalable neuromorphic cortex simulator designed for simulating large and structurally connected spiking neural networks, such as complex models of various areas of the cortex.
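The simulator itself is an FPGA design, but the kind of workload it iterates over can be sketched as a time-stepped spiking network update. The leaky integrate-and-fire model and all parameter values below are generic assumptions for illustration, not the simulator's internals.

```python
# A minimal time-stepped leaky integrate-and-fire (LIF) network sketch in NumPy.
# Neuron model, weights and drive are illustrative assumptions only.
import numpy as np

rng = np.random.default_rng(1)
n, dt, tau, v_th = 200, 1e-3, 20e-3, 1.0    # neurons, step (s), membrane tau, threshold
w = rng.standard_normal((n, n)) * 0.05      # recurrent weights (assumed random)
v = np.zeros(n)                             # membrane potentials
spikes = np.zeros(n)
total_spikes = 0

for step in range(1000):                    # 1 s of simulated time
    i_ext = (rng.random(n) < 0.05) * 25.0   # sparse, strong external drive (illustrative)
    v += (-v + w @ spikes + i_ext) * (dt / tau)
    spikes = (v >= v_th).astype(float)      # neurons at threshold emit a spike
    v[spikes > 0] = 0.0                     # reset fired neurons
    total_spikes += int(spikes.sum())

print("spikes in 1 s of simulated time:", total_spikes)
```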

A Stochastic Approach to STDP

no code implementations • 13 Mar 2016 • Runchun Wang, Chetan Singh Thakur, Tara Julia Hamilton, Jonathan Tapson, André van Schaik

The decay generator will then generate an exponential decay, which will be used by the STDP adaptor to perform the weight adaptation.
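A conventional pair-based STDP rule built on exponentially decaying traces captures the role such a decay plays in weight adaptation. The time constants, amplitudes and spike times below are illustrative assumptions, not values from the paper.

```python
# A minimal pair-based STDP sketch using exponentially decaying pre/post traces.
# All constants and spike times are illustrative, not the paper's.
import numpy as np

tau, dt = 20e-3, 1e-3            # trace time constant and time step (s)
a_plus, a_minus = 0.01, 0.012    # potentiation / depression amplitudes (assumed)
decay = np.exp(-dt / tau)        # per-step exponential decay factor

w, pre_trace, post_trace = 0.5, 0.0, 0.0
pre_spikes  = {10, 40, 70}       # spike times in steps (illustrative)
post_spikes = {12, 38, 72}

for t in range(100):
    pre_trace *= decay           # exponential decay of both traces
    post_trace *= decay
    if t in pre_spikes:
        pre_trace += 1.0
        w -= a_minus * post_trace   # pre after post -> depression
    if t in post_spikes:
        post_trace += 1.0
        w += a_plus * pre_trace     # post after pre -> potentiation
    w = min(max(w, 0.0), 1.0)       # keep the weight bounded

print("final weight:", round(w, 4))
```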


A compact aVLSI conductance-based silicon neuron

no code implementations • 3 Sep 2015 • Runchun Wang, Chetan Singh Thakur, Tara Julia Hamilton, Jonathan Tapson, Andre van Schaik

We present an analogue Very Large Scale Integration (aVLSI) implementation that uses first-order lowpass filters to implement a conductance-based silicon neuron for high-speed neuromorphic systems.
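The first-order lowpass filter the abstract refers to has a simple discrete-time counterpart: each input spike produces an exponentially decaying response. The time constant and spike times below are illustrative assumptions, not the aVLSI circuit's values.

```python
# A discrete-time first-order lowpass filter applied to a spike train, the
# building block used to model conductance dynamics. Parameters are illustrative.
import numpy as np

dt, tau = 1e-4, 5e-3                 # time step and filter time constant (s)
decay = np.exp(-dt / tau)            # per-step decay of the first-order filter

t = np.arange(0, 0.05, dt)
spikes = np.zeros_like(t)
spikes[[50, 120, 300]] = 1.0         # a few input spike events

g = np.zeros_like(t)                 # filtered "conductance" trace
for k in range(1, len(t)):
    g[k] = decay * g[k - 1] + spikes[k]   # exponential (lowpass) response

print("peak of filtered trace:", round(float(g.max()), 3))
```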

A neuromorphic hardware architecture using the Neural Engineering Framework for pattern recognition

1 code implementation • 21 Jul 2015 • Runchun Wang, Chetan Singh Thakur, Tara Julia Hamilton, Jonathan Tapson, Andre van Schaik

The architecture is not limited to handwriting recognition, but is generally applicable as an extremely fast pattern recognition processor for various kinds of patterns such as speech and images.

Handwriting Recognition • Handwritten Digit Recognition
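As background on the Neural Engineering Framework named in the title, the core encode/decode step can be sketched as follows: neurons with random encoders respond nonlinearly to an input, and linear decoders found by least squares reconstruct it. The rectified-linear neurons and parameters below are assumptions for illustration, not the paper's hardware design.

```python
# A minimal NEF-style encode/decode sketch: random encoders plus least-squares
# decoders for a scalar signal. All details here are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(2)
n = 100
encoders = rng.choice([-1.0, 1.0], size=n)       # scalar encoders
gains = rng.uniform(0.5, 2.0, size=n)
biases = rng.uniform(-1.0, 1.0, size=n)

def activity(x):
    """Population response to a scalar input x (rectified-linear neurons)."""
    return np.maximum(0.0, gains * (encoders * x) + biases)

xs = np.linspace(-1, 1, 200)
A = np.array([activity(x) for x in xs])          # activity matrix (200 x n)
decoders, *_ = np.linalg.lstsq(A, xs, rcond=None)

x_hat = activity(0.3) @ decoders
print("decoded 0.3 as", round(float(x_hat), 3))
```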

A Trainable Neuromorphic Integrated Circuit that Exploits Device Mismatch

no code implementations • 10 Jul 2015 • Chetan Singh Thakur, Runchun Wang, Tara Julia Hamilton, Jonathan Tapson, Andre van Schaik

Additionally, we characterise each neuron and discuss the statistical variability of its tuning curve that arises due to random device mismatch, a desirable property for the learning capability of the TAB.

An Online Learning Algorithm for Neuromorphic Hardware Implementation

no code implementations • 11 May 2015 • Chetan Singh Thakur, Runchun Wang, Saeed Afshar, Gregory Cohen, Tara Julia Hamilton, Jonathan Tapson, Andre van Schaik

We propose a sign-based online learning (SOL) algorithm for a neuromorphic hardware framework called Trainable Analogue Block (TAB).

Regression
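A generic sign-based online update gives the flavour of rules like SOL: only the sign of the prediction error drives the weight change. The random feature layer, learning rate and toy regression task below are illustrative assumptions, not the TAB hardware details.

```python
# A generic sign-based online learning sketch: output weights are nudged by the
# sign of the error, one sample at a time. The setup is a toy assumption.
import numpy as np

rng = np.random.default_rng(3)
n_features, eta = 50, 0.01
proj = rng.standard_normal((1, n_features))      # fixed random "hidden layer"

def features(x):
    return np.tanh(x * proj).ravel()             # nonlinear feature vector

w = np.zeros(n_features)                         # trainable output weights
for _ in range(5000):                            # online: one sample per step
    x = rng.uniform(-1, 1)
    target = np.sin(np.pi * x)                   # toy regression target
    y = features(x) @ w
    w += eta * np.sign(target - y) * features(x) # sign-based update

test_x = 0.5
print("target:", round(np.sin(np.pi * test_x), 3),
      "prediction:", round(float(features(test_x) @ w), 3))
```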

A neuromorphic hardware framework based on population coding

no code implementations • 2 Mar 2015 • Chetan Singh Thakur, Tara Julia Hamilton, Runchun Wang, Jonathan Tapson, André van Schaik

These neuronal populations are characterised by a diverse distribution of tuning curves, ensuring that the entire range of input stimuli is encoded.
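The idea of tiling the stimulus range with diverse tuning curves can be sketched with a bank of Gaussian tuning curves and a simple population-vector decode. The curve shapes and decoder below are illustrative assumptions, not the hardware framework itself.

```python
# A minimal population-coding sketch: Gaussian tuning curves with diverse
# preferred stimuli cover the input range; a weighted average decodes the input.
import numpy as np

n = 32
preferred = np.linspace(-1.0, 1.0, n)            # diverse preferred stimuli
width = 0.15                                     # assumed tuning-curve width

def encode(x):
    """Population response of Gaussian tuning curves to a scalar stimulus x."""
    return np.exp(-0.5 * ((x - preferred) / width) ** 2)

def decode(rates):
    """Population-vector style decode: activity-weighted preferred values."""
    return float(rates @ preferred / rates.sum())

for x in (-0.7, 0.0, 0.42):
    print(f"stimulus {x:+.2f} -> decoded {decode(encode(x)):+.3f}")
```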

The Ripple Pond: Enabling Spiking Networks to See

no code implementations • 13 Jun 2013 • Saeed Afshar, Gregory Cohen, Runchun Wang, Andre van Schaik, Jonathan Tapson, Torsten Lehmann, Tara Julia Hamilton

In this paper we present the biologically inspired Ripple Pond Network (RPN), a simply connected spiking neural network that, operating together with recently proposed PolyChronous Networks (PCN), enables rapid, unsupervised, scale and rotation invariant object recognition using efficient spatio-temporal spike coding.

Object Recognition
