no code implementations • 27 Sep 2021 • Yeshwanth Bethi, Ying Xu, Gregory Cohen, Andre van Schaik, Saeed Afshar
Through the use of simple local adaptive selection thresholds at each node, the network rapidly learns to appropriately allocate its neuronal resources at each layer for any given problem without using a real-valued error measure.
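The local adaptive-threshold idea can be sketched as a winner-take-all feature layer in which each neuron's selection threshold tightens when it wins and all thresholds relax when no neuron matches — so selectivity self-allocates without any real-valued error signal. The rates, sizes, and update rule below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Hypothetical sketch; neuron count, rates, and the exact update rule are assumptions.
rng = np.random.default_rng(0)

n_neurons, dim = 4, 8
weights = rng.random((n_neurons, dim))
weights /= np.linalg.norm(weights, axis=1, keepdims=True)
thresholds = np.zeros(n_neurons)            # start fully open
eta, d_close, d_open = 0.01, 0.002, 0.001   # assumed learning/adaptation rates

def present(x):
    """Present one input; adapt the winner, or open all thresholds on a miss."""
    x = x / (np.linalg.norm(x) + 1e-12)
    sims = weights @ x                       # cosine similarity to each neuron
    matched = np.flatnonzero(sims >= thresholds)
    if matched.size == 0:
        thresholds[:] -= d_open              # no neuron matched: relax every threshold
        return None
    w = matched[np.argmax(sims[matched])]    # best-matching eligible neuron wins
    weights[w] += eta * (x - weights[w])     # pull winner toward the input
    weights[w] /= np.linalg.norm(weights[w])
    thresholds[w] += d_close                 # winner becomes more selective
    return int(w)

for _ in range(300):
    present(rng.random(dim))
```

Because the only learning signals are match/no-match events, the scheme needs no gradients and no real-valued error measure.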
no code implementations • 20 Nov 2019 • Saeed Afshar, Andrew P Nicholson, Andre van Schaik, Gregory Cohen
In this work, we present optical space imaging using an unconventional yet promising class of imaging devices known as neuromorphic event-based sensors.
1 code implementation • 16 Jul 2019 • Mark D. McDonnell, Hesham Mostafa, Runchun Wang, Andre van Schaik
We found, following experiments with wide residual networks applied to the ImageNet, CIFAR-10 and CIFAR-100 image classification datasets, that BN layers do not consistently offer a significant advantage.
Ranked #94 on Image Classification on CIFAR-100 (using extra training data)
2 code implementations • 7 Dec 2018 • Tat-Jun Chin, Samya Bagchi, Anders Eriksson, Andre van Schaik
Star trackers are primarily optical devices that are used to estimate the attitude of a spacecraft by recognising and tracking star patterns.
no code implementations • 8 Mar 2018 • Runchun Wang, Chetan Singh Thakur, Andre van Schaik
This paper presents a massively parallel and scalable neuromorphic cortex simulator designed for simulating large and structurally connected spiking neural networks, such as complex models of various areas of the cortex.
no code implementations • 14 Mar 2016 • Saeed Afshar, Gregory Cohen, Tara Julia Hamilton, Jonathan Tapson, Andre van Schaik
This variance motivated the investigation of event-based decaying memory surfaces in comparison to time-based decaying memory surfaces to capture the temporal aspect of the event-based data.
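The contrast between the two memory-surface types can be illustrated with a minimal sketch: a time-based surface decays a pixel's value with the wall-clock time since its last event, while an event-based surface decays it with the number of subsequent events. The time constants and event format here are assumptions for illustration.

```python
import math

tau_t = 50.0   # assumed time constant (ms) for time-based decay
tau_e = 20.0   # assumed event constant for event-based decay

def time_surface(events, t_now, tau=tau_t):
    """Time-based decay: each pixel decays with elapsed time since its last event."""
    last_time = {}
    for (x, y, t) in events:
        last_time[(x, y)] = t                # keep only the most recent timestamp
    return {p: math.exp(-(t_now - t) / tau) for p, t in last_time.items()}

def event_surface(events, tau=tau_e):
    """Event-based decay: each pixel decays with the count of events since its last event."""
    last_index = {}
    for i, (x, y, t) in enumerate(events):
        last_index[(x, y)] = i
    n = len(events)
    return {p: math.exp(-(n - 1 - i) / tau) for p, i in last_index.items()}

# toy event stream: (x, y, timestamp in ms)
events = [(0, 0, 10.0), (1, 0, 12.0), (0, 0, 30.0)]
ts = time_surface(events, t_now=40.0)
es = event_surface(events)
```

The event-based variant makes the decay rate data-driven: during bursts of activity the surface forgets quickly, and during quiet periods it holds its state, which is one way to handle the timing variance the excerpt describes.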
no code implementations • 3 Sep 2015 • Ying Xu, Chetan Singh Thakur, Tara Julia Hamilton, Jonathan Tapson, Runchun Wang, Andre van Schaik
The architecture consists of an analogue chip and a control module.
no code implementations • 3 Sep 2015 • Runchun Wang, Chetan Singh Thakur, Tara Julia Hamilton, Jonathan Tapson, Andre van Schaik
We present an analogue Very Large Scale Integration (aVLSI) implementation that uses first-order lowpass filters to implement a conductance-based silicon neuron for high-speed neuromorphic systems.
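The behaviour of a first-order lowpass filter — the building block the excerpt names — can be sketched in discrete time with a simple Euler step; the step size and time constant below are arbitrary assumptions, not values from the hardware.

```python
def lowpass_step(y, x, dt, tau):
    """One Euler step of the first-order lowpass filter dy/dt = (x - y) / tau."""
    return y + (dt / tau) * (x - y)

# Drive the filter with a unit step input: the output rises exponentially toward 1,
# the same shaping a conductance-based synapse applies to an incoming spike train.
y, dt, tau = 0.0, 0.1, 1.0
trace = []
for _ in range(100):
    y = lowpass_step(y, 1.0, dt, tau)
    trace.append(y)
```

In the aVLSI setting the same dynamics are realised by analogue circuitry rather than arithmetic, which is what makes such neurons suitable for high-speed neuromorphic systems.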
1 code implementation • 21 Jul 2015 • Runchun Wang, Chetan Singh Thakur, Tara Julia Hamilton, Jonathan Tapson, Andre van Schaik
The architecture is not limited to handwriting recognition, but is generally applicable as an extremely fast pattern recognition processor for various kinds of patterns such as speech and images.
no code implementations • 10 Jul 2015 • Chetan Singh Thakur, Runchun Wang, Tara Julia Hamilton, Jonathan Tapson, Andre van Schaik
Additionally, we characterise each neuron and discuss the statistical variability of its tuning curve that arises due to random device mismatch, a desirable property for the learning capability of the TAB.
no code implementations • 11 May 2015 • Chetan Singh Thakur, Runchun Wang, Saeed Afshar, Gregory Cohen, Tara Julia Hamilton, Jonathan Tapson, Andre van Schaik
We propose a sign-based online learning (SOL) algorithm for a neuromorphic hardware framework called Trainable Analogue Block (TAB).
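A sign-based update can be sketched as follows: a fixed random hidden layer feeds a trainable output layer whose weights move by the *sign* of the error only, which is cheap to realise in hardware. The layer sizes, learning rate, and identity-function target here are illustrative assumptions, not the TAB configuration.

```python
import math
import random

random.seed(1)

# Hypothetical sketch: fixed random tanh hidden layer, sign-based output updates.
dim, n_hidden = 1, 20
W_in = [[random.gauss(0, 1) for _ in range(dim)] for _ in range(n_hidden)]
b = [random.gauss(0, 1) for _ in range(n_hidden)]
w_out = [0.0] * n_hidden

def hidden(x):
    return [math.tanh(sum(wij * xj for wij, xj in zip(row, x)) + bi)
            for row, bi in zip(W_in, b)]

def predict(x):
    return sum(wi * hi for wi, hi in zip(w_out, hidden(x)))

def sol_step(x, target, eta=0.005):
    """Move the output weights by the sign of the error, not its magnitude."""
    global w_out
    y = predict(x)
    s = (target > y) - (target < y)          # +1, 0, or -1
    w_out = [wi + eta * s * hi for wi, hi in zip(w_out, hidden(x))]

def mean_abs_err(xs):
    return sum(abs(predict([x]) - x) for x in xs) / len(xs)

xs = [i / 10 - 1 for i in range(21)]         # evaluation grid on [-1, 1]
err_before = mean_abs_err(xs)
for _ in range(2000):
    x = random.uniform(-1, 1)
    sol_step([x], x)                          # learn the toy target y = x
err_after = mean_abs_err(xs)
```

Because each update needs only a comparator rather than a multiplier for the error term, this style of rule maps naturally onto analogue hardware.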
no code implementations • 11 Nov 2014 • Saeed Afshar, Libin George, Jonathan Tapson, Andre van Schaik, Philip de Chazal, Tara Julia Hamilton
We have added a simplified neuromorphic model of Spike Time Dependent Plasticity (STDP) to the Synapto-dendritic Kernel Adapting Neuron (SKAN).
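For context, the classic pairwise exponential STDP rule that such simplified models approximate can be stated in a few lines; the amplitudes and time constants below are conventional textbook assumptions, not the hardware-friendly variant used with SKAN.

```python
import math

A_plus, A_minus = 0.05, 0.06    # potentiation / depression amplitudes (assumed)
tau_plus = tau_minus = 20.0     # exponential decay time constants in ms (assumed)

def stdp_dw(t_pre, t_post):
    """Weight change for a single pre/post spike pair under exponential STDP."""
    dt = t_post - t_pre
    if dt > 0:                  # pre fired before post: potentiate
        return A_plus * math.exp(-dt / tau_plus)
    if dt < 0:                  # post fired before pre: depress
        return -A_minus * math.exp(dt / tau_minus)
    return 0.0
```

The sign of the weight change depends only on spike order, and its magnitude falls off with the pair's separation in time — the causal structure that simplified neuromorphic STDP models preserve.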
no code implementations • 6 Aug 2014 • Saeed Afshar, Libin George, Jonathan Tapson, Andre van Schaik, Tara Julia Hamilton
This paper describes the Synapto-dendritic Kernel Adapting Neuron (SKAN), a simple spiking neuron model that performs statistical inference and unsupervised learning of spatiotemporal spike patterns.
no code implementations • 13 Jun 2013 • Saeed Afshar, Gregory Cohen, Runchun Wang, Andre van Schaik, Jonathan Tapson, Torsten Lehmann, Tara Julia Hamilton
In this paper we present the biologically inspired Ripple Pond Network (RPN), a simply connected spiking neural network that, operating together with recently proposed PolyChronous Networks (PCN), enables rapid, unsupervised, scale- and rotation-invariant object recognition using efficient spatio-temporal spike coding.