1 code implementation • 26 Nov 2024 • Chen Li, Corey Lammie, Manuel Le Gallo, Bipin Rajendran
Moreover, it supports on-chip adaptation to new hardware constraints and tasks without updating analog weights, providing a flexible and versatile solution for real-world AI applications.
no code implementations • 7 Nov 2024 • Dengyu Wu, Jiechen Chen, Bipin Rajendran, H. Vincent Poor, Osvaldo Simeone
Inspired by biological processes, neuromorphic computing utilizes spiking neural networks (SNNs) to perform inference tasks, offering significant efficiency gains for workloads involving sequential data.
no code implementations • 31 Oct 2024 • Zihang Song, Matteo Zecchin, Bipin Rajendran, Osvaldo Simeone
Sequence models have demonstrated the ability to perform tasks like channel equalization and symbol detection by automatically adapting to current channel conditions.
no code implementations • 7 Oct 2024 • Anagha Nimbekar, Prabodh Katti, Chen Li, Bashir M. Al-Hashimi, Amit Acharyya, Bipin Rajendran
In this paper, we develop a hardware-software co-optimisation strategy to port software-trained deep neural networks (DNNs) to reduced-precision spiking models, demonstrating fast and accurate inference in a novel event-driven CMOS reconfigurable spiking inference accelerator.
no code implementations • 26 Apr 2024 • Robert O Shea, Prabodh Katti, Bipin Rajendran
Robustness to artefacts was assessed by corrupting ECG data with sinusoidal baseline drift, shift, rescaling and noise, before encoding.
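The four artefact types named above can be sketched on a synthetic trace as follows. This is an illustrative reconstruction only: the stand-in signal, sampling rate, and corruption magnitudes are assumptions, not the paper's settings.

```python
import numpy as np

# Sketch of the four artefact types applied to a 1-D ECG trace before
# encoding (signal and parameter values are illustrative placeholders).
rng = np.random.default_rng(42)
fs = 250.0                                   # assumed sampling rate, Hz
t = np.arange(0, 2.0, 1.0 / fs)
ecg = np.sin(2 * np.pi * 1.2 * t)            # stand-in for a real ECG trace

drift = 0.3 * np.sin(2 * np.pi * 0.2 * t)    # sinusoidal baseline drift
shifted = ecg + 0.5                          # constant shift
rescaled = 1.5 * ecg                         # rescaling
noisy = ecg + rng.normal(0, 0.1, ecg.shape)  # additive Gaussian noise

corrupted = ecg + drift                      # drift-corrupted trace
```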
no code implementations • 9 Apr 2024 • Zihang Song, Osvaldo Simeone, Bipin Rajendran
In-context learning (ICL), a property demonstrated by transformer-based sequence models, refers to the automatic inference of an input-output mapping based on examples of the mapping provided as context.
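The core idea of ICL can be illustrated with a toy linear task (a hypothetical example, not the paper's transformer model): the mapping is inferred purely from context pairs, with no parameter updates, and then applied to a new query.

```python
import numpy as np

# Toy illustration of in-context inference of an input-output mapping:
# context pairs (x_i, y_i) come from an unknown linear map, which is
# recovered from the context alone and applied to a query input.
rng = np.random.default_rng(0)
w_true = np.array([2.0, -1.0])           # unknown task mapping

X_ctx = rng.normal(size=(8, 2))          # context inputs
y_ctx = X_ctx @ w_true                   # context outputs

# Infer the mapping from the context examples only.
w_hat, *_ = np.linalg.lstsq(X_ctx, y_ctx, rcond=None)
x_query = np.array([1.0, 1.0])
y_pred = x_query @ w_hat                 # prediction for the query
```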
no code implementations • 14 Feb 2024 • Zihang Song, Prabodh Katti, Osvaldo Simeone, Bipin Rajendran
Spiking Neural Networks (SNNs) have been recently integrated into Transformer architectures due to their potential to reduce computational demands and to improve power efficiency.
no code implementations • 27 Jan 2024 • Prabodh Katti, Anagha Nimbekar, Chen Li, Amit Acharyya, Bashir M. Al-Hashimi, Bipin Rajendran
Bayesian neural networks offer better estimates of model uncertainty compared to frequentist networks.
no code implementations • 12 Jan 2024 • Eva Lagunas, Flor Ortiz, Geoffrey Eappen, Saed Daoud, Wallace Alves Martins, Jorge Querol, Symeon Chatzinotas, Nicolas Skatchkovsky, Bipin Rajendran, Osvaldo Simeone
Spiking neural networks (SNNs) implemented on neuromorphic processors (NPs) can enhance the energy efficiency of deployments of artificial intelligence (AI) for specific workloads.
no code implementations • 8 Dec 2023 • Chen Li, Bipin Rajendran
Our research utilizes the ResNet model for a comprehensive analysis of the impact of the noise adaptor on low-latency SNNs.
no code implementations • 27 Sep 2023 • Bipin Rajendran, Osvaldo Simeone, Bashir M. Al-Hashimi
Artificial intelligence (AI) algorithms based on neural networks have been designed for decades with the goal of maximising some measure of accuracy.
no code implementations • 22 Aug 2023 • Flor Ortiz, Nicolas Skatchkovsky, Eva Lagunas, Wallace A. Martins, Geoffrey Eappen, Saed Daoud, Osvaldo Simeone, Bipin Rajendran, Symeon Chatzinotas
The latest satellite communication (SatCom) missions are characterized by a fully reconfigurable on-board software-defined payload, capable of adapting radio resources to the temporal and spatial variations of the system traffic.
no code implementations • 21 Apr 2023 • Yiming Ai, Bipin Rajendran
Brain-computer interfaces are being explored for a wide variety of therapeutic applications.
no code implementations • 2 Feb 2023 • Prabodh Katti, Nicolas Skatchkovsky, Osvaldo Simeone, Bipin Rajendran, Bashir M. Al-Hashimi
Bayesian Neural Networks (BNNs) can overcome the overconfidence that plagues traditional frequentist deep neural networks, and are hence considered a key enabler for reliable AI systems.
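The uncertainty estimate a BNN provides can be sketched with Monte Carlo weight sampling (a minimal illustration, assuming a factorised Gaussian posterior; shapes and values are placeholders).

```python
import numpy as np

# Minimal sketch of Bayesian predictive uncertainty: draw many weight
# samples from an assumed Gaussian posterior and report the mean and
# spread of the resulting predictions for one input.
rng = np.random.default_rng(0)
w_mean = np.array([1.0, 2.0])     # posterior mean of the weights
w_std = np.array([0.1, 0.2])      # posterior std of the weights
x = np.array([0.5, -1.0])         # a single input

samples = rng.normal(w_mean, w_std, size=(1000, 2))  # weight samples
preds = samples @ x               # one prediction per weight sample

pred_mean = preds.mean()          # Bayesian model average
pred_std = preds.std()            # epistemic uncertainty estimate
```

A frequentist network would output only `w_mean @ x`; the spread `pred_std` is the extra information the Bayesian treatment adds.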
no code implementations • 2 Nov 2021 • Bleema Rosenfeld, Osvaldo Simeone, Bipin Rajendran
Accordingly, a central problem in neuromorphic computing is training spiking neural networks (SNNs) to reproduce spatio-temporal spiking patterns in response to given spiking stimuli.
no code implementations • 21 Feb 2021 • Bleema Rosenfeld, Bipin Rajendran, Osvaldo Simeone
Spiking Neural Networks (SNNs) have recently gained popularity as machine learning models for on-device edge intelligence for applications such as mobile healthcare management and natural language processing due to their low power profile.
no code implementations • 10 Feb 2021 • Vinay Joshi, Wangxin He, Jae-sun Seo, Bipin Rajendran
We propose a hybrid in-memory computing (HIC) architecture for the training of DNNs on hardware accelerators that results in memory-efficient inference and outperforms baseline software accuracy in benchmark tasks.
no code implementations • 5 Aug 2020 • Anakha V Babu, Osvaldo Simeone, Bipin Rajendran
We discuss a high-performance, high-throughput hardware accelerator for probabilistic Spiking Neural Networks (SNNs) based on Generalized Linear Model (GLM) neurons, which uses binary STT-RAM devices as synapses and digital CMOS logic for neurons.
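A GLM neuron of the kind referenced above can be sketched as follows: the spike probability at each time step is a sigmoid of the filtered input history, and the binary spike is a Bernoulli draw. The filter length, weights, and bias here are illustrative assumptions, not the accelerator's parameters.

```python
import numpy as np

# Sketch of a probabilistic GLM spiking neuron: spike probability is a
# sigmoid of a filtered input spike history plus a bias; the output
# spike is sampled from a Bernoulli distribution at each step.
rng = np.random.default_rng(1)

def sigmoid(u):
    return 1.0 / (1.0 + np.exp(-u))

alpha = np.array([1.0, 0.5, 0.2])   # synaptic filter, most recent tap first
bias = -1.0
x = rng.integers(0, 2, size=50)     # binary input spike train

spikes = np.zeros_like(x)
for t in range(len(x)):
    window = x[max(0, t - 2):t + 1][::-1]      # most recent inputs first
    u = bias + np.dot(alpha[:len(window)], window)
    spikes[t] = rng.random() < sigmoid(u)      # Bernoulli spike
```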
no code implementations • 30 Apr 2020 • Adnan Mehonic, Abu Sebastian, Bipin Rajendran, Osvaldo Simeone, Eleni Vasilaki, Anthony J. Kenyon
Machine learning, particularly in the form of deep learning, has driven most of the recent fundamental developments in artificial intelligence.
no code implementations • 25 Mar 2020 • Vinay Joshi, Geethan Karunaratne, Manuel Le Gallo, Irem Boybat, Christophe Piveteau, Abu Sebastian, Bipin Rajendran, Evangelos Eleftheriou
Strategies to improve the efficiency of MVM computation in hardware have been demonstrated with minimal impact on training accuracy.
no code implementations • 7 Jun 2019 • Vinay Joshi, Manuel Le Gallo, Irem Boybat, Simon Haefeli, Christophe Piveteau, Martino Dazzi, Bipin Rajendran, Abu Sebastian, Evangelos Eleftheriou
In-memory computing is a promising non-von Neumann approach where certain computational tasks are performed within memory units by exploiting the physical attributes of memory devices.
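The in-memory computing idea can be sketched for the matrix-vector multiply (MVM) that dominates DNN workloads: weights are stored as device conductances, so the analog product carries device-level deviations. The noise level below is an illustrative assumption, not a measured device characteristic.

```python
import numpy as np

# Sketch of an analog in-memory matrix-vector multiply: the programmed
# conductances deviate from the ideal weights, so the computed product
# carries device noise.
rng = np.random.default_rng(7)
W = rng.normal(size=(16, 8))                 # ideal weight matrix
x = rng.normal(size=8)                       # input vector

G = W + rng.normal(0, 0.01, W.shape)         # programmed conductances (noisy)
y_analog = G @ x                             # in-memory MVM result
y_ideal = W @ x                              # exact digital result

mvm_error = np.abs(y_analog - y_ideal).max() # worst-case deviation
```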
no code implementations • 28 May 2019 • S. R. Nandakumar, Irem Boybat, Manuel Le Gallo, Evangelos Eleftheriou, Abu Sebastian, Bipin Rajendran
Combining the computational potential of supervised SNNs with the parallel compute power of computational memory, the work paves the way for the next generation of efficient brain-inspired systems.
no code implementations • 11 Jan 2019 • Bipin Rajendran, Abu Sebastian, Michael Schmuker, Narayan Srinivasa, Evangelos Eleftheriou
In this paper, we review some of the architectural and system level design aspects involved in developing a new class of brain-inspired information processing engines that mimic the time-based information encoding and processing aspects of the brain.
no code implementations • 23 Oct 2018 • Navin Anwani, Bipin Rajendran
To tackle this, the problem of training a multi-layer SNN is first formulated as an optimization problem whose objective function is based on the deviation in membrane potential rather than on spike arrival instants.
no code implementations • 23 Oct 2018 • Bleema Rosenfeld, Osvaldo Simeone, Bipin Rajendran
In this work, the use of SNNs as stochastic policies is explored under an energy-efficient first-to-spike action rule, whereby the action taken by the RL agent is determined by the occurrence of the first spike among the output neurons.
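The first-to-spike action rule described above can be sketched in a few lines (a minimal illustration; the spike trains here are random placeholders rather than the output of a trained SNN policy).

```python
import numpy as np

# Sketch of the first-to-spike action rule: each output neuron emits a
# spike train over T time steps, and the agent takes the action whose
# neuron spikes earliest.
rng = np.random.default_rng(3)
T, n_actions = 20, 4
spike_trains = rng.random((n_actions, T)) < 0.1   # binary spikes, (A, T)

# First spike time per neuron; neurons that never spike get time T.
first_spike = np.where(spike_trains.any(axis=1),
                       spike_trains.argmax(axis=1), T)
action = int(first_spike.argmin())                # earliest spike wins
```

The energy appeal is that the agent can stop integrating as soon as the first output spike occurs, rather than waiting for the full horizon T.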
no code implementations • 22 Feb 2018 • Alireza Bagheri, Osvaldo Simeone, Bipin Rajendran
Due to the prominence of Artificial Neural Networks (ANNs) as classifiers, their sensitivity to adversarial examples, as well as robust training schemes, have recently been the subject of intense investigation.
no code implementations • 9 Nov 2017 • Shruti R. Kulkarni, John M. Alexiades, Bipin Rajendran
On the standard MNIST database images of handwritten digits, our network achieves an accuracy of 99.80% on the training set and 98.06% on the test set, with nearly 7x fewer parameters compared to the state-of-the-art spiking networks.
no code implementations • 9 Nov 2017 • Anakha V Babu, Bipin Rajendran
We also study the performance of stochastic memristive DNNs when used as inference engines with noise-corrupted data, and find that if the device variability can be minimized, the relative degradation in performance for the stochastic DNN is smaller than that of the software baseline.
1 code implementation • 29 Oct 2017 • Alireza Bagheri, Osvaldo Simeone, Bipin Rajendran
Third-generation neural networks, or Spiking Neural Networks (SNNs), aim at harnessing the energy efficiency of spike-domain processing by building on computing elements that operate on, and exchange, spikes.
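A leaky integrate-and-fire (LIF) neuron is the canonical spike-domain computing element of the kind described above; a minimal sketch, with illustrative constants, follows.

```python
# Sketch of a leaky integrate-and-fire (LIF) neuron: the membrane
# potential leaks, integrates weighted input spikes, and emits an output
# spike (then resets) when it crosses a threshold. Constants are
# illustrative, not taken from the paper.
def lif(input_spikes, w=0.6, leak=0.9, v_th=1.0):
    v, out = 0.0, []
    for s in input_spikes:
        v = leak * v + w * s          # leaky integration of input spike
        if v >= v_th:                 # threshold crossing
            out.append(1)
            v = 0.0                   # reset after spike
        else:
            out.append(0)
    return out

out = lif([1, 1, 0, 1, 1, 1, 0, 0])   # -> [0, 1, 0, 0, 1, 0, 0, 0]
```

The neuron communicates only through the sparse binary events in `out`, which is where the energy efficiency of spike-domain processing comes from.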
no code implementations • NeurIPS 2016 • Pulkit Tandon, Yash H. Malviya, Bipin Rajendran
We demonstrate a spiking neural circuit for azimuth angle detection inspired by the echolocation circuits of the Horseshoe bat Rhinolophus ferrumequinum and utilize it to devise a model for navigation and target tracking, capturing several key aspects of information transmission in biology.
no code implementations • 29 Oct 2014 • Shibani Santurkar, Bipin Rajendran
We demonstrate a spiking neural network for navigation motivated by the chemotaxis network of Caenorhabditis elegans.
no code implementations • 29 Oct 2014 • Shibani Santurkar, Bipin Rajendran
In order to harness the computational advantages spiking neural networks promise over their non-spiking counterparts, we develop a network comprising seven spiking neurons with non-plastic synapses, which we show is extremely robust in tracking a range of concentrations.