Search Results for author: Bipin Rajendran

Found 27 papers, 1 paper with code

Neuromorphic In-Context Learning for Energy-Efficient MIMO Symbol Detection

no code implementations9 Apr 2024 Zihang Song, Osvaldo Simeone, Bipin Rajendran

In-context learning (ICL), a property demonstrated by transformer-based sequence models, refers to the automatic inference of an input-output mapping based on examples of the mapping provided as context.

In-Context Learning
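
To make the idea concrete, here is a minimal, hedged sketch of in-context symbol detection: a few (received signal, pilot symbol) pairs serve as the context from which the input-output mapping is inferred for a new query. The least-squares channel estimate below is only a stand-in for the transformer-based sequence model studied in the paper; the function names, QPSK constellation, and dimensions are assumptions made for illustration.

import numpy as np

rng = np.random.default_rng(0)

def make_context(num_pairs, channel, noise_std, constellation):
    # Draw pilot symbols and pass them through the unknown channel with noise.
    symbols = rng.choice(constellation, size=num_pairs)
    noise = noise_std * (rng.standard_normal(num_pairs)
                         + 1j * rng.standard_normal(num_pairs))
    return channel * symbols + noise, symbols

def icl_detect(context_rx, context_tx, query_rx, constellation):
    # Stand-in for the sequence model: least-squares channel estimate from the
    # context pairs, then nearest-constellation-point detection of the query.
    h_hat = np.vdot(context_tx, context_rx) / np.vdot(context_tx, context_tx)
    return constellation[np.argmin(np.abs(query_rx - h_hat * constellation))]

qpsk = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2)
h = 0.8 * np.exp(1j * 0.3)                      # unknown channel realization
ctx_rx, ctx_tx = make_context(8, h, 0.1, qpsk)  # examples provided as context
tx = qpsk[2]
rx = h * tx + 0.1 * (rng.standard_normal() + 1j * rng.standard_normal())
print(icl_detect(ctx_rx, ctx_tx, rx, qpsk), "vs true symbol", tx)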

Stochastic Spiking Attention: Accelerating Attention with Stochastic Computing in Spiking Networks

no code implementations14 Feb 2024 Zihang Song, Prabodh Katti, Osvaldo Simeone, Bipin Rajendran

Spiking Neural Networks (SNNs) have been recently integrated into Transformer architectures due to their potential to reduce computational demands and to improve power efficiency.

Bayesian Inference Accelerator for Spiking Neural Networks

no code implementations27 Jan 2024 Prabodh Katti, Anagha Nimbekar, Chen Li, Amit Acharyya, Bashir M. Al-Hashimi, Bipin Rajendran

Bayesian neural networks offer better estimates of model uncertainty compared to frequentist networks.

Bayesian Inference

Performance Evaluation of Neuromorphic Hardware for Onboard Satellite Communication Applications

no code implementations12 Jan 2024 Eva Lagunas, Flor Ortiz, Geoffrey Eappen, Saed Daoud, Wallace Alves Martins, Jorge Querol, Symeon Chatzinotas, Nicolas Skatchkovsky, Bipin Rajendran, Osvaldo Simeone

Spiking neural networks (SNNs) implemented on neuromorphic processors (NPs) can enhance the energy efficiency of deployments of artificial intelligence (AI) for specific workloads.

Noise Adaptor in Spiking Neural Networks

no code implementations8 Dec 2023 Chen Li, Bipin Rajendran

Our research utilizes the ResNet model for a comprehensive analysis of the impact of the noise adaptor on low-latency SNNs.

Towards Efficient and Trustworthy AI Through Hardware-Algorithm-Communication Co-Design

no code implementations27 Sep 2023 Bipin Rajendran, Osvaldo Simeone, Bashir M. Al-Hashimi

Artificial intelligence (AI) algorithms based on neural networks have been designed for decades with the goal of maximising some measure of accuracy.

Decision Making, Uncertainty Quantification

Energy-Efficient On-Board Radio Resource Management for Satellite Communications via Neuromorphic Computing

no code implementations22 Aug 2023 Flor Ortiz, Nicolas Skatchkovsky, Eva Lagunas, Wallace A. Martins, Geoffrey Eappen, Saed Daoud, Osvaldo Simeone, Bipin Rajendran, Symeon Chatzinotas

The latest satellite communication (SatCom) missions are characterized by a fully reconfigurable on-board software-defined payload, capable of adapting radio resources to the temporal and spatial variations of the system traffic.

Management

Bayesian Inference on Binary Spiking Networks Leveraging Nanoscale Device Stochasticity

no code implementations2 Feb 2023 Prabodh Katti, Nicolas Skatchkovsky, Osvaldo Simeone, Bipin Rajendran, Bashir M. Al-Hashimi

Bayesian Neural Networks (BNNs) can overcome the problem of overconfidence that plagues traditional frequentist deep neural networks, and are hence considered to be a key enabler for reliable AI systems.

Bayesian Inference

Spiking Generative Adversarial Networks With a Neural Network Discriminator: Local Training, Bayesian Models, and Continual Meta-Learning

no code implementations2 Nov 2021 Bleema Rosenfeld, Osvaldo Simeone, Bipin Rajendran

Accordingly, a central problem in neuromorphic computing is training spiking neural networks (SNNs) to reproduce spatio-temporal spiking patterns in response to given spiking stimuli.

Generative Adversarial Network, Meta-Learning

Fast On-Device Adaptation for Spiking Neural Networks via Online-Within-Online Meta-Learning

no code implementations21 Feb 2021 Bleema Rosenfeld, Bipin Rajendran, Osvaldo Simeone

Owing to their low power profile, Spiking Neural Networks (SNNs) have recently gained popularity as machine learning models for on-device edge intelligence in applications such as mobile healthcare management and natural language processing.

Management, Meta-Learning

Hybrid In-memory Computing Architecture for the Training of Deep Neural Networks

no code implementations10 Feb 2021 Vinay Joshi, Wangxin He, Jae-sun Seo, Bipin Rajendran

We propose a hybrid in-memory computing (HIC) architecture for the training of DNNs on hardware accelerators that results in memory-efficient inference and outperforms baseline software accuracy in benchmark tasks.

SpinAPS: A High-Performance Spintronic Accelerator for Probabilistic Spiking Neural Networks

no code implementations5 Aug 2020 Anakha V Babu, Osvaldo Simeone, Bipin Rajendran

We discuss a high-performance and high-throughput hardware accelerator for probabilistic Spiking Neural Networks (SNNs) based on Generalized Linear Model (GLM) neurons that uses binary STT-RAM devices as synapses and digital CMOS logic for neurons.

Human Activity Recognition, Vocal Bursts Intensity Prediction
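
For readers unfamiliar with GLM neurons, the following is a small illustrative sketch of a probabilistic spiking neuron of that type: at each step, the spike probability is a sigmoid of a weighted sum of the current input spikes plus a self-feedback term, and the spike is sampled stochastically. The single-step kernels, weight values, and function names are simplifying assumptions; the paper's contribution is the hardware mapping of such neurons onto STT-RAM synapses and CMOS logic, not this software model.

import numpy as np

rng = np.random.default_rng(1)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def glm_neuron(in_spikes, w, w_fb, bias):
    # in_spikes: (T, N) binary array. Spike probability at step t is
    # sigmoid(bias + w . x_t + w_fb * s_{t-1}); output spikes are sampled.
    T = in_spikes.shape[0]
    out = np.zeros(T, dtype=int)
    prev = 0
    for t in range(T):
        u = bias + in_spikes[t] @ w + w_fb * prev
        out[t] = rng.random() < sigmoid(u)
        prev = out[t]
    return out

x = (rng.random((100, 16)) < 0.2).astype(int)   # Bernoulli input spike trains
w = rng.normal(0, 0.5, size=16)
print(glm_neuron(x, w, w_fb=-1.0, bias=-0.5).sum(), "output spikes in 100 steps")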

ESSOP: Efficient and Scalable Stochastic Outer Product Architecture for Deep Learning

no code implementations25 Mar 2020 Vinay Joshi, Geethan Karunaratne, Manuel Le Gallo, Irem Boybat, Christophe Piveteau, Abu Sebastian, Bipin Rajendran, Evangelos Eleftheriou

Strategies to improve the efficiency of MVM computation in hardware have been demonstrated with minimal impact on training accuracy.

Accurate deep neural network inference using computational phase-change memory

no code implementations7 Jun 2019 Vinay Joshi, Manuel Le Gallo, Irem Boybat, Simon Haefeli, Christophe Piveteau, Martino Dazzi, Bipin Rajendran, Abu Sebastian, Evangelos Eleftheriou

In-memory computing is a promising non-von Neumann approach where certain computational tasks are performed within memory units by exploiting the physical attributes of memory devices.

Emerging Technologies

Supervised Learning in Spiking Neural Networks with Phase-Change Memory Synapses

no code implementations28 May 2019 S. R. Nandakumar, Irem Boybat, Manuel Le Gallo, Evangelos Eleftheriou, Abu Sebastian, Bipin Rajendran

Combining the computational potential of supervised SNNs with the parallel compute power of computational memory, the work paves the way for the next generation of efficient brain-inspired systems.

Low-Power Neuromorphic Hardware for Signal Processing Applications

no code implementations11 Jan 2019 Bipin Rajendran, Abu Sebastian, Michael Schmuker, Narayan Srinivasa, Evangelos Eleftheriou

In this paper, we review some of the architectural and system level design aspects involved in developing a new class of brain-inspired information processing engines that mimic the time-based information encoding and processing aspects of the brain.

BIG-bench Machine Learning

Learning First-to-Spike Policies for Neuromorphic Control Using Policy Gradients

no code implementations23 Oct 2018 Bleema Rosenfeld, Osvaldo Simeone, Bipin Rajendran

In this work, the use of SNNs as stochastic policies is explored under an energy-efficient first-to-spike action rule, whereby the action taken by the RL agent is determined by the occurrence of the first spike among the output neurons.

Reinforcement Learning (RL)
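
The action rule itself is simple to state in code. Below is a minimal sketch of first-to-spike action selection, assuming the SNN policy has already produced binary spike trains for each output (action) neuron; the function name, tie-breaking rule, and toy spike trains are assumptions for illustration, not the paper's implementation.

import numpy as np

rng = np.random.default_rng(3)

def first_to_spike_action(output_spike_trains):
    # output_spike_trains: (num_actions, T) binary array of policy-output spikes.
    # The chosen action is the neuron that fires first; ties (including the
    # all-silent case) are broken uniformly at random.
    num_actions, T = output_spike_trains.shape
    first_times = np.full(num_actions, T)          # T means "never spiked"
    for a in range(num_actions):
        idx = np.flatnonzero(output_spike_trains[a])
        if idx.size:
            first_times[a] = idx[0]
    winners = np.flatnonzero(first_times == first_times.min())
    return int(rng.choice(winners))

spikes = np.array([[0, 0, 1, 0],    # action 0 first spikes at t=2
                   [0, 1, 0, 1],    # action 1 first spikes at t=1 -> selected
                   [0, 0, 0, 0]])   # action 2 never spikes
print(first_to_spike_action(spikes))  # -> 1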

Training Multi-layer Spiking Neural Networks using NormAD based Spatio-Temporal Error Backpropagation

no code implementations23 Oct 2018 Navin Anwani, Bipin Rajendran

To tackle this, the problem of training a multi-layer SNN is first formulated as an optimization problem whose objective function is based on the deviation in membrane potential rather than on spike arrival instants.
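
As a rough illustration of what a membrane-potential-based objective can look like, here is a hedged sketch: the neuron is penalised for staying below threshold at times when a spike is desired and for exceeding threshold when it is not. The hinge form, threshold value, and function name are assumptions for illustration; the actual NormAD cost and update rule in the paper differ.

import numpy as np

def potential_deviation_loss(v_observed, desired_spikes, v_threshold=1.0):
    # v_observed: (T,) membrane potential trace; desired_spikes: (T,) binary.
    # Penalise the potential deviation from threshold in the wrong direction.
    want = desired_spikes.astype(bool)
    miss = np.maximum(0.0, v_threshold - v_observed[want])          # should have fired
    false_fire = np.maximum(0.0, v_observed[~want] - v_threshold)   # should stay silent
    return 0.5 * (np.sum(miss ** 2) + np.sum(false_fire ** 2))

v = np.array([0.2, 1.3, 0.4, 0.9, 1.1])
target = np.array([0, 1, 0, 1, 0])
print(potential_deviation_loss(v, target))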

Adversarial Training for Probabilistic Spiking Neural Networks

no code implementations22 Feb 2018 Alireza Bagheri, Osvaldo Simeone, Bipin Rajendran

Due to the prominence of Artificial Neural Networks (ANNs) as classifiers, their sensitivity to adversarial examples, as well as robust training schemes, have recently been the subject of intense investigation.

Learning and Real-time Classification of Hand-written Digits With Spiking Neural Networks

no code implementations9 Nov 2017 Shruti R. Kulkarni, John M. Alexiades, Bipin Rajendran

On the standard MNIST database images of handwritten digits, our network achieves an accuracy of 99.80% on the training set and 98.06% on the test set, with nearly 7x fewer parameters compared to the state-of-the-art spiking networks.

General Classification

Stochastic Deep Learning in Memristive Networks

no code implementations9 Nov 2017 Anakha V Babu, Bipin Rajendran

We also study the performance of stochastic memristive DNNs when used as inference engines with noise-corrupted data, and find that if device variability can be minimized, the relative degradation in performance of the stochastic DNN is smaller than that of the software baseline.
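
The kind of comparison described here can be sketched in a few lines: evaluate a deterministic software baseline, then re-evaluate with weights perturbed to model device variability and average over trials. Everything below (the linear toy classifier, multiplicative Gaussian variation model, and variability levels) is an assumption for illustration, not the paper's experimental setup.

import numpy as np

rng = np.random.default_rng(2)

def accuracy(W, X, y):
    # Single linear-layer classifier accuracy; W: (classes, features).
    return float(np.mean(np.argmax(X @ W.T, axis=1) == y))

def memristive_accuracy(W, X, y, variability, trials=20):
    # Average accuracy when each stored weight is perturbed by multiplicative
    # Gaussian device variation of the given relative magnitude.
    accs = [accuracy(W * (1 + variability * rng.standard_normal(W.shape)), X, y)
            for _ in range(trials)]
    return float(np.mean(accs))

# Toy data: two Gaussian blobs and an ideal separating weight matrix.
X = np.vstack([rng.normal(+1, 1, (200, 8)), rng.normal(-1, 1, (200, 8))])
y = np.array([0] * 200 + [1] * 200)
W = np.vstack([np.ones(8), -np.ones(8)])

print("baseline:", accuracy(W, X, y))
print("5% variability:", memristive_accuracy(W, X, y, 0.05))
print("50% variability:", memristive_accuracy(W, X, y, 0.50))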

Training Probabilistic Spiking Neural Networks with First-to-spike Decoding

1 code implementation29 Oct 2017 Alireza Bagheri, Osvaldo Simeone, Bipin Rajendran

Third-generation neural networks, or Spiking Neural Networks (SNNs), aim at harnessing the energy efficiency of spike-domain processing by building on computing elements that operate on, and exchange, spikes.

Classification, Early Classification +1

Efficient and Robust Spiking Neural Circuit for Navigation Inspired by Echolocating Bats

no code implementations NeurIPS 2016 Pulkit Tandon, Yash H. Malviya, Bipin Rajendran

We demonstrate a spiking neural circuit for azimuth angle detection inspired by the echolocation circuits of the Horseshoe bat Rhinolophus ferrumequinum and utilize it to devise a model for navigation and target tracking, capturing several key aspects of information transmission in biology.

Computational Efficiency

Sub-threshold CMOS Spiking Neuron Circuit Design for Navigation Inspired by C. elegans Chemotaxis

no code implementations29 Oct 2014 Shibani Santurkar, Bipin Rajendran

We demonstrate a spiking neural network for navigation motivated by the chemotaxis network of Caenorhabditis elegans.

A neural circuit for navigation inspired by C. elegans Chemotaxis

no code implementations29 Oct 2014 Shibani Santurkar, Bipin Rajendran

In order to harness the computational advantages that spiking neural networks promise over their non-spiking counterparts, we develop a network comprising 7 spiking neurons with non-plastic synapses, which we show is extremely robust in tracking a range of concentrations.
