Search Results for author: Abu Sebastian

Found 30 papers, 9 papers with code

Zero-shot Classification using Hyperdimensional Computing

no code implementations · 30 Jan 2024 · Samuele Ruffino, Geethan Karunaratne, Michael Hersche, Luca Benini, Abu Sebastian, Abbas Rahimi

Zero-shot learning (ZSL) classification refers to a model's ability to assign inputs to novel classes for which it has seen no training examples.

Tasks: Attribute, Attribute Extraction, +2
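A minimal sketch of the general idea, assuming a hyperdimensional-style encoding with hypothetical attribute names (this is not the paper's exact pipeline): class prototypes are bundled from attribute hypervectors alone, so a novel class needs no training examples.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 10_000  # hypervector dimensionality

# Random bipolar hypervectors for semantic attributes (hypothetical set).
attributes = {a: rng.choice([-1, 1], D) for a in ("striped", "hooved", "white")}

def prototype(attr_names):
    # Bundle attribute vectors into a class prototype: no training data needed.
    return np.sign(sum(attributes[a] for a in attr_names))

classes = {"zebra": prototype(["striped", "hooved", "white"]),
           "horse": prototype(["hooved", "white"])}

# A query encoded as a hypervector (here: a corrupted zebra prototype).
query = classes["zebra"] * rng.choice([-1, 1, 1, 1], D)  # ~25% sign flips

# Zero-shot classification = nearest class prototype by dot product.
print(max(classes, key=lambda c: classes[c] @ query))  # zebra
```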

Probabilistic Abduction for Visual Abstract Reasoning via Learning Rules in Vector-symbolic Architectures

1 code implementation · 29 Jan 2024 · Michael Hersche, Francesco Di Stefano, Thomas Hofmann, Abu Sebastian, Abbas Rahimi

Abstract reasoning is a cornerstone of human intelligence, and replicating it with artificial intelligence (AI) presents an ongoing challenge.

Tasks: Attribute

TCNCA: Temporal Convolution Network with Chunked Attention for Scalable Sequence Processing

no code implementations · 9 Dec 2023 · Aleksandar Terzic, Michael Hersche, Geethan Karunaratne, Luca Benini, Abu Sebastian, Abbas Rahimi

We build on their approach by replacing the linear recurrence with a special temporal convolutional network that permits a larger receptive field with shallower networks and reduces the computational complexity to $O(L)$.

Tasks: Language Modelling
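As an illustration of the receptive-field argument (a generic dilated causal TCN, not the exact TCNCA block): with kernel size k and dilations 1, 2, 4, ..., the receptive field grows exponentially with depth while each layer stays O(L) in the sequence length.

```python
import torch
import torch.nn as nn

class DilatedTCN(nn.Module):
    """Stack of dilated causal 1-D convolutions (illustrative sketch)."""
    def __init__(self, dim=64, depth=6, k=3):
        super().__init__()
        self.convs = nn.ModuleList(
            nn.Conv1d(dim, dim, k, dilation=2 ** i, padding=(k - 1) * 2 ** i)
            for i in range(depth))
        self.receptive_field = (k - 1) * (2 ** depth - 1) + 1

    def forward(self, x):                     # x: (batch, dim, L)
        L = x.shape[-1]
        for conv in self.convs:
            x = torch.relu(conv(x)[..., :L])  # trim right padding -> causal
        return x

tcn = DilatedTCN()
print(tcn.receptive_field)                 # 127 with depth=6, k=3
print(tcn(torch.randn(1, 64, 256)).shape)  # torch.Size([1, 64, 256])
```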

MIMONets: Multiple-Input-Multiple-Output Neural Networks Exploiting Computation in Superposition

1 code implementation · NeurIPS 2023 · Nicolas Menet, Michael Hersche, Geethan Karunaratne, Luca Benini, Abu Sebastian, Abbas Rahimi

MIMONets augment various deep neural network architectures with variable binding mechanisms to represent an arbitrary number of inputs in a compositional data structure via fixed-width distributed representations.
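A minimal sketch of the underlying variable-binding mechanism, assuming MAP-style binding with random bipolar keys (MIMONets additionally train the network itself to tolerate the resulting crosstalk):

```python
import torch

torch.manual_seed(0)
d, n = 1024, 5

# Fixed random bipolar keys implement variable binding.
keys = torch.sign(torch.randn(n, d))
values = torch.randn(n, d)

# Bind each value to its key (element-wise multiply) and superpose:
# n items now live in a single fixed-width d-dimensional vector.
s = (keys * values).sum(dim=0)

# Unbinding with key i recovers value i up to crosstalk from the other
# n-1 items, which shrinks as d grows.
v0_hat = keys[0] * s
print(torch.nn.functional.cosine_similarity(v0_hat, values[0], dim=0))
# ≈ 0.45 here: noisy, but far above the ≈ 0 similarity of an unrelated vector
```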

Using the IBM Analog In-Memory Hardware Acceleration Kit for Neural Network Training and Inference

1 code implementation · 18 Jul 2023 · Manuel Le Gallo, Corey Lammie, Julian Buechel, Fabio Carta, Omobayode Fagbohungbe, Charles Mackin, Hsinyu Tsai, Vijay Narayanan, Abu Sebastian, Kaoutar El Maghraoui, Malte J. Rasch

In this tutorial, we provide a deep dive into how such adaptations can be achieved and evaluated using the recently released IBM Analog Hardware Acceleration Kit (AIHWKit), freely available at https://github.com/IBM/aihwkit.
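For orientation, a minimal training loop in the style of the toolkit's documented basic example (default device configuration; exact APIs can differ between AIHWKit releases):

```python
import torch
from torch.nn.functional import mse_loss
from aihwkit.nn import AnalogLinear
from aihwkit.optim import AnalogSGD

x = torch.tensor([[0.1, 0.2, 0.4, 0.3], [0.2, 0.1, 0.1, 0.3]])
y = torch.tensor([[1.0, 0.5], [0.7, 0.3]])

# A fully connected layer mapped onto a simulated analog tile.
model = AnalogLinear(4, 2)

# AnalogSGD routes weight updates through the simulated device model.
opt = AnalogSGD(model.parameters(), lr=0.1)
opt.regroup_param_groups(model)

for _ in range(100):
    opt.zero_grad()
    loss = mse_loss(model(x), y)
    loss.backward()
    opt.step()
print(loss.item())  # decreases as the analog layer is trained
```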

AnalogNAS: A Neural Network Design Framework for Accurate Inference with Analog In-Memory Computing

1 code implementation · 17 May 2023 · Hadjer Benmeziane, Corey Lammie, Irem Boybat, Malte Rasch, Manuel Le Gallo, Hsinyu Tsai, Ramachandran Muralidhar, Smail Niar, Ouarnoughi Hamza, Vijay Narayanan, Abu Sebastian, Kaoutar El Maghraoui

Digital processors based on typical von Neumann architectures are ill-suited to edge AI, given the large amounts of data that must be moved in and out of memory.

Factorizers for Distributed Sparse Block Codes

no code implementations · 24 Mar 2023 · Michael Hersche, Aleksandar Terzic, Geethan Karunaratne, Jovin Langenegger, Angéline Pouget, Giovanni Cherubini, Luca Benini, Abu Sebastian, Abbas Rahimi

We provide a methodology to flexibly integrate our factorizer in the classification layer of CNNs with a novel loss function.

Tasks: Attribute

Hardware-aware training for large-scale and diverse deep learning inference workloads using in-memory computing-based accelerators

no code implementations · 16 Feb 2023 · Malte J. Rasch, Charles Mackin, Manuel Le Gallo, An Chen, Andrea Fasoli, Frederic Odermatt, Ning Li, S. R. Nandakumar, Pritish Narayanan, Hsinyu Tsai, Geoffrey W. Burr, Abu Sebastian, Vijay Narayanan

Analog in-memory computing (AIMC) -- a promising approach for energy-efficient acceleration of deep learning workloads -- computes matrix-vector multiplications (MVMs) but only approximately, due to nonidealities that often are non-deterministic or nonlinear.
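One core ingredient of hardware-aware training, sketched with a hypothetical NoisyLinear layer: inject weight noise in the forward pass only, so the learned weights become robust to approximate MVMs (the paper models many more nonidealities than this single Gaussian term):

```python
import torch
import torch.nn as nn

class NoisyLinear(nn.Linear):
    """Linear layer with forward-pass weight noise (hardware-aware training)."""
    def __init__(self, *args, noise_std=0.05, **kwargs):
        super().__init__(*args, **kwargs)
        self.noise_std = noise_std

    def forward(self, x):
        if self.training:
            # Fresh noise per forward pass, scaled to the weight range;
            # gradients flow to the clean weights.
            noise = torch.randn_like(self.weight) \
                * self.noise_std * self.weight.abs().max()
            return nn.functional.linear(x, self.weight + noise, self.bias)
        return super().forward(x)

layer = NoisyLinear(64, 32)
out = layer(torch.randn(8, 64))  # training mode: a noisy, "analog-like" MVM
```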

In-memory factorization of holographic perceptual representations

1 code implementation · 9 Nov 2022 · Jovin Langenegger, Geethan Karunaratne, Michael Hersche, Luca Benini, Abu Sebastian, Abbas Rahimi

Disentanglement of constituent factors of a sensory signal is central to perception and cognition and hence is a critical task for future artificial intelligence systems.

Tasks: Disentanglement
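A software sketch of resonator-network-style factorization dynamics; the paper's contribution is executing such unbind-and-clean-up loops inside analog memory, which this in-software version only illustrates:

```python
import numpy as np

rng = np.random.default_rng(0)
D, M = 2048, 16                # dimensionality, codebook size per factor

# Codebooks of random bipolar hypervectors for two constituent factors.
A = rng.choice([-1, 1], (M, D))
B = rng.choice([-1, 1], (M, D))

# A holographic product vector binding one entry of each codebook.
s = A[3] * B[7]                # binding = element-wise multiply (self-inverse)

# Resonator iteration: unbind the current estimate of the other factor,
# then "clean up" by projecting onto the codebook.
a_hat = np.where(A.sum(0) >= 0, 1, -1)   # init: superposition of candidates
b_hat = np.where(B.sum(0) >= 0, 1, -1)
for _ in range(20):
    a_hat = np.sign(A.T @ (A @ (s * b_hat)))
    b_hat = np.sign(B.T @ (B @ (s * a_hat)))

print(np.argmax(A @ a_hat), np.argmax(B @ b_hat))  # typically recovers 3 7
```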

Benchmarking energy consumption and latency for neuromorphic computing in condensed matter and particle physics

no code implementations · 21 Sep 2022 · Dominique J. Kösters, Bryan A. Kortman, Irem Boybat, Elena Ferro, Sagar Dolas, Roberto de Austri, Johan Kwisthout, Hans Hilgenkamp, Theo Rasing, Heike Riel, Abu Sebastian, Sascha Caron, Johan H. Mentink

The massive use of artificial neural networks (ANNs), which are increasingly popular in many areas of scientific computing, is rapidly increasing the energy consumption of modern high-performance computing systems.

Tasks: Anomaly Detection, Benchmarking

In-memory Realization of In-situ Few-shot Continual Learning with a Dynamically Evolving Explicit Memory

no code implementations · 14 Jul 2022 · Geethan Karunaratne, Michael Hersche, Jovin Langenegger, Giovanni Cherubini, Manuel Le Gallo-Bourdeau, Urs Egger, Kevin Brew, Sam Choi, Injo Ok, Mary Claire Silvestre, Ning Li, Nicole Saulnier, Victor Chan, Ishtiaq Ahsan, Vijay Narayanan, Luca Benini, Abu Sebastian, Abbas Rahimi

We demonstrate for the first time how the EM unit can physically superpose multiple training examples, expand to accommodate unseen classes, and perform similarity search during inference, using operations on an IMC core based on phase-change memory (PCM).

Tasks: Continual Learning
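Functionally, the explicit memory (EM) behaves like this sketch (hypothetical learn/infer helpers): superposing a new example is a single vector addition, an unseen class just allocates a new slot, and inference is a similarity search, the operation the paper runs on a PCM-based IMC core:

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
d, memory = 256, {}

def learn(class_id, embedding):
    # Superpose the new example onto the class slot (or allocate a slot).
    memory[class_id] = memory.get(class_id, torch.zeros(d)) + embedding

def infer(query):
    # Similarity search across all class slots.
    sims = {c: F.cosine_similarity(query, m, dim=0) for c, m in memory.items()}
    return max(sims, key=sims.get)

proto = torch.randn(d)
for _ in range(5):                           # five noisy shots of class "A"
    learn("A", proto + 0.3 * torch.randn(d))
learn("B", torch.randn(d))                   # one shot of a new class "B"
print(infer(proto + 0.3 * torch.randn(d)))   # A
```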

Constrained Few-shot Class-incremental Learning

2 code implementations · CVPR 2022 · Michael Hersche, Geethan Karunaratne, Giovanni Cherubini, Luca Benini, Abu Sebastian, Abbas Rahimi

Moreover, it is imperative that such learning respect certain memory and computational constraints: (i) training samples are limited to only a few per class, (ii) the computational cost of learning a novel class remains constant, and (iii) the memory footprint of the model grows at most linearly with the number of classes observed.

Tasks: continual few-shot learning, Few-Shot Class-Incremental Learning, +1

Generalized Key-Value Memory to Flexibly Adjust Redundancy in Memory-Augmented Networks

no code implementations · 11 Mar 2022 · Denis Kleyko, Geethan Karunaratne, Jan M. Rabaey, Abu Sebastian, Abbas Rahimi

Memory-augmented neural networks enhance a neural network with an external key-value memory whose complexity is typically dominated by the number of support vectors in the key memory.
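The baseline being generalized, sketched below: a soft key-value memory whose size, and hence read cost, is set by the number of support vectors (the paper's generalized version can trade dimensions against redundancy):

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
d, n_support = 64, 100

# Keys are support embeddings; values are their (one-hot) labels.
keys = F.normalize(torch.randn(n_support, d), dim=1)
values = torch.eye(10)[torch.randint(0, 10, (n_support,))]

def read(query, beta=8.0):
    # Soft attention over all n_support keys: O(n_support * d) per read.
    w = torch.softmax(beta * keys @ query, dim=0)
    return w @ values

q = F.normalize(keys[0] + 0.1 * torch.randn(d), dim=0)
print(read(q).argmax())  # the label stored for key 0
```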

A Neuro-vector-symbolic Architecture for Solving Raven's Progressive Matrices

1 code implementation · 9 Mar 2022 · Michael Hersche, Mustafa Zeqiri, Luca Benini, Abu Sebastian, Abbas Rahimi

Compared to state-of-the-art deep neural network and neuro-symbolic approaches, end-to-end training of NVSA achieves a new record of 87.7% average accuracy on the RAVEN dataset and 88.1% on I-RAVEN.

Tasks: Logical Reasoning

AnalogNets: ML-HW Co-Design of Noise-robust TinyML Models and Always-On Analog Compute-in-Memory Accelerator

no code implementations · 10 Nov 2021 · Chuteng Zhou, Fernando Garcia Redondo, Julian Büchel, Irem Boybat, Xavier Timoneda Comas, S. R. Nandakumar, Shidhartha Das, Abu Sebastian, Manuel Le Gallo, Paul N. Whatmough

We also describe AON-CiM, a programmable, minimal-area phase-change memory (PCM) analog CiM accelerator, with a novel layer-serial approach to remove the cost of complex interconnects associated with a fully-pipelined design.

Tasks: Keyword Spotting

A flexible and fast PyTorch toolkit for simulating training and inference on analog crossbar arrays

1 code implementation · 5 Apr 2021 · Malte J. Rasch, Diego Moreda, Tayfun Gokmen, Manuel Le Gallo, Fabio Carta, Cindy Goldberg, Kaoutar El Maghraoui, Abu Sebastian, Vijay Narayanan

We introduce the IBM Analog Hardware Acceleration Kit, a first-of-its-kind open-source toolkit for conveniently simulating analog crossbar arrays from within PyTorch (freely available at https://github.com/IBM/aihwkit).
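A minimal conversion example in the toolkit's documented style (InferenceRPUConfig is used here as a representative configuration; class names vary between releases):

```python
import torch
from torch import nn
from aihwkit.nn.conversion import convert_to_analog
from aihwkit.simulator.configs import InferenceRPUConfig

# An ordinary digital PyTorch model...
model = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 10))

# ...with its Linear layers remapped onto simulated analog crossbar tiles.
analog_model = convert_to_analog(model, InferenceRPUConfig())

print(analog_model(torch.rand(1, 784)).shape)  # torch.Size([1, 10])
```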

Robust High-dimensional Memory-augmented Neural Networks

no code implementations · 5 Oct 2020 · Geethan Karunaratne, Manuel Schmuck, Manuel Le Gallo, Giovanni Cherubini, Luca Benini, Abu Sebastian, Abbas Rahimi

Traditional neural networks require enormous amounts of data to build their complex mappings during a slow training procedure that hinders their ability to relearn and adapt to new data.

Tasks: Few-Shot Image Classification, Vocal Bursts Intensity Prediction

Optimality of short-term synaptic plasticity in modelling certain dynamic environments

no code implementations · 15 Sep 2020 · Timoleon Moraitis, Abu Sebastian, Evangelos Eleftheriou

Biological neurons and their in-silico emulations for neuromorphic artificial intelligence (AI) use extraordinarily energy-efficient mechanisms, such as spike-based communication and local synaptic plasticity.

Tasks: Bayesian Inference

File Classification Based on Spiking Neural Networks

no code implementations · 8 Apr 2020 · Ana Stanojevic, Giovanni Cherubini, Timoleon Moraitis, Abu Sebastian

In this paper, we propose a system for file classification in large data sets based on spiking neural networks (SNNs).

Tasks: Classification, General Classification, +1

ESSOP: Efficient and Scalable Stochastic Outer Product Architecture for Deep Learning

no code implementations · 25 Mar 2020 · Vinay Joshi, Geethan Karunaratne, Manuel Le Gallo, Irem Boybat, Christophe Piveteau, Abu Sebastian, Bipin Rajendran, Evangelos Eleftheriou

Strategies to improve the efficiency of MVM computation in hardware have been demonstrated with minimal impact on training accuracy.
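The flavor of the stochastic outer-product idea, sketched for values in [0, 1] (ESSOP's hardware treats signs and bit-stream generation differently): the weight-update outer products x ⊗ δ of training reduce to ANDing Bernoulli bit streams and counting coincidences.

```python
import numpy as np

rng = np.random.default_rng(0)

def stochastic_outer(x, delta, n_bits=64):
    # Each value in [0, 1] becomes a Bernoulli bit stream; the AND of two
    # streams has expectation x_i * delta_j, so popcount replaces multiply.
    xs = (rng.random((n_bits, x.size)) < x).astype(float)
    ds = (rng.random((n_bits, delta.size)) < delta).astype(float)
    return np.einsum('bi,bj->ij', xs, ds) / n_bits

x, delta = rng.random(4), rng.random(3)
err = np.abs(stochastic_outer(x, delta) - np.outer(x, delta)).max()
print(err)  # ~0.1: sampling error that shrinks with longer bit streams
```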

Compiling Neural Networks for a Computational Memory Accelerator

1 code implementation · 5 Mar 2020 · Kornilios Kourtis, Martino Dazzi, Nikolas Ioannou, Tobias Grosser, Abu Sebastian, Evangelos Eleftheriou

Computational memory (CM) is a promising approach for accelerating inference on neural networks (NN) by using enhanced memories that, in addition to storing data, allow computations on them.

5 Parallel Prism: A topology for pipelined implementations of convolutional neural networks using computational memory

no code implementations · 8 Jun 2019 · Martino Dazzi, Abu Sebastian, Pier Andrea Francese, Thomas Parnell, Luca Benini, Evangelos Eleftheriou

We show that this communication fabric facilitates the pipelined execution of all state-of-the-art CNNs by proving the existence of a homomorphism between a graph representation of these networks and the proposed graph topology.

Accurate deep neural network inference using computational phase-change memory

no code implementations · 7 Jun 2019 · Vinay Joshi, Manuel Le Gallo, Irem Boybat, Simon Haefeli, Christophe Piveteau, Martino Dazzi, Bipin Rajendran, Abu Sebastian, Evangelos Eleftheriou

In-memory computing is a promising non-von Neumann approach where certain computational tasks are performed within memory units by exploiting the physical attributes of memory devices.

Tasks: Emerging Technologies
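The computational primitive, as a hedged numerical sketch (a real PCM array exhibits far more effects than the single Gaussian conductance-noise term used here): weights become conductances, and Ohm's plus Kirchhoff's laws perform the MVM in place.

```python
import numpy as np

rng = np.random.default_rng(0)

# Signed weights encoded as a differential pair of conductance matrices.
W = rng.standard_normal((8, 16))
g = np.abs(W).max()
G_pos = np.clip(W, 0, None) / g
G_neg = np.clip(-W, 0, None) / g

v = rng.standard_normal(16)            # input voltages
sigma = 0.02                           # conductance noise (PCM nonideality)
i = (G_pos + sigma * rng.standard_normal(W.shape)) @ v \
    - (G_neg + sigma * rng.standard_normal(W.shape)) @ v

print(np.abs(g * i - W @ v).max())     # small but nonzero: an approximate MVM
```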

In-memory hyperdimensional computing

no code implementations · 4 Jun 2019 · Geethan Karunaratne, Manuel Le Gallo, Giovanni Cherubini, Luca Benini, Abbas Rahimi, Abu Sebastian

Hyperdimensional computing (HDC) is an emerging computational framework that takes inspiration from attributes of neuronal circuits such as hyperdimensionality, fully distributed holographic representation, and (pseudo)randomness.

Tasks: Attribute, Classification, +4
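The core HDC operations in software; the paper's point is to execute them natively inside memory arrays, which this sketch only mimics (binding by positional shift, bundling by majority, and associative cleanup):

```python
import numpy as np

rng = np.random.default_rng(0)
D = 10_000

# (Pseudo)random bipolar hypervectors: quasi-orthogonal by construction.
item = {s: rng.choice([-1, 1], D) for s in "abcdef"}

def encode(seq):
    # Bind each symbol to its position (cyclic shift), then bundle into a
    # single fully distributed, holographic record.
    return np.sign(sum(np.roll(item[s], i) for i, s in enumerate(seq)))

# Associative memory: the nearest stored prototype wins.
protos = {w: encode(w) for w in ("abc", "fed", "cab")}
query = encode("abc")
query[rng.choice(D, D // 5, replace=False)] *= -1    # corrupt 20% of components
print(max(protos, key=lambda w: protos[w] @ query))  # abc
```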

Supervised Learning in Spiking Neural Networks with Phase-Change Memory Synapses

no code implementations · 28 May 2019 · S. R. Nandakumar, Irem Boybat, Manuel Le Gallo, Evangelos Eleftheriou, Abu Sebastian, Bipin Rajendran

Combining the computational potential of supervised SNNs with the parallel compute power of computational memory, the work paves the way for the next generation of efficient brain-inspired systems.

Low-Power Neuromorphic Hardware for Signal Processing Applications

no code implementations · 11 Jan 2019 · Bipin Rajendran, Abu Sebastian, Michael Schmuker, Narayan Srinivasa, Evangelos Eleftheriou

In this paper, we review some of the architectural and system level design aspects involved in developing a new class of brain-inspired information processing engines that mimic the time-based information encoding and processing aspects of the brain.

Tasks: BIG-bench Machine Learning

Fatiguing STDP: Learning from Spike-Timing Codes in the Presence of Rate Codes

no code implementations · 17 Jun 2017 · Timoleon Moraitis, Abu Sebastian, Irem Boybat, Manuel Le Gallo, Tomas Tuma, Evangelos Eleftheriou

However, some spike-timing-related strengths of SNNs are hindered by the sensitivity of spike-timing-dependent plasticity (STDP) rules to input spike rates, as fine temporal correlations may be obstructed by coarser correlations between firing rates.
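For reference, the standard pair-based STDP rule whose rate sensitivity the paper addresses, in its usual exponential-window form (parameter values are illustrative):

```python
import numpy as np

def stdp_dw(dt, a_plus=0.01, a_minus=0.012, tau=20.0):
    """Weight change for a pre/post spike pair with dt = t_post - t_pre (ms):
    causal pairs (dt > 0) potentiate, anti-causal pairs depress."""
    return np.where(dt > 0, a_plus * np.exp(-dt / tau),
                    -a_minus * np.exp(dt / tau))

# High firing rates create many near-coincident pairs regardless of any
# precise spike-timing code -- the sensitivity that fatiguing STDP targets.
print(stdp_dw(np.array([5.0, -5.0, 50.0])))  # ≈ [0.0078, -0.0093, 0.0008]
```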

Mixed-Precision In-Memory Computing

no code implementations · 16 Jan 2017 · Manuel Le Gallo, Abu Sebastian, Roland Mathis, Matteo Manica, Heiner Giefers, Tomas Tuma, Costas Bekas, Alessandro Curioni, Evangelos Eleftheriou

As CMOS scaling reaches its technological limits, a radical departure from traditional von Neumann systems, which involve separate processing and memory units, is needed in order to significantly extend the performance of today's computers.

Tasks: Emerging Technologies
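A numerical caricature of the mixed-precision idea (the inexact low-precision solver is emulated here with multiplicative noise; the paper realizes it with in-memory computing): compute residuals in high precision and corrections in low precision, and the error contracts on every pass.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50
A = rng.standard_normal((n, n)) + n * np.eye(n)   # a well-conditioned system
b = rng.standard_normal(n)

def inexact_solve(r, rel_err=1e-2):
    # Stand-in for the low-precision in-memory solver: the returned
    # correction is only accurate to about two digits.
    z = np.linalg.solve(A, r)
    return z * (1 + rel_err * rng.standard_normal(n))

x = np.zeros(n)
for _ in range(6):
    x += inexact_solve(b - A @ x)   # residual computed in full precision
print(np.linalg.norm(b - A @ x))    # driven far below the inner solver's accuracy
```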
