Search Results for author: Geethan Karunaratne

Found 15 papers, 3 papers with code

Zero-shot Classification using Hyperdimensional Computing

no code implementations · 30 Jan 2024 · Samuele Ruffino, Geethan Karunaratne, Michael Hersche, Luca Benini, Abu Sebastian, Abbas Rahimi

Classification based on zero-shot learning (ZSL) is a model's ability to assign inputs to novel classes for which it has seen no training examples.

Tasks: Attribute, Attribute Extraction +2
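The ZSL decision rule can be sketched as nearest-attribute matching: each novel class is described by an attribute vector, and an input is assigned to the class whose attributes best match those predicted from the input. A minimal illustrative sketch, not the paper's HDC-based implementation; the attribute matrix and function names are hypothetical:

```python
import numpy as np

# Hypothetical class-attribute matrix: 3 novel classes x 4 binary attributes.
# Class and attribute semantics are illustrative only.
class_attributes = np.array([
    [1, 0, 1, 0],
    [0, 1, 1, 0],
    [0, 0, 1, 1],
], dtype=float)

def zero_shot_classify(predicted_attributes):
    """Assign the novel class whose attribute vector is most similar
    (cosine similarity) to the attributes predicted from the input."""
    a = predicted_attributes / np.linalg.norm(predicted_attributes)
    c = class_attributes / np.linalg.norm(class_attributes, axis=1, keepdims=True)
    return int(np.argmax(c @ a))

# An input whose attribute predictor fired strongly on attributes 1 and 2:
print(zero_shot_classify(np.array([0.1, 0.9, 0.8, 0.0])))  # -> 1
```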

TCNCA: Temporal Convolution Network with Chunked Attention for Scalable Sequence Processing

no code implementations · 9 Dec 2023 · Aleksandar Terzic, Michael Hersche, Geethan Karunaratne, Luca Benini, Abu Sebastian, Abbas Rahimi

We build upon their approach by replacing the linear recurrence with a special temporal convolutional network which permits larger receptive field size with shallower networks, and reduces the computational complexity to $O(L)$.

Tasks: Language Modelling
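The complexity claim can be illustrated with a plain causal dilated convolution: one layer costs $O(L \cdot k)$ in the sequence length $L$, while stacking layers with dilations 1, 2, 4, ... grows the receptive field exponentially with depth. A minimal sketch, illustrative only and not the TCNCA network itself:

```python
import numpy as np

def causal_dilated_conv(x, w, dilation):
    """Causal 1-D convolution: output[t] depends only on x[t], x[t-d], x[t-2d], ...
    Cost is O(L * k) per layer -- linear in sequence length L."""
    k, L = len(w), len(x)
    y = np.zeros(L)
    for t in range(L):
        for i in range(k):
            j = t - i * dilation
            if j >= 0:
                y[t] += w[i] * x[j]
    return y

def receptive_field(kernel_size, num_layers):
    """Receptive field of a stack with dilations 1, 2, 4, ...:
    exponential in depth while per-layer cost stays linear in L."""
    return 1 + (kernel_size - 1) * sum(2 ** l for l in range(num_layers))

print(receptive_field(3, 5))  # 1 + 2*(1+2+4+8+16) -> 63
```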

MIMONets: Multiple-Input-Multiple-Output Neural Networks Exploiting Computation in Superposition

1 code implementation · NeurIPS 2023 · Nicolas Menet, Michael Hersche, Geethan Karunaratne, Luca Benini, Abu Sebastian, Abbas Rahimi

MIMONets augment various deep neural network architectures with variable binding mechanisms to represent an arbitrary number of inputs in a compositional data structure via fixed-width distributed representations.
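The "computation in superposition" idea can be sketched with HDC-style variable binding: each input is bound to a random bipolar key by elementwise multiplication, the bound vectors are summed into one fixed-width vector, and unbinding with the matching key recovers an input up to crosstalk noise that averages out in high dimensions. A toy sketch under these assumptions, not the actual MIMONets architecture:

```python
import numpy as np

rng = np.random.default_rng(42)
d = 4096  # dimensionality of the fixed-width distributed representation

# Random bipolar keys, one per input slot (bipolar keys are their own inverses).
keys = rng.choice([-1.0, 1.0], size=(2, d))
x1, x2 = rng.standard_normal(d), rng.standard_normal(d)

# Bind each input to its key and superpose into a single fixed-width vector.
s = keys[0] * x1 + keys[1] * x2

# Unbinding with the matching key recovers the input plus crosstalk:
# keys[0] * s = x1 + keys[0] * keys[1] * x2.
x1_hat = keys[0] * s
err = np.mean((x1_hat - x1) ** 2)          # crosstalk energy, roughly Var(x2)
corr = np.corrcoef(x1_hat, x1)[0, 1]       # high correlation with the original
print(round(corr, 2))
```

With both bound inputs carrying equal variance, the recovered vector correlates with the original at roughly 1/sqrt(2), and the relative crosstalk shrinks as fewer inputs share the superposition.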

Factorizers for Distributed Sparse Block Codes

no code implementations · 24 Mar 2023 · Michael Hersche, Aleksandar Terzic, Geethan Karunaratne, Jovin Langenegger, Angéline Pouget, Giovanni Cherubini, Luca Benini, Abu Sebastian, Abbas Rahimi

We provide a methodology to flexibly integrate our factorizer in the classification layer of CNNs with a novel loss function.

Tasks: Attribute

In-memory factorization of holographic perceptual representations

1 code implementation · 9 Nov 2022 · Jovin Langenegger, Geethan Karunaratne, Michael Hersche, Luca Benini, Abu Sebastian, Abbas Rahimi

Disentanglement of constituent factors of a sensory signal is central to perception and cognition and hence is a critical task for future artificial intelligence systems.

Tasks: Disentanglement
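The underlying factorization task can be sketched without the in-memory hardware: a holographic product vector binds one code from each factor codebook, and factorizing it means recovering which codes were bound. A brute-force illustrative sketch (the paper's resonator-style in-memory search avoids this exhaustive scan; the codebook names are made up):

```python
import numpy as np

rng = np.random.default_rng(3)
D = 4096
codebook_shape = rng.choice([-1, 1], size=(4, D))   # hypothetical factor-1 codes
codebook_color = rng.choice([-1, 1], size=(4, D))   # hypothetical factor-2 codes

# A holographic product vector binds one code from each codebook.
s = codebook_shape[2] * codebook_color[1]

# Brute-force factorization: try every shape code, unbind it,
# and score the residue against every color code.
scores = np.array([[codebook_color[j] @ (codebook_shape[i] * s)
                    for j in range(4)] for i in range(4)])
i_hat, j_hat = np.unravel_index(np.argmax(scores), scores.shape)
print(i_hat, j_hat)  # -> 2 1
```

The correct pair scores exactly D, while every wrong pairing scores near zero, which is what makes the search well-posed in high dimensions.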

In-memory Realization of In-situ Few-shot Continual Learning with a Dynamically Evolving Explicit Memory

no code implementations · 14 Jul 2022 · Geethan Karunaratne, Michael Hersche, Jovin Langenegger, Giovanni Cherubini, Manuel Le Gallo-Bourdeau, Urs Egger, Kevin Brew, Sam Choi, Injo Ok, Mary Claire Silvestre, Ning Li, Nicole Saulnier, Victor Chan, Ishtiaq Ahsan, Vijay Narayanan, Luca Benini, Abu Sebastian, Abbas Rahimi

We demonstrate for the first time how the EM unit can physically superpose multiple training examples, expand to accommodate unseen classes, and perform similarity search during inference, using operations on an IMC core based on phase-change memory (PCM).

Tasks: Continual Learning
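Functionally, the explicit memory (EM) unit behaves like a set of superposed class prototypes: training examples of a class are accumulated into one vector, unseen classes append new rows, and inference is a similarity search over the rows. A toy sketch of that behavior only; the PCM-based in-memory mechanics are not modeled:

```python
import numpy as np

class ExplicitMemory:
    """Toy explicit memory: one superposed prototype vector per class."""
    def __init__(self, dim):
        self.dim = dim
        self.prototypes = []                      # grows as unseen classes arrive

    def learn(self, class_id, example):
        while class_id >= len(self.prototypes):   # expand for an unseen class
            self.prototypes.append(np.zeros(self.dim))
        self.prototypes[class_id] += example      # superpose training examples

    def infer(self, query):
        M = np.stack(self.prototypes)
        M = M / np.linalg.norm(M, axis=1, keepdims=True)
        q = query / np.linalg.norm(query)
        return int(np.argmax(M @ q))              # similarity search

rng = np.random.default_rng(1)
mem = ExplicitMemory(dim=256)
protos = rng.standard_normal((3, 256))
for c in range(3):
    for _ in range(5):                            # 5 noisy shots per class
        mem.learn(c, protos[c] + 0.3 * rng.standard_normal(256))
print(mem.infer(protos[2]))  # -> 2
```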

Constrained Few-shot Class-incremental Learning

2 code implementations · CVPR 2022 · Michael Hersche, Geethan Karunaratne, Giovanni Cherubini, Luca Benini, Abu Sebastian, Abbas Rahimi

Moreover, such learning must respect certain memory and computational constraints: (i) training samples are limited to only a few per class; (ii) the computational cost of learning a novel class remains constant; and (iii) the memory footprint of the model grows at most linearly with the number of classes observed.

Tasks: Continual Few-Shot Learning, Few-Shot Class-Incremental Learning +1
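Constraints (ii) and (iii) can be illustrated with a plain nearest-class-mean classifier: adding a class averages only that class's few examples (cost independent of the classes already learned), and memory grows by exactly one prototype per class. An illustrative sketch, not the paper's C-FSCIL method:

```python
import numpy as np

class NearestClassMean:
    """Nearest-class-mean classifier illustrating FSCIL-style constraints:
    (ii) learning a new class touches only that class's few examples;
    (iii) memory grows by one prototype per class (linear growth)."""
    def __init__(self):
        self.means = []

    def add_class(self, few_shot_examples):
        # Constant cost w.r.t. the number of classes already learned.
        self.means.append(np.mean(few_shot_examples, axis=0))

    def predict(self, x):
        dists = [np.linalg.norm(x - m) for m in self.means]
        return int(np.argmin(dists))

clf = NearestClassMean()
clf.add_class(np.array([[0.0, 0.0], [0.2, -0.1]]))   # class 0, 2 shots
clf.add_class(np.array([[5.0, 5.0], [4.8, 5.1]]))    # class 1, 2 shots
print(clf.predict(np.array([4.9, 5.0])))  # -> 1
print(len(clf.means))                     # one prototype per class -> 2
```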

Generalized Key-Value Memory to Flexibly Adjust Redundancy in Memory-Augmented Networks

no code implementations · 11 Mar 2022 · Denis Kleyko, Geethan Karunaratne, Jan M. Rabaey, Abu Sebastian, Abbas Rahimi

Memory-augmented neural networks enhance a neural network with an external key-value memory whose complexity is typically dominated by the number of support vectors in the key memory.
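A minimal sketch of such a key-value readout shows why the number of support vectors in the key memory dominates the cost: each query computes one similarity per stored key. Illustrative only; the paper's redundancy-adjustment mechanism itself is not shown:

```python
import numpy as np

def kv_memory_readout(query, keys, values, beta=10.0):
    """Soft read from a key-value memory: one dot product per support vector
    (the cost driver), softmax-sharpened, then a weighted sum over values."""
    sims = keys @ query                   # one similarity per stored key
    w = np.exp(beta * sims)
    w = w / w.sum()
    return w @ values                     # blended value readout

# Three support vectors (keys) with one-hot class labels (values).
keys = np.array([[1.0, 0.0], [0.0, 1.0], [-1.0, 0.0]])
values = np.eye(3)
out = kv_memory_readout(np.array([0.9, 0.1]), keys, values)
print(int(np.argmax(out)))  # -> 0
```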

A Heterogeneous In-Memory Computing Cluster For Flexible End-to-End Inference of Real-World Deep Neural Networks

no code implementations · 4 Jan 2022 · Angelo Garofalo, Gianmarco Ottavi, Francesco Conti, Geethan Karunaratne, Irem Boybat, Luca Benini, Davide Rossi

Furthermore, we explore the requirements for end-to-end inference of a full mobile-grade DNN (MobileNetV2) in terms of IMC array resources, by scaling up our heterogeneous architecture to a multi-array accelerator.

Robust High-dimensional Memory-augmented Neural Networks

no code implementations · 5 Oct 2020 · Geethan Karunaratne, Manuel Schmuck, Manuel Le Gallo, Giovanni Cherubini, Luca Benini, Abu Sebastian, Abbas Rahimi

Traditional neural networks require enormous amounts of data to build their complex mappings during a slow training procedure that hinders their abilities for relearning and adapting to new data.

Tasks: Few-Shot Image Classification, Vocal Bursts Intensity Prediction

ChewBaccaNN: A Flexible 223 TOPS/W BNN Accelerator

no code implementations · 12 May 2020 · Renzo Andri, Geethan Karunaratne, Lukas Cavigelli, Luca Benini

Furthermore, it can perform inference on a binarized ResNet-18 trained with 8-bases Group-Net to achieve a 67.5% Top-1 accuracy with only 3.0 mJ/frame -- at an accuracy drop of merely 1.8% from the full-precision ResNet-18.

ESSOP: Efficient and Scalable Stochastic Outer Product Architecture for Deep Learning

no code implementations · 25 Mar 2020 · Vinay Joshi, Geethan Karunaratne, Manuel Le Gallo, Irem Boybat, Christophe Piveteau, Abu Sebastian, Bipin Rajendran, Evangelos Eleftheriou

Strategies to improve the efficiency of MVM computation in hardware have been demonstrated with minimal impact on training accuracy.
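The outer-product pattern that the architecture targets can be shown in a few lines: in backpropagation, the weight update of a layer is a rank-1 outer product of the output error and the input activations. Illustrative only; the stochastic computation scheme of ESSOP is not modeled, and the values are made up:

```python
import numpy as np

# The backprop weight update is a rank-1 outer product:
# dW = lr * outer(delta, x), one multiply per weight cell.
lr = 0.1
x = np.array([1.0, 2.0, 3.0])        # layer input (activations)
delta = np.array([0.5, -0.5])        # backpropagated error at the output
dW = lr * np.outer(delta, x)         # shape (2, 3)
print(dW)
```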

In-memory hyperdimensional computing

no code implementations · 4 Jun 2019 · Geethan Karunaratne, Manuel Le Gallo, Giovanni Cherubini, Luca Benini, Abbas Rahimi, Abu Sebastian

Hyperdimensional computing (HDC) is an emerging computational framework that takes inspiration from attributes of neuronal circuits such as hyperdimensionality, fully distributed holographic representation, and (pseudo)randomness.

Tasks: Attribute, Classification +4
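The HDC attributes named here (pseudorandom hypervectors, binding, bundling, holographic distribution of information) can be sketched in a few lines of software; the in-memory hardware realization is not modeled:

```python
import numpy as np

rng = np.random.default_rng(7)
D = 10000   # hyperdimensional: any two random vectors are nearly orthogonal

def hv():
    """Pseudorandom bipolar hypervector."""
    return rng.choice([-1, 1], size=D)

def bind(a, b):
    """Binding (elementwise multiply): dissimilar to both inputs and
    invertible, since bind(bind(a, b), b) == a for bipolar vectors."""
    return a * b

def bundle(*vs):
    """Bundling (majority vote): stays similar to each input."""
    return np.sign(np.sum(vs, axis=0))

def sim(a, b):
    return float(a @ b) / D   # normalized similarity in [-1, 1]

a, b, c = hv(), hv(), hv()
record = bundle(bind(a, b), c)   # holographic: info spread over all D dims
print(sim(bind(bind(a, b), b), a))   # exact unbinding -> 1.0
print(round(sim(record, c), 1))      # bundling preserves similarity, ~0.5
```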
