Search Results for author: Corey Lammie

Found 14 papers, 8 papers with code

Accelerating Deterministic and Stochastic Binarized Neural Networks on FPGAs Using OpenCL

1 code implementation • 15 May 2019 • Corey Lammie, Wei Xiang, Mostafa Rahimi Azghadi

Consequently, the performance and complexity of Artificial Neural Networks (ANNs) are burgeoning.
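For context on the paper's topic, BNN weights are commonly binarized either deterministically (a sign function) or stochastically (sampling +1 with a probability derived from the real-valued weight, as in the BNN literature). The following is a minimal NumPy sketch of both schemes, not the paper's OpenCL/FPGA implementation; the function names are illustrative:

```python
import numpy as np

def binarize_deterministic(w):
    # Deterministic binarization: sign of the weight, mapping 0 to +1.
    return np.where(w >= 0, 1.0, -1.0)

def hard_sigmoid(w):
    # Clipped linear approximation of the sigmoid, commonly used in BNNs.
    return np.clip((w + 1.0) / 2.0, 0.0, 1.0)

def binarize_stochastic(w, rng):
    # Stochastic binarization: +1 with probability hard_sigmoid(w), else -1.
    p = hard_sigmoid(w)
    return np.where(rng.random(w.shape) < p, 1.0, -1.0)

rng = np.random.default_rng(0)
w = np.array([-2.0, -0.25, 0.0, 0.25, 2.0])
print(binarize_deterministic(w))    # [-1. -1.  1.  1.  1.]
print(binarize_stochastic(w, rng))  # values in {-1, +1}
```

Saturated weights (|w| >= 1) binarize identically under both schemes; only weights near zero are randomized stochastically.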

Variation-aware Binarized Memristive Networks

no code implementations • 14 Oct 2019 • Corey Lammie, Olga Krestinskaya, Alex James, Mostafa Rahimi Azghadi

Moreover, we introduce means to mitigate the adverse effect of memristive variations in our proposed networks.

Quantization

Training Progressively Binarizing Deep Networks Using FPGAs

no code implementations • 8 Jan 2020 • Corey Lammie, Wei Xiang, Mostafa Rahimi Azghadi

While hardware implementations of inference routines for Binarized Neural Networks (BNNs) are plentiful, current realizations of efficient BNN hardware training accelerators, suitable for Internet of Things (IoT) edge devices, leave much to be desired.

MemTorch: An Open-source Simulation Framework for Memristive Deep Learning Systems

1 code implementation • 23 Apr 2020 • Corey Lammie, Wei Xiang, Bernabé Linares-Barranco, Mostafa Rahimi Azghadi

Memristive devices have shown great promise for accelerating and improving the power efficiency of Deep Learning (DL) systems.

Emerging Technologies

Hardware Implementation of Deep Network Accelerators Towards Healthcare and Biomedical Applications

1 code implementation • 11 Jul 2020 • Mostafa Rahimi Azghadi, Corey Lammie, Jason K. Eshraghian, Melika Payvand, Elisa Donati, Bernabe Linares-Barranco, Giacomo Indiveri

The advent of dedicated Deep Learning (DL) accelerators and neuromorphic processors has brought on new opportunities for applying both Deep and Spiking Neural Network (SNN) algorithms to healthcare and biomedical applications at the edge.

Electromyography (EMG) Sensor Fusion

Memristive Stochastic Computing for Deep Learning Parameter Optimization

no code implementations • 11 Mar 2021 • Corey Lammie, Jason K. Eshraghian, Wei D. Lu, Mostafa Rahimi Azghadi

Stochastic Computing (SC) is a computing paradigm that allows for the low-cost and low-power computation of various arithmetic operations using stochastic bit streams and digital logic.
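To illustrate the paradigm the abstract describes (not the paper's memristive realization): in unipolar stochastic computing, a value p in [0, 1] is encoded as the ones-density of a random bitstream, and multiplying two independent streams reduces to a single AND gate. A hypothetical NumPy sketch:

```python
import numpy as np

def encode(p, n, rng):
    # Unipolar SC encoding: value p in [0, 1] becomes a random bitstream
    # of length n whose expected ones-density equals p.
    return (rng.random(n) < p).astype(np.uint8)

def decode(stream):
    # Recover the encoded value as the fraction of ones in the stream.
    return stream.mean()

rng = np.random.default_rng(42)
n = 100_000
a, b = 0.5, 0.4
# Multiplying independent unipolar streams is a single bitwise AND:
# P(bit_a = 1 AND bit_b = 1) = a * b for independent streams.
product = decode(encode(a, n, rng) & encode(b, n, rng))
print(product)  # ~0.2; accuracy improves with stream length n
```

The low-cost/low-power appeal is that this replaces a multiplier with one logic gate, trading precision for stream length.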

A Deep Learning Localization Method for Measuring Abdominal Muscle Dimensions in Ultrasound Images

no code implementations • 30 Sep 2021 • Alzayat Saleh, Issam H. Laradji, Corey Lammie, David Vazquez, Carol A Flavell, Mostafa Rahimi Azghadi

US images can be used to measure abdominal muscle dimensions for the diagnosis and creation of customized treatment plans for patients with Low Back Pain (LBP); however, they are difficult to interpret.

Design Space Exploration of Dense and Sparse Mapping Schemes for RRAM Architectures

no code implementations • 18 Jan 2022 • Corey Lammie, Jason K. Eshraghian, Chenqi Li, Amirali Amirsoleimani, Roman Genov, Wei D. Lu, Mostafa Rahimi Azghadi

The impact of device- and circuit-level effects in mixed-signal Resistive Random Access Memory (RRAM) accelerators typically manifests as performance degradation of Deep Learning (DL) algorithms, but the degree of impact varies based on algorithmic features.

Quantization

Navigating Local Minima in Quantized Spiking Neural Networks

1 code implementation • 15 Feb 2022 • Jason K. Eshraghian, Corey Lammie, Mostafa Rahimi Azghadi, Wei D. Lu

Spiking and Quantized Neural Networks (NNs) are becoming increasingly important for hyper-efficient implementations of Deep Learning (DL) algorithms.


Toward A Formalized Approach for Spike Sorting Algorithms and Hardware Evaluation

1 code implementation • 13 May 2022 • Tim Zhang, Corey Lammie, Mostafa Rahimi Azghadi, Amirali Amirsoleimani, Majid Ahmadi, Roman Genov

Spike sorting algorithms are used to separate extracellular recordings of neuronal populations into single-unit spike activities.

Spike Sorting

AnalogNAS: A Neural Network Design Framework for Accurate Inference with Analog In-Memory Computing

1 code implementation • 17 May 2023 • Hadjer Benmeziane, Corey Lammie, Irem Boybat, Malte Rasch, Manuel Le Gallo, Hsinyu Tsai, Ramachandran Muralidhar, Smail Niar, Ouarnoughi Hamza, Vijay Narayanan, Abu Sebastian, Kaoutar El Maghraoui

Digital processors based on typical von Neumann architectures are not conducive to edge AI given the large amounts of required data movement in and out of memory.

Using the IBM Analog In-Memory Hardware Acceleration Kit for Neural Network Training and Inference

1 code implementation • 18 Jul 2023 • Manuel Le Gallo, Corey Lammie, Julian Buechel, Fabio Carta, Omobayode Fagbohungbe, Charles Mackin, Hsinyu Tsai, Vijay Narayanan, Abu Sebastian, Kaoutar El Maghraoui, Malte J. Rasch

In this tutorial, we provide a deep dive into how such adaptations can be achieved and evaluated using the recently released IBM Analog Hardware Acceleration Kit (AIHWKit), freely available at https://github.com/IBM/aihwkit.
