Search Results for author: Gina C. Adam

Found 5 papers, 0 papers with code

Layer Ensemble Averaging for Improving Memristor-Based Artificial Neural Network Performance

no code implementations • 24 Apr 2024 • Osama Yousuf, Brian Hoskins, Karthick Ramu, Mitchell Fream, William A. Borders, Advait Madhavan, Matthew W. Daniels, Andrew Dienstfrey, Jabez J. McClelland, Martin Lueker-Boden, Gina C. Adam

Results demonstrate that, by trading off the number of devices required for layer mapping, layer ensemble averaging can reliably boost the performance of defective memristive networks up to the software baseline.

Continual Learning
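The core idea of layer ensemble averaging can be sketched as follows, under an assumed defect model (stuck-at-zero devices plus Gaussian programming noise) that is an illustration, not the paper's actual hardware setup: the same layer weights are mapped onto several redundant device arrays, and their outputs are averaged so that any single array's defects are diluted.

```python
import numpy as np

rng = np.random.default_rng(0)

def program_devices(weights, stuck_fraction=0.1, noise_std=0.05, rng=rng):
    """Simulate programming ideal weights onto a defective device array:
    a fraction of devices are stuck at zero, and every device carries
    Gaussian programming noise (assumed defect model, for illustration)."""
    noisy = weights + rng.normal(0.0, noise_std, weights.shape)
    stuck = rng.random(weights.shape) < stuck_fraction
    noisy[stuck] = 0.0  # stuck-at-zero defects
    return noisy

def ensemble_forward(x, weights, n_copies=4):
    """Map the same layer onto n_copies device arrays and average their
    outputs -- trading extra devices for robustness to defects."""
    outs = [x @ program_devices(weights) for _ in range(n_copies)]
    return np.mean(outs, axis=0)

# Averaging over copies shrinks the output error relative to one defective array.
W = rng.normal(size=(8, 4))
x = rng.normal(size=(16, 8))
ideal = x @ W
err_single = np.linalg.norm(x @ program_devices(W) - ideal)
err_ensemble = np.linalg.norm(ensemble_forward(x, W, n_copies=8) - ideal)
```

With independent defects per copy, the averaged output's noise variance falls roughly with the number of copies, which is the device-count-versus-accuracy trade-off the abstract describes.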

Biologically-Informed Excitatory and Inhibitory Balance for Robust Spiking Neural Network Training

no code implementations • 24 Apr 2024 • Joseph A. Kilgore, Jeffrey D. Kopsick, Giorgio A. Ascoli, Gina C. Adam

Spiking neural networks, drawing inspiration from the biological constraints of the brain, promise an energy-efficient paradigm for artificial intelligence.

Device Modeling Bias in ReRAM-based Neural Network Simulations

no code implementations • 29 Nov 2022 • Osama Yousuf, Imtiaz Hossen, Matthew W. Daniels, Martin Lueker-Boden, Andrew Dienstfrey, Gina C. Adam

Data-driven modeling approaches such as jump tables are promising techniques to model populations of resistive random-access memory (ReRAM) or other emerging memory devices for hardware neural network simulations.

Benchmarking
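A jump table in this sense is an empirical state-transition model: for each discretized conductance state, device characterization records the distribution of conductance changes observed after a programming pulse, and simulation advances a device by sampling from that distribution. A minimal sketch, with synthesized measurements standing in for real characterization data:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical characterization data: for each discretized conductance
# state, the conductance changes observed after a programming pulse.
# Here synthesized; in practice these come from device measurements.
n_states = 16
measured_jumps = {s: rng.normal(0.5, 0.2, size=100) for s in range(n_states)}

def jump_table_step(state, table=measured_jumps, rng=rng):
    """Advance a device by one pulse: sample a conductance change from the
    empirical distribution recorded for its current state, then clip to
    the valid state range."""
    delta = rng.choice(table[state])
    return int(np.clip(round(state + delta), 0, n_states - 1))

# Apply a train of 20 pulses to a device starting at the lowest state.
state = 0
for _ in range(20):
    state = jump_table_step(state)
```

The modeling bias the abstract refers to enters through choices like the state discretization and how the measured population is summarized, which this sketch makes explicit as assumptions.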

Memory-efficient training with streaming dimensionality reduction

no code implementations • 25 Apr 2020 • Siyuan Huang, Brian D. Hoskins, Matthew W. Daniels, Mark D. Stiles, Gina C. Adam

The movement of large quantities of data during the training of a Deep Neural Network presents immense challenges for machine learning workloads.

BIG-bench Machine Learning
Dimensionality Reduction
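One way to reduce that data movement is to accumulate the stream of rank-1 gradient updates into a rank-limited buffer before any weights are written. The sketch below uses repeated truncated SVD for clarity (it materializes the full matrix, which a genuinely streaming method would avoid by updating the factors directly); it illustrates the principle, not the paper's algorithm.

```python
import numpy as np

def streaming_lowrank_accumulate(updates, rank=2):
    """Accumulate a stream of rank-1 gradient updates (outer products
    of error and activation vectors) into a rank-limited buffer via
    repeated truncated SVD -- a simple stand-in for streaming PCA."""
    m, n = updates[0][0].size, updates[0][1].size
    U, S, V = np.zeros((m, rank)), np.zeros(rank), np.zeros((n, rank))
    for d, x in updates:
        # Append the new rank-1 term, then re-truncate to `rank`.
        M = U * S @ V.T + np.outer(d, x)
        u, s, vt = np.linalg.svd(M, full_matrices=False)
        U, S, V = u[:, :rank], s[:rank], vt[:rank].T
    return U * S @ V.T

# Usage: correlated rank-1 updates fit losslessly in a rank-2 buffer.
rng = np.random.default_rng(2)
d, x = rng.normal(size=5), rng.normal(size=3)
approx = streaming_lowrank_accumulate([(d, x)] * 4, rank=2)
```

When successive gradient updates are highly correlated, as is common during training, a small-rank buffer captures most of the accumulated update while moving far less data.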

Streaming Batch Eigenupdates for Hardware Neuromorphic Networks

no code implementations • 5 Mar 2019 • Brian D. Hoskins, Matthew W. Daniels, Siyuan Huang, Advait Madhavan, Gina C. Adam, Nikolai Zhitenev, Jabez J. McClelland, Mark D. Stiles

Neuromorphic networks based on nanodevices, such as metal oxide memristors, phase change memories, and flash memory cells, have generated considerable interest for their increased energy efficiency and density in comparison to graphics processing units (GPUs) and central processing units (CPUs).
