Search Results for author: Ajay Joshi

Found 12 papers, 0 papers with code

Field of Groves: An Energy-Efficient Random Forest

no code implementations • 10 Apr 2017 • Zafar Takhirov, Joseph Wang, Marcia S. Louis, Venkatesh Saligrama, Ajay Joshi

In this work, we present a field of groves (FoG) implementation of random forests (RF) that achieves an accuracy comparable to CNNs and SVMs under tight energy budgets.

General Classification

The efficacy of various machine learning models for multi-class classification of RNA-seq expression data

no code implementations • 19 Aug 2019 • Sterling Ramroach, Melford John, Ajay Joshi

When the feature set was reduced to 20 genes, the ensemble algorithms maintained an accuracy above 95%, whereas the clustering and classification models did not.
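
A minimal scikit-learn sketch of this kind of pipeline (not the authors' code): reduce the expression matrix to 20 genes, then score an ensemble classifier. The data shapes, class count, and hyperparameters below are illustrative assumptions.

```python
# Sketch: multi-class classification of RNA-seq expression data with an
# ensemble model after reducing the feature set to 20 genes.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
X = rng.lognormal(size=(300, 5000))   # 300 samples x 5000 genes (synthetic)
y = rng.integers(0, 5, size=300)      # 5 tumour classes (synthetic)

# Keep the 20 most informative genes, then fit an ensemble classifier.
model = make_pipeline(
    SelectKBest(f_classif, k=20),
    RandomForestClassifier(n_estimators=200, random_state=0),
)
print(cross_val_score(model, X, y, cv=5).mean())
```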

BIG-bench Machine Learning • Classification +3

CUDA optimized Neural Network predicts blood glucose control from quantified joint mobility and anthropometrics

no code implementations • 19 Aug 2019 • Sterling Ramroach, Andrew Dhanoo, Brian Cockburn, Ajay Joshi

In this paper, we leveraged Nvidia GPUs to parallelize all of the computation involved in training, using CUDA and C++ to accelerate a feed-forward neural network with one hidden layer trained via backpropagation.
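
The computation being parallelized is standard: a dense forward pass, a mean-squared-error gradient, and backpropagated weight updates. Below is a minimal NumPy sketch of that math (the paper implements it in CUDA/C++); the layer sizes, learning rate, and data are illustrative assumptions, not the paper's configuration.

```python
# Sketch: one-hidden-layer feed-forward network trained with backpropagation.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(256, 8))   # joint-mobility/anthropometric features (synthetic)
y = rng.normal(size=(256, 1))   # blood glucose target (synthetic)

n_in, n_hidden, n_out, lr = 8, 32, 1, 1e-2
W1 = rng.normal(scale=0.1, size=(n_in, n_hidden))
b1 = np.zeros(n_hidden)
W2 = rng.normal(scale=0.1, size=(n_hidden, n_out))
b2 = np.zeros(n_out)

for epoch in range(200):
    # Forward pass: the dense layers are matrix multiplies,
    # which a CUDA implementation parallelizes across GPU threads.
    h = np.tanh(X @ W1 + b1)
    y_hat = h @ W2 + b2
    # Backward pass (mean squared error loss).
    d_out = 2 * (y_hat - y) / len(X)
    d_h = (d_out @ W2.T) * (1 - h ** 2)
    W2 -= lr * (h.T @ d_out)
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * (X.T @ d_h)
    b1 -= lr * d_h.sum(axis=0)
```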

Custom Tailored Suite of Random Forests for Prefetcher Adaptation

no code implementations • 1 Aug 2020 • Furkan Eris, Sadullah Canakci, Cansu Demirkiran, Ajay Joshi

To close the gap between memory and processor speeds, and in turn improve performance, a large body of work has explored data/instruction prefetcher designs.

Efficient Sealable Protection Keys for RISC-V

no code implementations • 4 Dec 2020 • Leila Delshadtehrani, Sadullah Canakci, Manuel Egele, Ajay Joshi

Recently, Intel introduced a new hardware feature for intra-process memory isolation, called Memory Protection Keys (MPK), which enables a user-space process to switch protection domains efficiently.

Cryptography and Security • Hardware Architecture

Puppeteer: A Random Forest-based Manager for Hardware Prefetchers across the Memory Hierarchy

no code implementations • 28 Jan 2022 • Furkan Eris, Marcia S. Louis, Kubra Eris, Jose L. Abellan, Ajay Joshi

In this work, we propose Puppeteer, a hardware prefetcher manager that uses a suite of random forest regressors to decide at runtime which prefetcher should be ON at each level of the memory hierarchy, so that the prefetchers complement each other and data/instruction access latency is reduced.
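
A rough sketch of the underlying idea, not the Puppeteer hardware itself: train one regressor per (cache level, prefetcher) pair on hardware-counter features, then at runtime keep the prefetcher with the highest predicted benefit at each level. All feature names and training data below are hypothetical placeholders.

```python
# Sketch: per-level random forest regressors selecting a prefetcher at runtime.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
levels = ["L1I", "L1D", "L2", "LLC"]
prefetchers = {lvl: ["off", "next_line", "stride", "stream"] for lvl in levels}

# Train one regressor per (level, prefetcher): hardware counters -> predicted benefit.
models = {}
for lvl in levels:
    X = rng.random((1000, 6))      # e.g. miss rates, MPKI, occupancy (synthetic)
    for pf in prefetchers[lvl]:
        y = rng.random(1000)       # measured benefit with this prefetcher ON (synthetic)
        models[lvl, pf] = RandomForestRegressor(n_estimators=50).fit(X, y)

def choose_prefetchers(counters):
    """At runtime, pick the prefetcher with the highest predicted benefit per level."""
    return {lvl: max(prefetchers[lvl],
                     key=lambda pf: models[lvl, pf].predict(counters)[0])
            for lvl in levels}

print(choose_prefetchers(rng.random((1, 6))))
```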

Leveraging Residue Number System for Designing High-Precision Analog Deep Neural Network Accelerators

no code implementations • 15 Jun 2023 • Cansu Demirkiran, Rashmi Agrawal, Vijay Janapa Reddi, Darius Bunandar, Ajay Joshi

In addition, we show that RNS can reduce the energy consumption of the data converters within an analog accelerator by several orders of magnitude compared to a regular fixed-point approach.

A Blueprint for Precise and Fault-Tolerant Analog Neural Networks

no code implementations • 19 Sep 2023 • Cansu Demirkiran, Lakshmi Nair, Darius Bunandar, Ajay Joshi

Our study demonstrates that analog accelerators utilizing the RNS-based approach can achieve ${\geq}99\%$ of FP32 accuracy for state-of-the-art DNN inference using data converters with only $6$-bit precision, whereas a conventional analog core requires more than $8$-bit precision to achieve the same accuracy on the same DNNs.
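
A small worked example of the residue number system (RNS) idea, with illustrative moduli rather than the paper's design: three co-prime moduli that each fit in 6 bits provide roughly 18 bits of dynamic range, a dot product is computed independently in each low-precision residue channel, and the full-precision result is recovered with the Chinese Remainder Theorem.

```python
# Sketch: low-precision residue channels plus CRT reconstruction.
from math import prod

moduli = (61, 63, 64)          # pairwise co-prime; every residue fits in 6 bits
M = prod(moduli)               # dynamic range ~2^18 from three 6-bit channels

def to_rns(x):
    return tuple(x % m for m in moduli)

def from_rns(residues):
    # Chinese Remainder Theorem reconstruction.
    x = 0
    for r, m in zip(residues, moduli):
        Mi = M // m
        x += r * Mi * pow(Mi, -1, m)
    return x % M

# Dot product computed independently in each low-precision residue channel.
a, b = [3, 141, 59], [26, 53, 58]
residue_acc = tuple(sum(ai * bi for ai, bi in zip(a, b)) % m for m in moduli)
assert from_rns(residue_acc) == sum(ai * bi for ai, bi in zip(a, b))
```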

Accelerating DNN Training With Photonics: A Residue Number System-Based Design

no code implementations • 29 Nov 2023 • Cansu Demirkiran, Guowei Yang, Darius Bunandar, Ajay Joshi

Photonic computing is a compelling avenue for performing highly efficient matrix multiplication, a crucial operation in Deep Neural Networks (DNNs).

Towards Efficient Hyperdimensional Computing Using Photonics

no code implementations • 29 Nov 2023 • Farbin Fayza, Cansu Demirkiran, Hanning Chen, Che-Kai Liu, Avi Mohan, Hamza Errahmouni, Sanggeon Yun, Mohsen Imani, David Zhang, Darius Bunandar, Ajay Joshi

Over the past few years, silicon photonics-based computing has emerged as a promising alternative to CMOS-based computing for Deep Neural Networks (DNNs).

Photonics for Sustainable Computing

no code implementations • 10 Jan 2024 • Farbin Fayza, Satyavolu Papa Rao, Darius Bunandar, Udit Gupta, Ajay Joshi

Our analysis shows that photonics can reduce both operational and embodied carbon footprints, thanks to its high energy efficiency and at least 4$\times$ lower fabrication carbon cost per unit area than 28 nm CMOS.
