no code implementations • 23 Jan 2024 • Prasanna Date, Dong Jun Woun, Kathleen Hamilton, Eduardo A. Coello Perez, Mayanka Chandra Shekhar, Francisco Rios, John Gounley, In-Saeng Suh, Travis Humble, Georgia Tourassi

Finally, we perform a scalability study comparing the total training times of the quantum and classical approaches as the number of features and the number of data points in the training dataset increase.

no code implementations • 20 Jul 2023 • Shruti R. Kulkarni, Aaron Young, Prasanna Date, Narasinga Rao Miniskar, Jeffrey S. Vetter, Farah Fahim, Benjamin Parpillon, Jennet Dickinson, Nhan Tran, Jieun Yoo, Corrinne Mills, Morris Swartz, Petar Maksimovic, Catherine D. Schuman, Alice Bean

We present our insights into the various system design choices, from data encoding to the optimal hyperparameters of the training algorithm, for an accurate and compact SNN optimized for hardware deployment.

no code implementations • 19 Jul 2023 • Jinyang Li, Zhepeng Wang, Zhirui Hu, Prasanna Date, Ang Li, Weiwen Jiang

The results of the evaluation on the standard dataset for binary classification show that ST-VQC achieves an accuracy improvement of over 30% compared with existing VQCs on actual quantum computers.

1 code implementation • 4 May 2023 • Prasanna Date, Chathika Gunaratne, Shruti Kulkarni, Robert Patton, Mark Coletti, Thomas Potok

Currently available simulators are catered to either neuroscience workflows (such as NEST and Brian2) or deep learning workflows (such as BindsNET).

no code implementations • 15 Aug 2022 • Prasanna Date, Shruti Kulkarni, Aaron Young, Catherine Schuman, Thomas Potok, Jeffrey Vetter

They are expected to be indispensable for energy-efficient computing in the future.

no code implementations • 5 Jan 2022 • Davis Arthur, Prasanna Date

On simulated hardware, we observe that the hybrid neural network achieves roughly 10% higher classification accuracy and 20% better minimization of cost than an individual variational quantum circuit.

1 code implementation • 1 Dec 2021 • David Quiroga, Prasanna Date, Raphael C. Pooser

Quantum machine learning (QML) algorithms have gained significant relevance in the machine learning (ML) field due to the promise of quantum speedups when performing basic linear algebra subroutines (BLAS), a fundamental element of most ML algorithms.

no code implementations • 28 Apr 2021 • Prasanna Date, Catherine Schuman, Bill Kay, Thomas Potok

Given that the μ-recursive functions and operators are precisely the ones that can be computed using a Turing machine, this work establishes the Turing-completeness of neuromorphic computing.
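The entry only names the μ-recursive framework; as an illustrative sketch (not taken from the paper), the basic building blocks — successor, primitive recursion, and the μ (unbounded minimisation) operator — can be mimicked directly in code:

```python
def succ(n):
    """Successor, one of the basic mu-recursive functions."""
    return n + 1

def prim_rec(base, step):
    """Primitive recursion: f(0, x) = base(x); f(n+1, x) = step(n, f(n, x), x)."""
    def f(n, x):
        acc = base(x)
        for i in range(n):
            acc = step(i, acc, x)
        return acc
    return f

# Addition built purely from successor via primitive recursion.
add = prim_rec(base=lambda x: x, step=lambda i, acc, x: succ(acc))

def mu(pred):
    """Unbounded minimisation: least n with pred(n) true (may not halt)."""
    n = 0
    while not pred(n):
        n += 1
    return n

# add(3, 4) -> 7; mu(lambda n: n * n >= 10) -> 4
```

Minimisation (`mu`) is what lifts primitive recursion to full Turing power: unlike the bounded loop in `prim_rec`, its search need not terminate.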

no code implementations • 2 Sep 2020 • Prasanna Date, Wyatt Smith

Quantum computers have the unique ability to operate relatively quickly in high-dimensional spaces, which is expected to give them a competitive advantage over classical computers.

no code implementations • 1 Sep 2020 • Prasanna Date, Christopher D. Carothers, John E. Mitchell, James A. Hendler, Malik Magdon-Ismail

We believe that deep neural networks (DNNs) whose learning parameters are constrained to a finite set of discrete values, running on neuromorphic computing systems, would be instrumental for intelligent edge computing systems with these desirable characteristics.
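The entry doesn't describe how the discrete constraint is enforced; as a hedged illustration only (not the authors' training method), constraining weights to a finite set can be sketched as a nearest-level projection onto the values a neuromorphic synapse can store:

```python
import numpy as np

def project_to_levels(weights, levels):
    """Project each weight to the nearest value in a finite discrete set,
    e.g. the levels a neuromorphic synapse can actually represent."""
    levels = np.sort(np.asarray(levels, dtype=float))
    w = np.asarray(weights, dtype=float)
    # index of the nearest level for every weight (vectorised)
    idx = np.abs(w[..., None] - levels).argmin(axis=-1)
    return levels[idx]

w = np.array([0.07, -0.9, 0.44, 1.2])
q = project_to_levels(w, levels=[-1.0, -0.5, 0.0, 0.5, 1.0])
# q -> [0.0, -1.0, 0.5, 1.0]
```

In practice such a projection is typically applied during or after training (e.g. quantisation-aware training), so the network learns to tolerate the discrete weight grid.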

no code implementations • 17 Aug 2020 • Lauren Pusey-Nazzaro, Prasanna Date

In this work, we attempt to solve the integer-weight knapsack problem using the D-Wave 2000Q adiabatic quantum computer.
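The abstract excerpt doesn't reproduce the formulation; as a hedged sketch, one standard way to cast the 0/1 knapsack problem in the QUBO form an adiabatic annealer accepts (here with a simple unary slack encoding and an illustrative penalty weight, solved by brute force for checking) is:

```python
import itertools
import numpy as np

def knapsack_qubo(values, weights, capacity, penalty):
    """Build a QUBO for the 0/1 knapsack problem.

    Item variables x_i plus unary slack bits s_j encode the constraint
    sum(w_i x_i) + sum(s_j) == capacity as a quadratic penalty, while
    item values are rewarded on the diagonal (minimisation convention).
    """
    n = len(values)
    m = capacity                         # unary slack bits (simple, not compact)
    a = np.concatenate([np.asarray(weights, dtype=float), np.ones(m)])
    # penalty * (a.z - capacity)^2, expanded into QUBO form (constant dropped)
    Q = penalty * np.outer(a, a)
    np.fill_diagonal(Q, penalty * (a * a - 2 * capacity * a))
    for i in range(n):
        Q[i, i] -= values[i]             # reward selecting item i
    return Q

def brute_force_min(Q):
    """Exhaustively minimise z^T Q z over binary z (for tiny instances only)."""
    best, best_z = float("inf"), None
    for bits in itertools.product([0, 1], repeat=Q.shape[0]):
        z = np.array(bits)
        e = z @ Q @ z
        if e < best:
            best, best_z = e, z
    return best_z

values, weights, capacity = [10, 13, 7], [3, 4, 2], 5
z = brute_force_min(knapsack_qubo(values, weights, capacity, penalty=50))
chosen = [i for i in range(len(values)) if z[i]]
# chosen -> [0, 2]: value 17 at exactly the capacity of 5
```

On real hardware the brute-force step is replaced by the annealer, and the penalty weight must be tuned so constraint violations always cost more than any value gained.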

no code implementations • 10 Aug 2020 • Davis Arthur, Prasanna Date

We present a quantum approach to solving the balanced $k$-means clustering training problem on the D-Wave 2000Q adiabatic quantum computer.
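The excerpt doesn't give the QUBO itself; a hedged sketch of a standard formulation (one-hot assignment variables, squared-distance objective, penalty terms for one-cluster-per-point and equal cluster sizes, with an illustrative penalty weight and a brute-force check) is:

```python
import itertools
import numpy as np

def balanced_kmeans_qubo(points, k, penalty):
    """QUBO over one-hot variables x[i, c] = 1 iff point i is in cluster c.

    Objective: pairwise squared distances within each cluster.
    Penalties: each point in exactly one cluster; each cluster
    holds exactly n // k points.
    """
    n = len(points)
    size = n // k
    D = ((points[:, None, :] - points[None, :, :]) ** 2).sum(-1)
    Q = np.zeros((n * k, n * k))
    idx = lambda i, c: i * k + c
    for c in range(k):                    # within-cluster pairwise distances
        for i in range(n):
            for j in range(i + 1, n):
                Q[idx(i, c), idx(j, c)] += D[i, j]
    for i in range(n):                    # one-hot penalty: (sum_c x_ic - 1)^2
        for c in range(k):
            Q[idx(i, c), idx(i, c)] -= penalty
            for c2 in range(c + 1, k):
                Q[idx(i, c), idx(i, c2)] += 2 * penalty
    for c in range(k):                    # balance penalty: (sum_i x_ic - size)^2
        for i in range(n):
            Q[idx(i, c), idx(i, c)] += penalty * (1 - 2 * size)
            for i2 in range(i + 1, n):
                Q[idx(i, c), idx(i2, c)] += 2 * penalty
    return Q

def brute_force_min(Q):
    best, best_z = float("inf"), None
    for bits in itertools.product([0, 1], repeat=Q.shape[0]):
        z = np.array(bits)
        e = z @ Q @ z
        if e < best:
            best, best_z = e, z
    return best_z

points = np.array([[0.0, 0.0], [0.0, 1.0], [5.0, 0.0], [5.0, 1.0]])
k = 2
z = brute_force_min(balanced_kmeans_qubo(points, k, penalty=200.0))
labels = z.reshape(len(points), k).argmax(axis=1)
# the two well-separated pairs end up in separate, equal-sized clusters
```

The number of binary variables grows as n·k, which is what limits the problem sizes an annealer can embed directly.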

1 code implementation • 5 Aug 2020 • Prasanna Date, Davis Arthur, Lauren Pusey-Nazzaro

In this paper, we formulate the training problems of three machine learning models (linear regression, support vector machine (SVM), and equal-sized k-means clustering) as quadratic unconstrained binary optimization (QUBO) problems so that they can be trained efficiently on adiabatic quantum computers.
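For the linear regression case, the key idea is to expand each real weight over a fixed precision vector of binary variables, which turns least squares into a QUBO. A minimal sketch under that assumption (toy data, illustrative precision vector, brute-force check in place of an annealer):

```python
import itertools
import numpy as np

def regression_qubo(X, y, prec):
    """QUBO for least-squares regression with weights encoded as fixed-point
    binary expansions: w_j = sum_b prec[b] * theta[j, b].

    Minimising ||X P theta - y||^2 over binary theta has quadratic part
    (XP)^T (XP) and linear part -2 (XP)^T y (constant ||y||^2 dropped).
    """
    d = X.shape[1]
    # P maps the d * len(prec) binary variables to the d real weights
    P = np.kron(np.eye(d), np.asarray(prec, dtype=float))
    XP = X @ P
    Q = XP.T @ XP + np.diag(-2 * XP.T @ y)
    return Q, P

def brute_force_min(Q):
    best, best_z = float("inf"), None
    for bits in itertools.product([0, 1], repeat=Q.shape[0]):
        z = np.array(bits, dtype=float)
        e = z @ Q @ z
        if e < best:
            best, best_z = e, z
    return best_z

# toy data generated by w = [1.5, -1.0], exactly representable with prec below
X = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [2.0, 1.0]])
y = X @ np.array([1.5, -1.0])
Q, P = regression_qubo(X, y, prec=[-2.0, 1.0, 0.5])
w = P @ brute_force_min(Q)
# w -> [1.5, -1.0]
```

The precision vector trades qubit count against weight resolution: each extra entry doubles the representable weight grid but adds d more binary variables.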

no code implementations • 5 Aug 2020 • Prasanna Date, Thomas Potok

A major challenge in machine learning is the computational expense of training these models.

no code implementations • 21 Apr 2020 • Maryam Parsa, Catherine D. Schuman, Prasanna Date, Derek C. Rose, Bill Kay, J. Parker Mitchell, Steven R. Young, Ryan Dellana, William Severa, Thomas E. Potok, Kaushik Roy

In this work, we introduce a Bayesian approach for optimizing the hyperparameters of an algorithm for training binary communication networks that can be deployed to neuromorphic hardware.

Papers With Code is a free resource with all data licensed under CC-BY-SA.