Search Results for author: Siddharth Joshi

Found 21 papers, 6 papers with code

Slax: A Composable JAX Library for Rapid and Flexible Prototyping of Spiking Neural Networks

no code implementations • 8 Apr 2024 • Thomas M. Summe, Siddharth Joshi

Recent advances in algorithms for training spiking neural networks (SNNs) often leverage their unique dynamics.

Investigating the Benefits of Projection Head for Representation Learning

no code implementations • 18 Mar 2024 • Yihao Xue, Eric Gan, Jiayi Ni, Siddharth Joshi, Baharan Mirzasoleiman

An effective technique for obtaining high-quality representations is adding a projection head on top of the encoder during training, then discarding it and using the pre-projection representations.

Contrastive Learning · Data Augmentation · +1
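
The recipe above is easy to sketch. Below is a minimal PyTorch illustration (layer sizes and module names are hypothetical, not taken from the paper): a small MLP head is trained on top of the encoder and then thrown away, so downstream tasks consume the pre-projection features.

```python
import torch
import torch.nn as nn

encoder = nn.Sequential(nn.Linear(784, 512), nn.ReLU(), nn.Linear(512, 256))
projection_head = nn.Sequential(nn.Linear(256, 128), nn.ReLU(), nn.Linear(128, 64))

def training_features(x):
    # During training, the contrastive loss is applied to the projected output.
    return projection_head(encoder(x))

def downstream_features(x):
    # After training, the head is discarded; downstream tasks (e.g. a linear
    # probe) consume the pre-projection representations.
    with torch.no_grad():
        return encoder(x)

x = torch.randn(8, 784)
z_train = training_features(x)    # (8, 64): fed to the training loss
h_eval = downstream_features(x)   # (8, 256): used for evaluation
```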

Data-Efficient Contrastive Language-Image Pretraining: Prioritizing Data Quality over Quantity

1 code implementation • 18 Mar 2024 • Siddharth Joshi, Arnav Jain, Ali Payani, Baharan Mirzasoleiman

We show that subsets that closely preserve the cross-covariance of the images and captions of the full data provably achieve superior generalization performance.

Zero-shot Generalization
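
As a concrete illustration of the criterion above, here is a naive greedy sketch (the paper's actual algorithm and its guarantees are more sophisticated; the embeddings, sizes, and Frobenius-norm objective here are illustrative):

```python
import numpy as np

def greedy_cross_cov_subset(img_emb, txt_emb, k):
    # img_emb: (n, d_img), txt_emb: (n, d_txt) paired embeddings.
    n = img_emb.shape[0]
    target = img_emb.T @ txt_emb / n              # full-data cross-covariance
    running = np.zeros_like(target)
    chosen, remaining = [], set(range(n))
    for step in range(1, k + 1):
        best, best_err = None, np.inf
        for i in remaining:
            cand = (running + np.outer(img_emb[i], txt_emb[i])) / step
            err = np.linalg.norm(cand - target)   # Frobenius distance
            if err < best_err:
                best, best_err = i, err
        chosen.append(best)
        remaining.remove(best)
        running += np.outer(img_emb[best], txt_emb[best])
    return chosen

idx = greedy_cross_cov_subset(np.random.randn(100, 8), np.random.randn(100, 8), k=10)
```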

Improvements in Interlayer Pipelining of CNN Accelerators Using Genetic Algorithms

no code implementations • 20 Nov 2023 • Mark Horeni, Siddharth Joshi

Deploying Convolutional Neural Networks (CNNs) on edge platforms necessitates efficient hardware acceleration.

Estimating Post-Synaptic Effects for Online Training of Feed-Forward SNNs

1 code implementation • 7 Nov 2023 • Thomas Summe, Clemens JS Schaefer, Siddharth Joshi

We show improved scaling for multi-layer networks using a novel approximation of temporal effects on the subsequent layer's activity.

Understanding the Robustness of Multi-modal Contrastive Learning to Distribution Shift

no code implementations • 8 Oct 2023 • Yihao Xue, Siddharth Joshi, Dang Nguyen, Baharan Mirzasoleiman

Recently, multimodal contrastive learning (MMCL) approaches, such as CLIP, have achieved remarkable success in learning representations that are robust against distribution shift and generalize to new domains.

Contrastive Learning · Zero-Shot Learning

Hadamard Domain Training with Integers for Class Incremental Quantized Learning

no code implementations • 5 Oct 2023 • Martin Schiemer, Clemens JS Schaefer, Jayden Parker Vap, Mark James Horeni, Yu Emma Wang, Juan Ye, Siddharth Joshi

In this paper, we propose a technique that leverages inexpensive Hadamard transforms to enable low-precision training with only integer matrix multiplications.

Class Incremental Learning · Human Activity Recognition · +2
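
The core trick above fits in a short sketch: rotate both operands into the Hadamard domain, quantize, and multiply with integers only (sizes, bit widths, and per-tensor scaling are illustrative, not the paper's exact scheme). The rotation spreads outliers across channels, which makes aggressive integer quantization less damaging.

```python
import numpy as np

def hadamard(n):
    # Sylvester construction; n must be a power of two.
    H = np.array([[1.0]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H / np.sqrt(n)                  # orthonormal: H @ H.T == I

def quantize(x, bits=8):
    s = np.abs(x).max() / (2 ** (bits - 1) - 1) + 1e-12
    return np.round(x / s).astype(np.int32), s

def hadamard_int_matmul(a, b, bits=8):
    H = hadamard(a.shape[1])
    qa, sa = quantize(a @ H, bits)         # rotate into the Hadamard domain,
    qb, sb = quantize(H.T @ b, bits)       # then quantize both operands
    return (qa @ qb) * (sa * sb)           # integer-only matmul, float rescale

a, b = np.random.randn(4, 64), np.random.randn(64, 16)
print(np.abs(hadamard_int_matmul(a, b) - a @ b).max())  # small quantization error
```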

Towards Mitigating Spurious Correlations in the Wild: A Benchmark and a more Realistic Dataset

1 code implementation • 21 Jun 2023 • Siddharth Joshi, Yu Yang, Yihao Xue, Wenhan Yang, Baharan Mirzasoleiman

Deep neural networks often exploit non-predictive features that are spuriously correlated with class labels, leading to poor performance on groups of examples without such features.

Augmenting Hessians with Inter-Layer Dependencies for Mixed-Precision Post-Training Quantization

no code implementations • 8 Jun 2023 • Clemens JS Schaefer, Navid Lambert-Shirzad, Xiaofan Zhang, Chiachen Chou, Tom Jablin, Jian Li, Elfie Guo, Caitlin Stanton, Siddharth Joshi, Yu Emma Wang

To address this challenge, we propose a mixed-precision post-training quantization (PTQ) approach that assigns different numerical precisions to tensors in a network based on their specific needs, reducing the memory footprint and improving latency while preserving model accuracy.

Quantization

Which Features are Learnt by Contrastive Learning? On the Role of Simplicity Bias in Class Collapse and Feature Suppression

no code implementations • 25 May 2023 • Yihao Xue, Siddharth Joshi, Eric Gan, Pin-Yu Chen, Baharan Mirzasoleiman

However, supervised CL is prone to collapsing representations of subclasses within a class by not capturing all their features, and unsupervised CL may suppress harder class-relevant features by focusing on learning easy class-irrelevant features; both significantly compromise representation quality.

Contrastive Learning · Representation Learning

NeuroBench: A Framework for Benchmarking Neuromorphic Computing Algorithms and Systems

1 code implementation • 10 Apr 2023 • Jason Yik, Korneel Van den Berghe, Douwe den Blanken, Younes Bouhadjar, Maxime Fabre, Paul Hueber, Denis Kleyko, Noah Pacik-Nelson, Pao-Sheng Vincent Sun, Guangzhi Tang, Shenqi Wang, Biyan Zhou, Soikat Hasan Ahmed, George Vathakkattil Joseph, Benedetto Leto, Aurora Micheli, Anurag Kumar Mishra, Gregor Lenz, Tao Sun, Zergham Ahmed, Mahmoud Akl, Brian Anderson, Andreas G. Andreou, Chiara Bartolozzi, Arindam Basu, Petrut Bogdan, Sander Bohte, Sonia Buckley, Gert Cauwenberghs, Elisabetta Chicca, Federico Corradi, Guido de Croon, Andreea Danielescu, Anurag Daram, Mike Davies, Yigit Demirag, Jason Eshraghian, Tobias Fischer, Jeremy Forest, Vittorio Fra, Steve Furber, P. Michael Furlong, William Gilpin, Aditya Gilra, Hector A. Gonzalez, Giacomo Indiveri, Siddharth Joshi, Vedant Karia, Lyes Khacef, James C. Knight, Laura Kriener, Rajkumar Kubendran, Dhireesha Kudithipudi, Yao-Hong Liu, Shih-Chii Liu, Haoyuan Ma, Rajit Manohar, Josep Maria Margarit-Taulé, Christian Mayr, Konstantinos Michmizos, Dylan Muir, Emre Neftci, Thomas Nowotny, Fabrizio Ottati, Ayca Ozcelikkale, Priyadarshini Panda, Jongkil Park, Melika Payvand, Christian Pehle, Mihai A. Petrovici, Alessandro Pierro, Christoph Posch, Alpha Renner, Yulia Sandamirskaya, Clemens JS Schaefer, André van Schaik, Johannes Schemmel, Samuel Schmidgall, Catherine Schuman, Jae-sun Seo, Sadique Sheik, Sumit Bam Shrestha, Manolis Sifalakis, Amos Sironi, Matthew Stewart, Kenneth Stewart, Terrence C. Stewart, Philipp Stratmann, Jonathan Timcheck, Nergis Tömen, Gianvito Urgese, Marian Verhelst, Craig M. Vineyard, Bernhard Vogginger, Amirreza Yousefzadeh, Fatima Tuz Zohora, Charlotte Frenkel, Vijay Janapa Reddi

The NeuroBench framework introduces a common set of tools and systematic methodology for inclusive benchmark measurement, delivering an objective reference framework for quantifying neuromorphic approaches in both hardware-independent (algorithm track) and hardware-dependent (system track) settings.

Benchmarking

Data-Efficient Contrastive Self-supervised Learning: Most Beneficial Examples for Supervised Learning Contribute the Least

2 code implementations • 18 Feb 2023 • Siddharth Joshi, Baharan Mirzasoleiman

In this work, we address this problem for the first time, by proving that examples that contribute the most to contrastive SSL are those that have the most similar augmentations to other examples, in expectation.

Contrastive Learning · Open-Ended Question Answering · +1
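
The claim above suggests a simple selection rule, sketched here under strong simplifying assumptions (precomputed, L2-normalized view embeddings; this is not the paper's actual implementation):

```python
import numpy as np

def select_by_expected_aug_similarity(aug_embs, k):
    # aug_embs: (n_examples, n_views, d) L2-normalized embeddings of
    # several augmented views per example.
    mean_views = aug_embs.mean(axis=1)     # expected view embedding per example
    sim = mean_views @ mean_views.T        # pairwise expected similarity
    np.fill_diagonal(sim, 0.0)
    scores = sim.mean(axis=1)              # avg similarity to *other* examples
    return np.argsort(-scores)[:k]         # keep the highest scorers

views = np.random.randn(50, 4, 16)
views /= np.linalg.norm(views, axis=-1, keepdims=True)
subset = select_by_expected_aug_similarity(views, k=10)
```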

The Hardware Impact of Quantization and Pruning for Weights in Spiking Neural Networks

1 code implementation • 8 Feb 2023 • Clemens JS Schaefer, Pooria Taheri, Mark Horeni, Siddharth Joshi

Energy-efficient implementations and deployments of spiking neural networks (SNNs) have been of great interest due to the possibility of developing artificial systems that approach the computational power and energy efficiency of the biological brain.

Gesture Recognition · Quantization

Mixed Precision Post Training Quantization of Neural Networks with Sensitivity Guided Search

no code implementations • 2 Feb 2023 • Clemens JS Schaefer, Elfie Guo, Caitlin Stanton, Xiaofan Zhang, Tom Jablin, Navid Lambert-Shirzad, Jian Li, Chiachen Chou, Siddharth Joshi, Yu Emma Wang

In this paper, we propose a method to efficiently determine quantization configurations of different tensors in ML models using post-training mixed-precision quantization.

Quantization
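
A hedged sketch of the general pattern (the paper's search is more refined): measure a per-tensor sensitivity, for example the loss increase when that tensor alone is quantized to the low precision, then demote the least sensitive tensors first until an average bit-width budget is met.

```python
import numpy as np

def assign_precisions(sensitivities, low=4, high=8, avg_budget=6.0):
    # sensitivities[i]: loss increase when tensor i alone is quantized to `low`.
    bits = np.full(len(sensitivities), high, dtype=float)
    for i in np.argsort(sensitivities):    # least sensitive tensors first
        if bits.mean() <= avg_budget:
            break
        bits[i] = low                      # demote until the budget is met
    return bits.astype(int)

print(assign_precisions(np.array([0.9, 0.1, 0.4, 0.05])))  # e.g. [8 4 8 4]
```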

Edge Inference with Fully Differentiable Quantized Mixed Precision Neural Networks

no code implementations • 15 Jun 2022 • Clemens JS Schaefer, Siddharth Joshi, Shan Li, Raul Blazquez

Quantizing the parameters and operations to lower bit-precision offers substantial memory and energy savings for neural network inference, facilitating the use of DNNs on edge computing platforms.

Edge-computing · Quantization
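
"Fully differentiable" quantization typically means training through the rounding operation. A standard building block is the straight-through estimator, sketched here in generic form (illustrative, not the paper's exact formulation):

```python
import torch

def round_ste(x):
    # Forward: round. Backward: identity (straight-through estimator).
    return x + (torch.round(x) - x).detach()

def fake_quantize(x, scale, bits=4):
    qmax = 2 ** (bits - 1) - 1
    q = torch.clamp(round_ste(x / scale), -qmax - 1, qmax)
    return q * scale  # gradients reach both x and the learnable scale

w = torch.randn(16, 16, requires_grad=True)
scale = torch.tensor(0.05, requires_grad=True)
fake_quantize(w, scale).pow(2).sum().backward()
print(w.grad.shape, scale.grad)  # both receive gradients
```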

Edge AI without Compromise: Efficient, Versatile and Accurate Neurocomputing in Resistive Random-Access Memory

no code implementations • 17 Aug 2021 • Weier Wan, Rajkumar Kubendran, Clemens Schaefer, S. Burc Eryilmaz, Wenqiang Zhang, Dabin Wu, Stephen Deiss, Priyanka Raina, He Qian, Bin Gao, Siddharth Joshi, Huaqiang Wu, H. -S. Philip Wong, Gert Cauwenberghs

Realizing today's cloud-level artificial intelligence functionalities directly on devices distributed at the edge of the internet calls for edge hardware capable of processing multiple modalities of sensory data (e.g., video, audio) at unprecedented energy efficiency.

Image Classification · Image Reconstruction

Analog vs. Digital Spatial Transforms: A Throughput, Power, and Area Comparison

no code implementations • 15 Sep 2020 • Zephan M. Enciso, Seyed Hadi Mirfarshbafan, Oscar Castañeda, Clemens JS Schaefer, Christoph Studer, Siddharth Joshi

Spatial linear transforms that process multiple parallel analog signals to simplify downstream signal processing find widespread use in multi-antenna communication systems, machine learning inference, data compression, audio and ultrasound applications, among many others.

Data Compression

Memory Organization for Energy-Efficient Learning and Inference in Digital Neuromorphic Accelerators

no code implementations • 5 Mar 2020 • Clemens JS Schaefer, Patrick Faley, Emre O. Neftci, Siddharth Joshi

The energy efficiency of neuromorphic hardware is greatly affected by the energy of storing, accessing, and updating synaptic parameters.

Training a Probabilistic Graphical Model with Resistive Switching Electronic Synapses

no code implementations • 27 Sep 2016 • S. Burc Eryilmaz, Emre Neftci, Siddharth Joshi, Sang-Bum Kim, Matthew BrightSky, Hsiang-Lan Lung, Chung Lam, Gert Cauwenberghs, H. -S. Philip Wong

Current large-scale implementations of deep learning and data mining require thousands of processors and massive amounts of off-chip memory, and consume gigajoules of energy.

Forward Table-Based Presynaptic Event-Triggered Spike-Timing-Dependent Plasticity

no code implementations • 11 Jul 2016 • Bruno U. Pedroni, Sadique Sheik, Siddharth Joshi, Georgios Detorakis, Somnath Paul, Charles Augustine, Emre Neftci, Gert Cauwenberghs

We present a novel method for realizing both causal and acausal weight updates using only forward lookup access of the synaptic connectivity table, permitting memory-efficient implementation.
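
A hedged sketch of the idea (a nearest-spike approximation with illustrative constants, not the paper's exact update rule): at each presynaptic spike, only the forward table is consulted, and both the acausal (post-before-pre) and the deferred causal (pre-before-post) updates are applied.

```python
import numpy as np

A_PLUS, A_MINUS, TAU = 0.01, 0.012, 20.0   # illustrative STDP constants (ms)

def on_pre_spike(pre, t, fwd, w, last_post, last_pre):
    """Apply STDP for all synapses of `pre` using only the forward table."""
    for post in fwd[pre]:                  # forward lookup: pre -> post targets
        # Acausal update: the post neuron's most recent spike preceded this
        # pre spike, so depress the synapse.
        w[pre, post] -= A_MINUS * np.exp(-(t - last_post[post]) / TAU)
        # Deferred causal update: if the post neuron fired after this
        # synapse's previous pre spike, apply the potentiation now.
        if last_post[post] > last_pre[pre]:
            dt = last_post[post] - last_pre[pre]
            w[pre, post] += A_PLUS * np.exp(-dt / TAU)
    last_pre[pre] = t

fwd = {0: [0, 1]}                          # one pre neuron, two post targets
w = np.zeros((1, 2))
last_post, last_pre = np.array([5.0, -1e9]), np.array([0.0])
on_pre_spike(0, 10.0, fwd, w, last_post, last_pre)
```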

Stochastic Synapses Enable Efficient Brain-Inspired Learning Machines

no code implementations • 14 Nov 2015 • Emre O. Neftci, Bruno U. Pedroni, Siddharth Joshi, Maruan Al-Shedivat, Gert Cauwenberghs

Recent studies have shown that synaptic unreliability is a robust and sufficient mechanism for inducing the stochasticity observed in cortex.
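
A minimal sketch of such a stochastic synapse (a Bernoulli blank-out model in the spirit of the paper; the transmission probability and sizes are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

def stochastic_synaptic_current(spikes, weights, p=0.5):
    # Each synapse transmits an incoming spike independently with probability p,
    # injecting multiplicative noise akin to cortical synaptic unreliability.
    gate = rng.random(weights.shape) < p
    return (spikes[:, None] * gate * weights).sum(axis=0)

spikes = np.array([1, 0, 1])               # presynaptic spike vector
weights = rng.normal(size=(3, 4))          # 3 pre -> 4 post synapses
print(stochastic_synaptic_current(spikes, weights))
```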
