no code implementations • 5 Mar 2024 • Ankur Singh
The model, implemented through a hybrid Python and PyTorch approach, accounts for various non-idealities, achieving exceptional training accuracies of 90.02% and 91.03% for the CIFAR-10 dataset with memristive and memcapacitive crossbar arrays on an 8-layer VGG network.
no code implementations • 4 Mar 2024 • Ankur Singh, Sanghyeon Choi, Gunuk Wang, Maryaradhiya Daimari, Byung-Geun Lee
Reservoir computing (RC) offers a neuromorphic framework that is particularly effective for processing spatiotemporal signals.
no code implementations • 14 Feb 2024 • J. Senthilnath, Adithya Bhattiprolu, Ankur Singh, Bangjian Zhou, Min Wu, Jón Atli Benediktsson, XiaoLi Li
A novel online clustering algorithm is presented where an Evolving Restricted Boltzmann Machine (ERBM) is embedded with a Kohonen Network called ERBM-KNet.
no code implementations • 12 May 2023 • Rajdeep Dutta, Qincheng Wang, Ankur Singh, Dhruv Kumarjiguda, Li Xiaoli, Senthilnath Jayavelu
This paper presents a novel RL algorithm, S-REINFORCE, which is designed to generate interpretable policies for dynamic decision-making tasks.
no code implementations • 14 Feb 2023 • Ankur Singh, Senthilnath Jayavelu
Despite the recent success of deep neural networks, there remains a need for effective methods to enhance domain generalization using vision transformers.
no code implementations • 18 Apr 2022 • Ankur Singh, Piyush Rai
The proposed semi-supervised technique can be used as a plug-and-play module with any supervised GAN-based Super-Resolution method to enhance its performance.
no code implementations • 15 Jun 2021 • Dwarikanath Mahapatra, Ankur Singh
While medical image segmentation is an important task for computer-aided diagnosis, the high expertise required for pixelwise manual annotation makes it a challenging and time-consuming task.
no code implementations • 9 Jun 2020 • Sarath Sivaprasad, Ankur Singh, Naresh Manwani, Vineet Gandhi
In this paper, we investigate a constrained formulation of neural networks where the output is a convex function of the input.
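The constrained formulation above can be illustrated with a minimal sketch of an input-convex network in the style of Amos et al. (an assumed standard architecture, not necessarily the paper's exact model): hidden-to-hidden weights are constrained to be non-negative and activations are convex and non-decreasing, which makes the scalar output a convex function of the input.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(v):
    # ReLU is convex and non-decreasing, the two properties that
    # preserve convexity under composition
    return np.maximum(v, 0.0)

class InputConvexNet:
    """Sketch of an input-convex network: each hidden layer combines a
    non-negatively weighted previous activation with a direct affine
    path from the input, so the output stays convex in x."""
    def __init__(self, in_dim, hidden, depth):
        self.W0 = rng.standard_normal((hidden, in_dim))
        # non-negative hidden-to-hidden weights (the convexity constraint)
        self.Wz = [np.abs(rng.standard_normal((hidden, hidden)))
                   for _ in range(depth)]
        # unconstrained skip connections from the input
        self.Wx = [rng.standard_normal((hidden, in_dim))
                   for _ in range(depth)]
        self.wout = np.abs(rng.standard_normal(hidden))

    def __call__(self, x):
        z = relu(self.W0 @ x)
        for Wz, Wx in zip(self.Wz, self.Wx):
            z = relu(Wz @ z + Wx @ x)  # Wz >= 0 preserves convexity
        return self.wout @ z           # non-negative output weights

# numerical convexity check: f(midpoint) <= average of endpoint values
net = InputConvexNet(4, 16, 2)
a, b = rng.standard_normal(4), rng.standard_normal(4)
assert net((a + b) / 2) <= (net(a) + net(b)) / 2 + 1e-9
```

In practice the non-negativity constraint is enforced during training (e.g. by clamping or reparameterizing the weights after each update) rather than only at initialization as in this sketch.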
no code implementations • 30 Jan 2020 • Ankur Singh
Although deep learning performs well across a wide variety of tasks, it still suffers from catastrophic forgetting -- the tendency of neural networks to forget previously learned information upon learning new tasks when the previous data is no longer available.
no code implementations • 7 Dec 2018 • Ankur Singh, Anurag Chanani, Harish Karnick
Later in this paper, we show how this technique can reduce bandwidth usage by up to a factor of three when transmitting raw color videos.