no code implementations • 18 Jan 2024 • Vandan Gorade, Sparsh Mittal, Debesh Jha, Rekha Singhal, Ulas Bagci
This paper presents a novel approach that synergizes spatial and spectral representations to enhance domain-generalized medical image segmentation.
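A minimal sketch of how a spatial (convolutional) branch and a spectral (FFT-based) branch can be fused; the module below is an illustrative assumption, not the paper's actual architecture:

```python
import torch
import torch.nn as nn

class SpatialSpectralBlock(nn.Module):
    """Fuses a spatial (conv) branch with a spectral (FFT) branch.
    Illustrative only; the paper's fusion scheme may differ."""
    def __init__(self, channels):
        super().__init__()
        self.spatial = nn.Conv2d(channels, channels, 3, padding=1)
        # Pointwise mixing of the real/imaginary spectral components.
        self.spectral = nn.Conv2d(2 * channels, channels, 1)
        self.fuse = nn.Conv2d(2 * channels, channels, 1)

    def forward(self, x):
        s = self.spatial(x)
        # 2-D FFT over the spatial dims; keep real and imaginary parts.
        f = torch.fft.fft2(x, norm="ortho")
        f = self.spectral(torch.cat([f.real, f.imag], dim=1))
        return self.fuse(torch.cat([s, f], dim=1))

feat = torch.randn(1, 16, 64, 64)
out = SpatialSpectralBlock(16)(feat)   # -> (1, 16, 64, 64)
```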
no code implementations • 2 Dec 2023 • Tushir Sahu, Vidhi Bhatt, Sai Chandra Teja R, Sparsh Mittal, Nagesh Kumar S
A DIPC block combines dilated involution layers pairwise in a pyramidal structure to project the feature maps into a compact space.
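The sketch below shows one plausible reading of this block: a minimal involution layer (per-pixel generated kernels, as in Li et al., CVPR 2021) with dilation, stacked at several rates and combined pairwise before a compact projection. The pairing rule and channel widths are assumptions for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DilatedInvolution(nn.Module):
    """Involution with dilation: a K*K kernel is generated per spatial site."""
    def __init__(self, channels, k=3, dilation=1, groups=4, reduction=4):
        super().__init__()
        self.k, self.d, self.g = k, dilation, groups
        self.pad = dilation * (k - 1) // 2
        self.gen = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, 1), nn.ReLU(),
            nn.Conv2d(channels // reduction, groups * k * k, 1))

    def forward(self, x):
        b, c, h, w = x.shape
        ker = self.gen(x).view(b, self.g, self.k * self.k, h, w)
        patches = F.unfold(x, self.k, dilation=self.d, padding=self.pad)
        patches = patches.view(b, self.g, c // self.g, self.k * self.k, h, w)
        return (ker.unsqueeze(2) * patches).sum(3).reshape(b, c, h, w)

class DIPCBlock(nn.Module):
    """Pairs dilated involutions in a pyramid, then projects to a compact
    space. The pairwise-sum rule below is an assumed reading."""
    def __init__(self, channels, dilations=(1, 2, 4, 8), out_channels=32):
        super().__init__()
        self.invs = nn.ModuleList(
            DilatedInvolution(channels, dilation=d) for d in dilations)
        self.proj = nn.Conv2d(channels * len(dilations) // 2, out_channels, 1)

    def forward(self, x):
        feats = [inv(x) for inv in self.invs]
        # Pairwise combination: sum adjacent pyramid levels.
        paired = [feats[i] + feats[i + 1] for i in range(0, len(feats), 2)]
        return self.proj(torch.cat(paired, dim=1))

x = torch.randn(1, 16, 32, 32)
print(DIPCBlock(16)(x).shape)   # torch.Size([1, 32, 32, 32])
```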
no code implementations • 28 Nov 2023 • Vandan Gorade, Sparsh Mittal, Debesh Jha, Ulas Bagci
HLFD strategically distills knowledge from a combination of middle layers to earlier layers and transfers final layer knowledge to intermediate layers at both the feature and pixel levels.
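A hedged sketch of such a hierarchical distillation objective; the exact layer pairing and loss weights in HLFD may differ from this reading:

```python
import torch
import torch.nn.functional as F

def hlfd_loss(feats, final_feat, aux_preds, final_pred):
    """Hierarchical layer-wise distillation in the spirit of HLFD (pairing
    and weights are assumptions). `feats` are intermediate feature maps
    already projected to a common shape; `aux_preds` are pixel-level
    predictions from intermediate decoders."""
    mid = len(feats) // 2
    loss = 0.0
    # Feature level: middle layers supervise earlier layers.
    for early, middle in zip(feats[:mid], feats[mid:]):
        loss = loss + F.mse_loss(early, middle.detach())
    # Feature level: the final layer supervises intermediate layers.
    for f in feats[mid:]:
        loss = loss + F.mse_loss(f, final_feat.detach())
    # Pixel level: the final prediction supervises intermediate predictions.
    for p in aux_preds:
        loss = loss + F.kl_div(p.log_softmax(1),
                               final_pred.detach().softmax(1),
                               reduction="batchmean")
    return loss
```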
no code implementations • 26 Oct 2023 • Vandan Gorade, Sparsh Mittal, Debesh Jha, Ulas Bagci
On skin lesion and brain tumor segmentation datasets, we observe remarkable improvements in Intersection-over-Union scores: 1.71% for skin lesion segmentation and 8.58% for brain tumor segmentation.
no code implementations • 16 Jul 2023 • Krishna Teja Chitty-Venkata, Sparsh Mittal, Murali Emani, Venkatram Vishwanath, Arun K. Somani
This paper presents a comprehensive survey of techniques for optimizing the inference phase of transformer networks.
no code implementations • IEEE/CVF Winter Conference on Applications of Computer Vision (WACV) 2023 • Onkar Susladkar, Gayatri Deshmukh, Dhruv Makwana, Sparsh Mittal, R Sai Chandra Teja, Rekha Singhal
We introduce a novel network, GAFNet (Global Attention Fourier Net), which learns through large-scale pre-training on three image-text datasets (COCO, SBU, and CC-3M) to achieve high performance on downstream vision-and-language tasks.
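Judging only from the name, the Fourier component may resemble spectral token mixing as in GFNet; the block below is a speculative sketch of that idea, not GAFNet's actual layer:

```python
import torch
import torch.nn as nn

class GlobalFourierAttention(nn.Module):
    """Global token mixing via a learnable filter in the Fourier domain
    (GFNet-style); whether GAFNet uses exactly this form is an assumption."""
    def __init__(self, dim, seq_len):
        super().__init__()
        # Learnable complex filter over the half-spectrum of the sequence.
        self.filter = nn.Parameter(torch.randn(seq_len // 2 + 1, dim, 2) * 0.02)

    def forward(self, x):               # x: (B, N, D)
        f = torch.fft.rfft(x, dim=1, norm="ortho")
        f = f * torch.view_as_complex(self.filter)
        return torch.fft.irfft(f, n=x.size(1), dim=1, norm="ortho")

x = torch.randn(2, 196, 64)
y = GlobalFourierAttention(64, 196)(x)  # -> (2, 196, 64)
```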
1 code implementation • 26 Oct 2022 • Onkar Susladkar, Dhruv Makwana, Gayatri Deshmukh, Sparsh Mittal, Sai Chandra Teja R, Rekha Singhal
Further, we use a novel multi-headed decoder that generates a high-pass filtered image and a segmentation map, in addition to a text-free image.
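A minimal sketch of a decoder trunk with three heads producing the outputs named above; channel widths and the upsampling scheme are illustrative assumptions:

```python
import torch
import torch.nn as nn

class MultiHeadDecoder(nn.Module):
    """Shared decoder trunk with three task heads: a text-free image, a
    high-pass filtered image, and a segmentation map."""
    def __init__(self, in_ch=256):
        super().__init__()
        self.trunk = nn.Sequential(
            nn.ConvTranspose2d(in_ch, 64, 4, stride=2, padding=1), nn.ReLU())
        self.text_free = nn.Conv2d(64, 3, 3, padding=1)   # inpainted image
        self.high_pass = nn.Conv2d(64, 3, 3, padding=1)   # edge-like output
        self.seg = nn.Conv2d(64, 1, 3, padding=1)         # text-region mask

    def forward(self, z):
        h = self.trunk(z)
        return self.text_free(h), self.high_pass(h), self.seg(h)

z = torch.randn(1, 256, 32, 32)
img, hp, mask = MultiHeadDecoder()(z)   # each head outputs at 64x64
```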
1 code implementation • 13 Jul 2022 • Dhruv Makwana, Subhrajit Nag, Onkar Susladkar, Gayatri Deshmukh, Sai Chandra Teja R, Sparsh Mittal, C Krishna Mohan
We propose a novel deep learning model named ACLNet for cloud segmentation from ground images.
Ranked #1 on Semantic Segmentation on SWINySEG
1 code implementation • 3 Jul 2022 • Subhrajit Nag, Dhruv Makwana, Sai Chandra Teja R, Sparsh Mittal, C Krishna Mohan
WSCN has a model size of only 0.51 MB and requires only 0.2M FLOPs.
Ranked #1 on Semantic Segmentation on MixedWM38
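The 0.51 MB and 0.2M FLOPs figures can be sanity-checked with generic bookkeeping utilities like these (this is not WSCN itself); at 4 bytes per FP32 weight, 0.51 MB corresponds to roughly 0.13M parameters:

```python
import torch.nn as nn

def model_size_mb(model, bytes_per_param=4):
    """Model size assuming FP32 weights."""
    return sum(p.numel() for p in model.parameters()) * bytes_per_param / 1e6

def conv_flops(c_in, c_out, k, h_out, w_out):
    """Multiply-accumulate count for one conv layer (bias ignored)."""
    return c_in * c_out * k * k * h_out * w_out

tiny = nn.Sequential(nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
                     nn.Conv2d(8, 16, 3, padding=1))
print(f"{model_size_mb(tiny):.4f} MB,",
      f"{conv_flops(8, 16, 3, 28, 28) / 1e6:.2f}M MACs for the second conv")
```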
1 code implementation • IEEE Conference on Dependable and Secure Computing (DSC) 2022 • Yash Khare, Kumud Lakara, Maruthi S Inukonda, Sparsh Mittal, Mahesh Chandra, Arvind Kaushik
In this paper, we present novel bit-flip attack (BFA) algorithms for DNNs, along with techniques for defending against the attack.
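The core primitive of a bit-flip attack is toggling a single bit of a stored (typically quantized) weight. The snippet below shows that primitive only; the search for the most damaging (weight, bit) pair, e.g. gradient-based ranking, is omitted:

```python
import numpy as np

def flip_bit(weights_int8, index, bit):
    """Flip one bit of one INT8 weight, as a BFA would do in memory."""
    w = weights_int8.copy()
    w_view = w.view(np.uint8)            # reinterpret bytes for bit ops
    w_view[index] ^= np.uint8(1 << bit)  # XOR toggles the chosen bit
    return w

w = np.array([23, -77, 104, 5], dtype=np.int8)
print(flip_bit(w, index=2, bit=7))       # MSB flip changes 104 -> -24
```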
no code implementations • 6 Aug 2020 • Nandan Kumar Jha, Sparsh Mittal
The commonly used metric, arithmetic intensity, does not always correctly estimate the degree of data reuse in DNNs, since it gives equal importance to all data types.
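For reference, arithmetic intensity is the roofline-model ratio of compute to off-chip traffic. The uniform treatment of weights and activations in this textbook version is precisely the limitation being pointed out:

```python
def arithmetic_intensity(flops, bytes_weights, bytes_acts_in, bytes_acts_out):
    """FLOPs per byte moved. All data types are weighted equally here,
    even though weights and activations have very different reuse."""
    return flops / (bytes_weights + bytes_acts_in + bytes_acts_out)

# A 3x3 conv: 64->64 channels on a 56x56 map, FP32 (4 bytes per value).
flops = 2 * 64 * 64 * 3 * 3 * 56 * 56        # MACs counted as 2 FLOPs
wts   = 64 * 64 * 3 * 3 * 4
acts  = 64 * 56 * 56 * 4
print(f"{arithmetic_intensity(flops, wts, acts, acts):.1f} FLOPs/byte")
```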
no code implementations • 30 Jul 2020 • Nandan Kumar Jha, Sparsh Mittal, Binod Kumar, Govardhan Mattela
The remarkable predictive performance of deep neural networks (DNNs) has led to their adoption in service domains of unprecedented scale and scope.
no code implementations • 30 Jun 2020 • Nandan Kumar Jha, Rajat Saini, Sparsh Mittal
Surprisingly, in some cases, they surpass the accuracy of baseline networks even with inferior teachers.
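For context, a standard Hinton-style distillation loss is sketched below (the hyperparameters are illustrative); the observation above is that this recipe can help even when the teacher is weaker than the student:

```python
import torch.nn.functional as F

def kd_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.9):
    """Soft-target distillation plus hard-label cross-entropy."""
    soft = F.kl_div(F.log_softmax(student_logits / T, dim=1),
                    F.softmax(teacher_logits / T, dim=1),
                    reduction="batchmean") * T * T
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard
```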
no code implementations • 26 Jun 2020 • Nandan Kumar Jha, Shreyas Ravishankar, Sparsh Mittal, Arvind Kaushik, Dipan Mandal, Mahesh Chandra
The number of processing elements (PEs) in a fixed-sized systolic accelerator is well matched for large and compute-bound DNNs; whereas, memory-bound DNNs suffer from PE underutilization and fail to achieve peak performance and energy efficiency.
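A back-of-the-envelope model of this underutilization: tiling an (m x k) by (k x n) matmul onto a fixed systolic array leaves PEs idle whenever m or n is small relative to the array. Fill/drain cycles are ignored and the 128x128 array size is an assumed example:

```python
import math

def pe_utilization(m, k, n, rows=128, cols=128):
    """Fraction of PEs doing useful work. k sets cycles per tile but
    cancels out of the ratio once fill/drain is ignored."""
    tiles = math.ceil(m / rows) * math.ceil(n / cols)
    return (m * n) / (tiles * rows * cols)

print(pe_utilization(1024, 512, 1024))  # large GEMM: 1.00
print(pe_utilization(7, 512, 64))       # small late-layer GEMM: ~0.027
```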
1 code implementation • 26 Jun 2020 • Rajat Saini, Nandan Kumar Jha, Bedanta Das, Sparsh Mittal, C. Krishna Mohan
Our method of subspace attention is orthogonal and complementary to existing state-of-the-art attention mechanisms used in vision models.
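A simplified sketch of the subspace-attention idea: split the channels into subspaces and learn one attention map per subspace. The per-subspace map derivation below (1x1 conv plus sigmoid) is an assumption; the paper's mechanism may differ:

```python
import torch
import torch.nn as nn

class SubspaceAttention(nn.Module):
    """One learned spatial attention map per channel subspace."""
    def __init__(self, channels, subspaces=4):
        super().__init__()
        assert channels % subspaces == 0
        self.g = subspaces
        cs = channels // subspaces
        self.attn = nn.ModuleList(
            nn.Sequential(nn.Conv2d(cs, 1, 1), nn.Sigmoid())
            for _ in range(subspaces))

    def forward(self, x):
        chunks = x.chunk(self.g, dim=1)
        return torch.cat([c * a(c) for c, a in zip(chunks, self.attn)], dim=1)

x = torch.randn(1, 32, 28, 28)
print(SubspaceAttention(32)(x).shape)   # torch.Size([1, 32, 28, 28])
```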
no code implementations • 26 Jun 2020 • Nandan Kumar Jha, Sparsh Mittal, Govardhan Mattela
Reducing the number of parameters in DNNs increases the number of activations which, in turn, increases the memory footprint.
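This trade-off is easy to measure directly; the helper below (generic instrumentation, not from the paper) compares parameter memory with activation memory for a toy depthwise-style net, assuming FP32 storage:

```python
import torch
import torch.nn as nn

def param_and_activation_mb(model, x, bytes_per_val=4):
    """Return (parameter MB, activation MB) for one forward pass."""
    params = sum(p.numel() for p in model.parameters())
    acts = []
    hooks = [m.register_forward_hook(lambda m, i, o: acts.append(o.numel()))
             for m in model.modules() if len(list(m.children())) == 0]
    with torch.no_grad():
        model(x)
    for h in hooks:
        h.remove()
    return params * bytes_per_val / 1e6, sum(acts) * bytes_per_val / 1e6

# Few parameters, but large intermediate activations dominate memory.
net = nn.Sequential(nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
                    nn.Conv2d(32, 32, 3, padding=1, groups=32))
print(param_and_activation_mb(net, torch.randn(1, 3, 224, 224)))
```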
no code implementations • 26 Jun 2020 • Nandan Kumar Jha, Rajat Saini, Subhrajit Nag, Sparsh Mittal
We show that, at comparable computational complexity, DNNs with a constant group size (E2GC) are more energy-efficient than DNNs with a fixed number of groups (FgGC).
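The distinction can be made concrete with two ways of constructing a grouped convolution; the specific group counts and sizes below are illustrative:

```python
import torch.nn as nn

def fgc_conv(c_in, c_out, groups=8):
    """FgGC: the group count is fixed, so group size grows with width."""
    return nn.Conv2d(c_in, c_out, 3, padding=1, groups=groups)

def e2gc_conv(c_in, c_out, group_size=16):
    """E2GC: the group size is constant, so the group count scales with
    width, keeping per-group compute and data reuse uniform across layers."""
    return nn.Conv2d(c_in, c_out, 3, padding=1, groups=c_in // group_size)

for c in (64, 128, 256):
    f, e = fgc_conv(c, c), e2gc_conv(c, c)
    print(c, f.groups, e.groups)   # fixed 8 groups vs 4/8/16 groups
```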