Search Results for author: Neelam Sinha

Found 15 papers, 5 papers with code

Overcoming Scene Context Constraints for Object Detection in wild using Defilters

no code implementations • 12 Apr 2024 • Vamshi Krishna Kancharla, Neelam Sinha

Our experiments demonstrate that utilizing defiltered images significantly improves mean average precision compared to training object detection models on distorted images.

Object · object-detection +1

Towards understanding the nature of direct functional connectivity in visual brain network

no code implementations • 18 Mar 2024 • Debanjali Bhattacharya, Neelam Sinha

In image complexity-specific VBN classification, XGBoost yields average accuracy in the range of 86.5% to 91.5% for positively correlated VBN, which is 2% greater than that using negative correlation.
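
As a rough illustration of the classification setup described in the entry above, the sketch below trains an XGBoost classifier on a placeholder feature matrix standing in for the extracted VBN connectivity features; the data shapes and hyperparameters are illustrative assumptions, not those used in the paper.

```python
# Hypothetical sketch: XGBoost classification of visual brain network (VBN)
# connectivity features. Feature extraction and labels are assumed given;
# the random data below is only a placeholder.
import numpy as np
from xgboost import XGBClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(120, 200))    # placeholder: connectivity features per subject
y = rng.integers(0, 2, size=120)   # placeholder: image-complexity class labels

clf = XGBClassifier(n_estimators=200, max_depth=4,
                    learning_rate=0.1, eval_metric="logloss")
scores = cross_val_score(clf, X, y, cv=5)
print(f"mean CV accuracy: {scores.mean():.3f}")
```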

Multi-scale fMRI time series analysis for understanding neurodegeneration in MCI

no code implementations • 5 Feb 2024 • Ammu R., Debanjali Bhattacharya, Ameiy Acharya, Ninad Aithal, Neelam Sinha

The proposed approach is employed for the classification of a cohort of 50 healthy control (HC) and 50 Mild Cognitive Impairment (MCI) subjects, sourced from the ADNI dataset.

Time Series · Time Series Analysis

Localizing and Assessing Node Significance in Default Mode Network using Sub-Community Detection in Mild Cognitive Impairment

no code implementations • 4 Dec 2023 • Ameiy Acharya, Chakka Sai Pradeep, Neelam Sinha

After computing the NSS of each ROI in both healthy and MCI subjects, we quantify the score disparity to identify nodes most impacted by MCI.

Community Detection
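
The snippet below is a hypothetical sketch of the group-comparison step described in the entry above: given a node significance score (NSS) per ROI for each subject (the NSS computation via sub-community detection is not shown), it ranks ROIs by the disparity between the healthy-control and MCI group averages. All names and values are placeholders.

```python
# Hypothetical sketch of the NSS disparity ranking; the NSS values here are
# random placeholders standing in for scores derived from sub-community detection.
import numpy as np

rois = [f"ROI_{i}" for i in range(1, 11)]               # placeholder ROI labels
rng = np.random.default_rng(42)
nss_hc = rng.uniform(0.4, 0.9, size=(50, len(rois)))    # 50 healthy-control subjects
nss_mci = rng.uniform(0.3, 0.8, size=(50, len(rois)))   # 50 MCI subjects

# Disparity = absolute difference of group-average NSS per ROI.
disparity = np.abs(nss_hc.mean(axis=0) - nss_mci.mean(axis=0))
ranking = sorted(zip(rois, disparity), key=lambda t: t[1], reverse=True)
for roi, d in ranking[:3]:
    print(f"{roi}: NSS disparity = {d:.3f}")
```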

MCI Detection using fMRI time series embeddings of Recurrence plots

1 code implementation • 30 Nov 2023 • Ninad Aithal, Chakka Sai Pradeep, Neelam Sinha

Utilizing resting-state fMRI time series imaging, we can study the underlying dynamics at earmarked Regions of Interest (ROIs) to understand structure or the lack thereof.

Time Series
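
For readers unfamiliar with the representation named in the title, the sketch below builds a recurrence plot from a single toy ROI time series using a simple distance threshold; it is a generic illustration of recurrence plots, not the paper's pipeline.

```python
# Illustrative sketch (not the paper's code): binary recurrence plot of a
# one-dimensional time series using a fixed distance threshold.
import numpy as np
import matplotlib.pyplot as plt

def recurrence_plot(ts, eps=0.2):
    """Recurrence matrix R[i, j] = 1 if |ts[i] - ts[j]| < eps."""
    ts = (ts - ts.mean()) / ts.std()              # z-score the series
    dist = np.abs(ts[:, None] - ts[None, :])      # pairwise distances
    return (dist < eps).astype(np.uint8)

t = np.linspace(0, 20, 200)
ts = np.sin(t) + 0.1 * np.random.default_rng(0).normal(size=t.size)  # toy signal
R = recurrence_plot(ts, eps=0.3)
plt.imshow(R, cmap="binary", origin="lower")
plt.title("Recurrence plot of a toy ROI time series")
plt.savefig("recurrence_plot.png")
```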

Investigating the changes in BOLD responses during viewing of images with varied complexity: An fMRI time-series based analysis on human vision

1 code implementation • 27 Sep 2023 • Naveen Kanigiri, Manohar Suggula, Debanjali Bhattacharya, Neelam Sinha

The results of this analysis establish a baseline for studying how differently the human brain functions while viewing images of diverse complexity.

Semantic Segmentation · Time Series

Identification of Stochasticity by Matrix-decomposition: Applied on Black Hole Data

1 code implementation • 15 Jul 2023 • Sai Pradeep Chakka, Sunil Kumar Vengalil, Neelam Sinha

The proposed algorithm is applied to astronomical data: 12 temporal classes of time series of the black hole GRS 1915+105, obtained from the RXTE satellite, with an average length of 25,000.

Identifying Stochasticity in Time-Series with Autoencoder-Based Content-aware 2D Representation: Application to Black Hole Data

1 code implementation • 23 Apr 2023 • Chakka Sai Pradeep, Neelam Sinha

An autoencoder is trained with a loss function, defined on both the time and frequency domains, to learn a latent-space representation that is designed to be time-invariant.

Time Series
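
A minimal sketch of the idea described in the entry above, assuming a simple fully connected autoencoder and fixed-length segments: the reconstruction loss combines a time-domain term and a frequency-domain term, so the latent representation is shaped by both views. The architecture, loss weighting, and segment length are assumptions, not the paper's.

```python
# Assumed sketch: autoencoder with a combined time- and frequency-domain loss.
import torch
import torch.nn as nn

class TimeFreqAE(nn.Module):
    def __init__(self, n_samples=1024, latent_dim=32):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_samples, 256), nn.ReLU(),
                                     nn.Linear(256, latent_dim))
        self.decoder = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(),
                                     nn.Linear(256, n_samples))

    def forward(self, x):
        return self.decoder(self.encoder(x))

def time_freq_loss(x, x_hat, alpha=0.5):
    # Time-domain reconstruction error plus error on Fourier magnitudes.
    time_term = nn.functional.mse_loss(x_hat, x)
    freq_term = nn.functional.mse_loss(torch.abs(torch.fft.rfft(x_hat, dim=-1)),
                                       torch.abs(torch.fft.rfft(x, dim=-1)))
    return alpha * time_term + (1 - alpha) * freq_term

model = TimeFreqAE()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.randn(8, 1024)            # toy batch of fixed-length segments
opt.zero_grad()
loss = time_freq_loss(x, model(x))
loss.backward()
opt.step()
print(float(loss))
```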

SSEGEP: Small SEGment Emphasized Performance evaluation metric for medical image segmentation

no code implementations • 8 Sep 2021 • Ammu R, Neelam Sinha

To address this, we propose a novel evaluation metric for segmentation performance that emphasizes smaller segments by assigning higher weightage to pixels belonging to smaller segments.

Image Segmentation · Medical Image Segmentation +2
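
SSEGEP's exact formulation is not reproduced here; the sketch below only illustrates the general idea of weighting pixels inversely to the size of the ground-truth segment they belong to, so that missing a small segment is penalized heavily.

```python
# Hedged illustration only, not the SSEGEP formula: per-pixel weights are
# inversely proportional to the size of the ground-truth segment.
import numpy as np
from scipy import ndimage

def small_segment_weighted_overlap(gt, pred):
    """gt, pred: binary masks. Returns a weighted overlap score in [0, 1]."""
    labels, n = ndimage.label(gt)                 # connected components in GT
    if n == 0:
        return float(np.all(pred == 0))
    sizes = ndimage.sum(gt, labels, index=range(1, n + 1))
    weights = np.zeros_like(gt, dtype=float)
    for k, size in enumerate(sizes, start=1):
        weights[labels == k] = 1.0 / size         # smaller segment -> larger weight
    weights /= weights.sum()
    return float((weights * (pred > 0)).sum())    # weighted recall-like overlap

gt = np.zeros((64, 64), dtype=np.uint8); gt[2:4, 2:4] = 1; gt[20:50, 20:50] = 1
pred = gt.copy(); pred[2:4, 2:4] = 0              # prediction misses the small segment
print(small_segment_weighted_overlap(gt, pred))   # heavily penalized (0.5)
```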

Using Topological Framework for the Design of Activation Function and Model Pruning in Deep Neural Networks

no code implementations • 3 Sep 2021 • Yogesh Kochar, Sunil Kumar Vengalil, Neelam Sinha

This paper makes two independent contributions: 1) a novel activation function for faster training convergence, and 2) systematic pruning of filters from trained models, irrespective of the activation function used.

Binary Classification · speech-recognition +1

Towards Learning a Vocabulary of Visual Concepts and Operators using Deep Neural Networks

no code implementations • 1 Sep 2021 • Sunil Kumar Vengalil, Neelam Sinha

Deep neural networks have become the default choice for many applications such as image and video recognition, segmentation, and other image- and video-related tasks. However, a critical challenge with these models is the lack of explainability. This requirement of generating explainable predictions has motivated the research community to perform various analyses on trained models. In this study, we analyze the learned feature maps of trained models using MNIST images to achieve more explainable predictions. Our study focuses on deriving a set of primitive elements, here called visual concepts, that can be used to generate any arbitrary sample from the data-generating distribution. We derive the primitive elements from the feature maps learned by the model. We illustrate the idea by generating visual concepts from a Variational Autoencoder trained on MNIST images. We augment the MNIST training data by adding about 60,000 new images generated with visual concepts chosen at random. With this we were able to reduce the reconstruction loss (mean square error) from an initial value of 120 without augmentation to 60 with augmentation. Our approach is a first step towards the final goal of achieving trained deep neural network models whose predictions, features in hidden layers, and learned filters can be well explained. Such a model, when deployed in production, can easily be modified to adapt to new data, whereas existing deep learning models need retraining or fine-tuning.

Video Recognition
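
As a rough, simplified illustration of the augmentation step described in the entry above (the paper derives visual concepts from learned feature maps, which is not reproduced here), the sketch below trains a small VAE on MNIST and augments the training set with decoder samples.

```python
# Rough sketch only: small VAE on MNIST plus augmentation with generated images.
# The concept-extraction step from feature maps is omitted; decoder samples
# stand in for concept-based images.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset, ConcatDataset
from torchvision import datasets, transforms

class VAE(nn.Module):
    def __init__(self, latent_dim=16):
        super().__init__()
        self.enc = nn.Sequential(nn.Flatten(), nn.Linear(784, 256), nn.ReLU())
        self.mu = nn.Linear(256, latent_dim)
        self.logvar = nn.Linear(256, latent_dim)
        self.dec = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(),
                                 nn.Linear(256, 784), nn.Sigmoid())

    def forward(self, x):
        h = self.enc(x)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
        return self.dec(z), mu, logvar

def vae_loss(x, x_hat, mu, logvar):
    recon = nn.functional.mse_loss(x_hat, x.flatten(1), reduction="sum")
    kld = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return recon + kld

mnist = datasets.MNIST(".", train=True, download=True,
                       transform=transforms.ToTensor())
model = VAE()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for x, _ in DataLoader(mnist, batch_size=128, shuffle=True):
    opt.zero_grad()
    x_hat, mu, logvar = model(x)
    loss = vae_loss(x, x_hat, mu, logvar)
    loss.backward()
    opt.step()
    break  # single step shown; train for several epochs in practice

# Augment: draw new samples from the prior (stand-in for concept-based images).
with torch.no_grad():
    new_images = model.dec(torch.randn(60000, 16)).view(-1, 1, 28, 28)
augmented = ConcatDataset([mnist,
                           TensorDataset(new_images,
                                         torch.zeros(60000, dtype=torch.long))])
print(len(augmented))  # 120000
```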

Cognitive state classification using transformed fMRI data

no code implementations • 19 Apr 2016 • Hariharan Ramasangu, Neelam Sinha

The novelty of the proposed method lies in utilizing phase information in the transformed domain to classify between the cognitive tasks, along with a random sieve function chosen with a particular probability distribution.

Classification · General Classification +2
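
A loose illustration of the kind of pipeline described in the entry above, with the random sieve function and the exact transform omitted: phase features are taken from the Fourier transform of each ROI time series and fed to a simple classifier. All shapes and the classifier choice are assumptions, not the paper's.

```python
# Loose illustration only: phase features in the transformed domain,
# classified with logistic regression on placeholder data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
X_ts = rng.normal(size=(80, 30, 128))   # placeholder: 80 trials, 30 ROIs, 128 time points
y = rng.integers(0, 2, size=80)         # placeholder: two cognitive tasks

phase = np.angle(np.fft.rfft(X_ts, axis=-1))     # phase in the transformed domain
X = phase.reshape(len(X_ts), -1)                 # one feature vector per trial

clf = LogisticRegression(max_iter=1000)
print(cross_val_score(clf, X, y, cv=5).mean())
```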
