no code implementations • 20 Dec 2024 • Anish Chakrabarty, Arkaprabha Basu, Swagatam Das
The Gromov-Wasserstein (GW) distance is an effective measure of alignment between distributions supported on distinct ambient spaces.
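For reference, the order-$p$ GW discrepancy between metric measure spaces $(X, d_X, \mu)$ and $(Y, d_Y, \nu)$ is commonly written as below; this is generic background notation, not a restatement of this paper's formulation.

```latex
\mathrm{GW}_p(\mu,\nu) \;=\; \Big( \inf_{\pi \in \Pi(\mu,\nu)} \iint \big| d_X(x,x') - d_Y(y,y') \big|^p \, \mathrm{d}\pi(x,y)\, \mathrm{d}\pi(x',y') \Big)^{1/p},
```

where $\Pi(\mu,\nu)$ denotes the set of couplings of $\mu$ and $\nu$.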
no code implementations • 15 Nov 2024 • Sagar Ghosh, Kushal Bose, Swagatam Das
Deep Convolutional Neural Networks (DCNNs) have emerged as a pervasive tool for a wide range of applications in computer vision.
no code implementations • 16 Sep 2024 • Nikhil Raghav, Avisek Gupta, Md Sahidullah, Swagatam Das
Spectral clustering has proven effective in grouping speech representations for speaker diarization tasks, although post-processing the affinity matrix remains difficult due to the need for careful tuning before constructing the Laplacian.
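For context, a bare-bones spectral clustering pass over a precomputed affinity matrix looks roughly like the sketch below; the affinity matrix `A` and the number of speakers `k` are assumed inputs, and this illustrates the standard recipe rather than the system described in the paper.

```python
import numpy as np
from scipy.linalg import eigh
from sklearn.cluster import KMeans

def spectral_cluster(A, k):
    """Cluster items given a symmetric, non-negative affinity matrix A (n x n)."""
    d = A.sum(axis=1)
    d_inv_sqrt = 1.0 / np.sqrt(np.maximum(d, 1e-12))
    # Symmetric normalized Laplacian: L = I - D^{-1/2} A D^{-1/2}
    L = np.eye(A.shape[0]) - (A * d_inv_sqrt[:, None]) * d_inv_sqrt[None, :]
    # Spectral embedding from the k eigenvectors with the smallest eigenvalues
    _, vecs = eigh(L, subset_by_index=[0, k - 1])
    emb = vecs / np.maximum(np.linalg.norm(vecs, axis=1, keepdims=True), 1e-12)
    return KMeans(n_clusters=k, n_init=10).fit_predict(emb)
```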
no code implementations • 16 Sep 2024 • Nikhil Raghav, Subhajit Saha, Md Sahidullah, Swagatam Das
In this report, we describe the speaker diarization (SD) and language diarization (LD) systems developed by our team for the Second DISPLACE Challenge, 2024.
no code implementations • 14 Sep 2024 • Sagar Ghosh, Swagatam Das
However, with the rapid growth in data complexity, Euclidean space is proving inefficient for data representation and for learning algorithms.
no code implementations • 9 Apr 2024 • Arkaprabha Basu, Kushal Bose, Sankha Subhra Mullick, Anish Chakrabarty, Swagatam Das
Super-Resolution (SR) is a time-hallowed image processing problem that aims to improve the quality of a Low-Resolution (LR) sample up to the standard of its High-Resolution (HR) counterpart.
no code implementations • 31 Mar 2024 • Srinjoy Roy, Swagatam Das
Accounting for the uncertainty of value functions boosts exploration in Reinforcement Learning (RL).
1 code implementation • 24 Mar 2024 • Arindam Majee, Avisek Gupta, Sourav Raha, Swagatam Das
Alzheimer's disease (AD), characterized by progressive cognitive decline and memory loss, presents a formidable global health challenge, underscoring the critical importance of early and precise diagnosis for timely interventions and enhanced patient outcomes.
1 code implementation • 21 Mar 2024 • Subhajit Saha, Md Sahidullah, Swagatam Das
In contrast to existing methods that fine-tune SSL models and employ additional deep neural networks for downstream tasks, we exploit classical machine learning algorithms such as logistic regression and shallow neural networks using the SSL embeddings extracted using the pre-trained model.
Ranked #2 on Voice Anti-spoofing on ASVspoof 2019 - LA
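In that spirit, the downstream classifier can be as lightweight as the following sketch; the embedding and label files are hypothetical placeholders standing in for utterance-level SSL features extracted offline, not the authors' released pipeline.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Hypothetical files holding pre-extracted, utterance-level SSL embeddings and labels.
train_emb = np.load("train_embeddings.npy")   # shape (n_train, d)
train_lab = np.load("train_labels.npy")       # bonafide / spoof labels, shape (n_train,)
test_emb = np.load("test_embeddings.npy")

clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
clf.fit(train_emb, train_lab)
scores = clf.predict_proba(test_emb)[:, 1]    # spoofing scores for downstream evaluation
```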
1 code implementation • 11 Dec 2023 • Kushal Bose, Swagatam Das
Graph Transformers (GTs) facilitate the comprehension of graph-structured data by calculating the self-attention of node pairs without considering node position information.
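As a rough illustration of position-agnostic attention over node pairs (not the model proposed in the paper), a single dense self-attention layer on node features can be written as:

```python
import torch
import torch.nn.functional as F

def node_self_attention(X, Wq, Wk, Wv):
    """Dense self-attention over all node pairs; X has shape (n_nodes, d_in)."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = (Q @ K.T) / (K.shape[-1] ** 0.5)   # pairwise scores, (n_nodes, n_nodes)
    attn = F.softmax(scores, dim=-1)            # no positional information enters here
    return attn @ V                             # updated node representations
```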
no code implementations • 11 Dec 2023 • Anish Chakrabarty, Arkaprabha Basu, Swagatam Das
Variational Autoencoders (VAEs) have been a pioneering force in the realm of deep generative models.
no code implementations • 26 Nov 2023 • Supratik Basu, Jyotishka Ray Choudhury, Debolina Paul, Swagatam Das
On a different note, another limitation in many commonly used clustering methods is the need to specify the number of clusters beforehand.
no code implementations • 25 Aug 2023 • Yanjie Song, Yutong Wu, Yangyang Guo, Ran Yan, P. N. Suganthan, Yue Zhang, Witold Pedrycz, Swagatam Das, Rammohan Mallipeddi, Oladayo Solomon Ajani, Qiang Feng
Reinforcement learning (RL) integrated as a component in the evolutionary algorithm (EA) framework has demonstrated superior performance in recent years.
no code implementations • 1 Sep 2022 • Pourya Shamsolmoali, Masoumeh Zareapoor, Swagatam Das, Eric Granger, Salvador Garcia
Capsule networks (CapsNets) aim to parse images into a hierarchy of objects, parts, and their relations using a two-step process involving part-whole transformation and hierarchical component routing.
no code implementations • CVPR 2022 • Abhishek Kumar, Oladayo S. Ajani, Swagatam Das, Rammohan Mallipeddi
To address this issue, we propose GridShift, a mode-seeking algorithm principally based on mean shift (MS) that offers a significant speedup. To accelerate the procedure, GridShift employs a grid-based approach for neighbor search, which is linear in the number of data points.
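A minimal sketch of the grid-binning idea in general form is given below: points are hashed into cells of side `h`, and each point moves to the mean of the points in its own and adjacent cells. It illustrates grid-based neighbor search only and is not the authors' GridShift implementation.

```python
import numpy as np
from collections import defaultdict

def grid_shift_step(points, h):
    """One grid-based shift: each point moves to the mean of all points lying in
    its own grid cell or in any adjacent cell (cells have side length h)."""
    pts = np.asarray(points, dtype=float)
    cells = defaultdict(list)
    for p in pts:
        cells[tuple(np.floor(p / h).astype(int))].append(p)
    shifted = []
    for p in pts:
        c = np.floor(p / h).astype(int)
        neigh = []
        for offset in np.ndindex(*(3,) * len(c)):   # 3^d neighborhood of the cell
            neigh.extend(cells.get(tuple(c + np.array(offset) - 1), []))
        shifted.append(np.mean(neigh, axis=0))
    return np.array(shifted)
```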
no code implementations • 8 May 2022 • Omatharv Bharat Vaidya, Rithvik Terence DSouza, Snehanshu Saha, Soma Dhavala, Swagatam Das
We introduce the Hamiltonian Monte Carlo Particle Swarm Optimizer (HMC-PSO), an optimization algorithm that reaps the benefits of both Exponentially Averaged Momentum PSO and HMC sampling.
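For orientation, the classical inertia-weight (momentum-style) PSO update reads as follows; this is textbook background only, and the exponentially averaged momentum and HMC components of HMC-PSO are not reproduced here.

```latex
v_i^{t+1} = \beta\, v_i^{t} + c_1 r_1 \big(p_i - x_i^{t}\big) + c_2 r_2 \big(g - x_i^{t}\big), \qquad x_i^{t+1} = x_i^{t} + v_i^{t+1},
```

where $\beta$ is the inertia (momentum) coefficient, $c_1, c_2$ are the cognitive and social weights, $r_1, r_2 \sim \mathrm{Uniform}(0,1)$, $p_i$ is particle $i$'s personal best, and $g$ is the global best.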
1 code implementation • 7 Apr 2022 • Shounak Datta, Sankha Subhra Mullick, Anish Chakrabarty, Swagatam Das
We then use a novel strategy to artificially form new tasks for training by interpolating between the available tasks and their respective interval bounds.
no code implementations • 6 Jan 2022 • Saptarshi Chakraborty, Debolina Paul, Swagatam Das
The problem of linear prediction has been studied extensively over the past century under quite general frameworks.
1 code implementation • NeurIPS 2021 • Debolina Paul, Saptarshi Chakraborty, Swagatam Das, Jason Xu
Recent advances in center-based clustering continue to improve upon the drawbacks of Lloyd's celebrated $k$-means algorithm over $60$ years after its introduction.
no code implementations • NeurIPS 2021 • Anish Chakrabarty, Swagatam Das
The introduction of Variational Autoencoders (VAE) has been marked as a breakthrough in the history of representation learning models.
no code implementations • 25 Apr 2021 • Sandipan Dhar, Nanda Dulal Jana, Swagatam Das
The voice conversion (VC) task is performed through a three-stage pipeline consisting of speech analysis, speech feature mapping, and speech reconstruction.
no code implementations • 5 Feb 2021 • Debolina Paul, Saptarshi Chakraborty, Swagatam Das
Principal Component Analysis (PCA) is a fundamental tool for data visualization, denoising, and dimensionality reduction.
1 code implementation • 20 Dec 2020 • Saptarshi Chakraborty, Debolina Paul, Swagatam Das
Mean shift is a simple iterative procedure that gradually shifts data points towards the mode, i.e., the region of highest density of data points.
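For background, the classical Gaussian-kernel mean shift iteration referred to here can be sketched as below; the bandwidth `h` and the number of iterations are assumed parameters.

```python
import numpy as np

def mean_shift(points, h, n_iter=50):
    """Iteratively shift every point toward the kernel-weighted mean of the data."""
    pts = np.asarray(points, dtype=float)
    shifted = pts.copy()
    for _ in range(n_iter):
        for i, x in enumerate(shifted):
            d2 = np.sum((pts - x) ** 2, axis=1)
            w = np.exp(-d2 / (2.0 * h ** 2))     # Gaussian kernel weights
            shifted[i] = w @ pts / w.sum()       # move toward the local mode
    return shifted
```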
no code implementations • 12 Nov 2020 • Debolina Paul, Saptarshi Chakraborty, Swagatam Das, Jason Xu
We show the method implicitly performs annealing in kernel feature space while retaining efficient, closed-form updates, and we rigorously characterize its convergence properties both from computational and statistical points of view.
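One reason closed-form updates survive kernelization is that the squared feature-space distance from a point to a cluster mean needs only kernel evaluations; the identity below is standard kernel k-means background in generic notation, not the paper's annealed objective.

```latex
\big\| \phi(x_i) - \bar{\phi}_k \big\|^2 \;=\; K_{ii} \;-\; \frac{2}{|C_k|} \sum_{j \in C_k} K_{ij} \;+\; \frac{1}{|C_k|^2} \sum_{j,l \in C_k} K_{jl},
```

where $K_{ij} = \kappa(x_i, x_j)$, $C_k$ is the $k$-th cluster, and $\bar{\phi}_k$ is its mean in feature space.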
no code implementations • 26 Aug 2020 • Sankha Subhra Mullick, Shounak Datta, Sourish Gunesh Dhekane, Swagatam Das
Indices quantifying the performance of classifiers under class imbalance often suffer from distortions depending on the constitution of the test set or the class-specific classification accuracy, creating difficulties in assessing the merit of the classifier.
no code implementations • 24 Apr 2020 • Arka Ghosh, Sankha Subhra Mullick, Shounak Datta, Swagatam Das, Rammohan Mallipeddi, Asit Kr. Das
Constructing adversarial perturbations for deep neural networks is an important direction of research.
1 code implementation • 10 Jan 2020 • Saptarshi Chakraborty, Debolina Paul, Swagatam Das, Jason Xu
Despite its well-known shortcomings, $k$-means remains one of the most widely used approaches to data clustering.
no code implementations • 19 Sep 2019 • Yi Chen, Aimin Zhou, Swagatam Das
Consequently, we point out a challenge that a direct coding scheme for MOEAs faces on this problem.
no code implementations • 24 Mar 2019 • Saptarshi Chakraborty, Swagatam Das
We propose the Lasso Weighted $k$-means ($LW$-$k$-means) algorithm as a simple yet efficient sparse clustering procedure for high-dimensional data where the number of features ($p$) can be much larger compared to the number of observations ($n$).
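For comparison, the classical sparse $k$-means formulation of Witten and Tibshirani, a well-known point of reference for feature-weighted clustering, maximizes a weighted between-cluster sum of squares under an $\ell_1$ constraint on the feature weights; it is shown only as background and is not the LW-$k$-means objective.

```latex
\max_{\mathcal{C},\, w}\; \sum_{j=1}^{p} w_j\, B_j(\mathcal{C}) \quad \text{s.t.} \quad \|w\|_2 \le 1,\; \|w\|_1 \le s,\; w_j \ge 0,
```

where $B_j(\mathcal{C})$ is the between-cluster sum of squares along feature $j$ and the budget $s$ controls sparsity.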
1 code implementation • ICCV 2019 • Sankha Subhra Mullick, Shounak Datta, Swagatam Das
We propose a three-player adversarial game between a convex generator, a multi-class classifier network, and a real/fake discriminator to perform oversampling in deep learning systems.
no code implementations • 9 Aug 2018 • Avisek Gupta, Shounak Datta, Swagatam Das
This paper presents Entropy $c$-Means (ECM), a method of fuzzy clustering that simultaneously optimizes two contradictory objective functions, resulting in the creation of fuzzy clusters with different levels of fuzziness.
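As orientation for the "two contradictory objective functions" phrasing, standard fuzzy $c$-means minimizes the fuzzified within-cluster dispersion, while entropy-regularized variants trade it off against the partition entropy; the pair below is generic background, and ECM's exact objectives may differ.

```latex
J_m(U, C) = \sum_{i=1}^{n} \sum_{k=1}^{c} u_{ik}^{\,m} \,\|x_i - c_k\|^2, \qquad H(U) = -\sum_{i=1}^{n} \sum_{k=1}^{c} u_{ik} \log u_{ik}, \qquad \text{subject to } \sum_{k=1}^{c} u_{ik} = 1 \;\; \forall i.
```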
1 code implementation • 22 Dec 2017 • Shounak Datta, Sayak Nag, Sankha Subhra Mullick, Swagatam Das
The diversification (generating slightly varying separating discriminators) of Support Vector Machines (SVMs) for boosting has proven to be a challenge due to the strong learning nature of SVMs.
1 code implementation • 31 Aug 2017 • Shounak Datta, Sayak Nag, Swagatam Das
We then demonstrate how this insight can be used to attain a good compromise between the rare and abundant classes without having to resort to cost set tuning, which has long been the norm for imbalanced classification.
no code implementations • 22 Apr 2016 • Shounak Datta, Supritam Bhattacharjee, Swagatam Das
Many real-world clustering problems are plagued by incomplete data characterized by missing or absent features for some or all of the data instances.
1 code implementation • 28 Mar 2016 • Satrajit Mukherjee, Bodhisattwa Prasad Majumder, Aritran Piplai, Swagatam Das
The paper proposes a novel kernelized image segmentation scheme for noisy images that utilizes the concept of the Smallest Univalue Segment Assimilating Nucleus (SUSAN) and incorporates spatial constraints by computing circular colour-map-induced weights.
no code implementations • 14 Oct 2014 • Debdipta Goswami, Chiranjib Saha, Kunal Pal, Swagatam Das
The principle of Swarm Intelligence has recently found widespread application in formation control and automated tracking by automated multi-agent systems.