no code implementations • ECCV 2020 • Titir Dutta, Anurag Singh, Soma Biswas
Extensive experiments and analysis justify the effectiveness of the proposed AMDReg for mitigating the effect of data imbalance for generalization to unseen classes in ZS-SBIR.
no code implementations • 27 Nov 2023 • Debarshi Brahma, Amartya Bhattacharya, Suraj Nagaje Mahadev, Anmol Asati, Vikas Verma, Soma Biswas
In this work, we address this challenging problem by exploring whether out-of-domain data can help improve out-of-context misinformation detection (termed here multi-modal fake news detection) in a desired domain.
1 code implementation • 2 Nov 2023 • Jayateja Kalla, Soma Biswas
This paper introduces a two-stage framework designed to enhance long-tail class incremental learning, enabling the model to progressively learn new classes, while mitigating catastrophic forgetting in the context of long-tailed data distributions.
no code implementations • 2 Sep 2023 • Manogna Sreenivas, Goirik Chakrabarty, Soma Biswas
This method draws inspiration from target clustering techniques and exploits the source classifier for generating pseudo-source samples.
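One plausible reading of this step (a hypothetical sketch, not the paper's exact procedure) is to treat target samples on which the frozen source classifier is highly confident as pseudo-source samples, keeping their predicted labels:

```python
import numpy as np

def select_pseudo_source(probs, threshold=0.9):
    """Hypothetical sketch: keep target samples on which the frozen
    source classifier is highly confident as pseudo-source samples.
    `probs` is an (n, k) matrix of source-classifier softmax outputs."""
    conf = probs.max(axis=1)
    idx = np.flatnonzero(conf >= threshold)
    return idx, probs[idx].argmax(axis=1)  # indices and pseudo-labels

# toy softmax outputs for 4 target samples over 2 classes
probs = np.array([[0.95, 0.05],
                  [0.55, 0.45],
                  [0.08, 0.92],
                  [0.60, 0.40]])
idx, pseudo_labels = select_pseudo_source(probs)
```

The threshold and the confidence-based selection rule are illustrative assumptions; the paper's actual criterion may combine this with target clustering.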
2 code implementations • ICCV 2023 • Subhadeep Roy, Shankhanil Mitra, Soma Biswas, Rajiv Soundararajan
In this work, we introduce two novel quality-relevant auxiliary tasks at the batch and sample levels to enable TTA for blind IQA.
1 code implementation • 5 Jul 2023 • Jayateja Kalla, Soma Biswas
Few-shot class-incremental learning (FSCIL) aims to learn progressively about new classes with very few labeled samples, without forgetting the knowledge of already learnt classes.
no code implementations • 20 Apr 2023 • Goirik Chakrabarty, Manogna Sreenivas, Soma Biswas
Adapting a trained model to perform satisfactorily on continually changing testing domains/environments is an important and challenging task.
1 code implementation • 22 Nov 2022 • Megh Manoj Bhalerao, Anurag Singh, Soma Biswas
Here, we propose a novel framework, Pred&Guide, which leverages the inconsistency between the predicted and the actual class labels of the few labeled target examples to effectively guide the domain adaptation in a semi-supervised setting.
Tasks: Semi-supervised Domain Adaptation, Unsupervised Domain Adaptation
1 code implementation • 19 Aug 2022 • Soumava Paul, Titir Dutta, Aheli Saha, Abhishek Samanta, Soma Biswas
Image retrieval under generalized test scenarios has gained significant momentum in literature, and the recently proposed protocol of Universal Cross-domain Retrieval is a pioneer in this direction.
no code implementations • 21 Jul 2022 • K J Joseph, Sujoy Paul, Gaurav Aggarwal, Soma Biswas, Piyush Rai, Kai Han, Vineeth N Balasubramanian
Inspired by this, we identify and formulate a new, pragmatic problem setting of NCDwF: Novel Class Discovery without Forgetting, which tasks a machine learning model to incrementally discover novel categories of instances from unlabeled data, while maintaining its performance on the previously seen categories.
1 code implementation • 22 Apr 2022 • K J Joseph, Sujoy Paul, Gaurav Aggarwal, Soma Biswas, Piyush Rai, Kai Han, Vineeth N Balasubramanian
Novel Class Discovery (NCD) is a learning paradigm, where a machine learning model is tasked to semantically group instances from unlabeled data, by utilizing labeled instances from a disjoint set of classes.
no code implementations • 4 Dec 2021 • Ansh Khurana, Sujoy Paul, Piyush Rai, Soma Biswas, Gaurav Aggarwal
In Test-time Adaptation (TTA), given a source model, the goal is to adapt it to make better predictions for test instances from a different distribution than the source.
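A widely used TTA baseline that illustrates this setting is entropy minimization in the spirit of Tent (Wang et al.); the sketch below is that generic baseline, not this paper's method, and adapts only the classifier bias on an unlabeled test batch:

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def mean_entropy(p):
    return float(-(p * np.log(p + 1e-12)).sum(axis=1).mean())

def tent_adapt_bias(X, W, b, lr=0.1, steps=20):
    """Generic entropy-minimization TTA (Tent-style baseline, not this
    paper's method): adapt the classifier bias b so that predictions
    on the unlabeled test batch X become more confident."""
    for _ in range(steps):
        p = softmax(X @ W + b)
        H = -(p * np.log(p + 1e-12)).sum(axis=1, keepdims=True)
        grad_logits = -p * (np.log(p + 1e-12) + H)  # dH/dlogits
        b = b - lr * grad_logits.mean(axis=0)
    return b

# toy test batch with a frozen linear source classifier (W, b)
rng = np.random.default_rng(0)
Xt = rng.standard_normal((16, 4))
W = rng.standard_normal((4, 3))
b = np.zeros(3)
before = mean_entropy(softmax(Xt @ W + b))
b_new = tent_adapt_bias(Xt, W, b)
after = mean_entropy(softmax(Xt @ W + b_new))
```

Adapting only normalization or bias parameters, rather than the full network, is the usual way such methods keep test-time updates stable.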
2 code implementations • ICCV 2021 • Soumava Paul, Titir Dutta, Soma Biswas
Towards that goal, we propose SnMpNet (Semantic Neighbourhood and Mixture Prediction Network), which incorporates two novel losses to account for the unseen classes and domains encountered during testing.
no code implementations • 14 Sep 2020 • Ayyappa Kumar Pambala, Titir Dutta, Soma Biswas
In addition, we propose to use the well-established technique of ridge regression, not only to bring in class-level semantic information, but also to effectively utilise the information available from the multiple images present in the training data for prototype computation.
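The ridge regression step admits a closed-form solution; the minimal sketch below maps visual features to class-semantic vectors, with all shapes and names being illustrative assumptions rather than the paper's exact formulation:

```python
import numpy as np

def ridge_projection(X, S, lam=0.1):
    """Closed-form ridge regression W = (X^T X + lam I)^{-1} X^T S,
    mapping visual features X (n x d) to class semantics S (n x k).
    Illustrative sketch; the paper's exact variables may differ."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ S)

# toy usage: 6 training images, 4-dim features, 2-dim semantic vectors
rng = np.random.default_rng(0)
X = rng.standard_normal((6, 4))
S = rng.standard_normal((6, 2))
W = ridge_projection(X, S, lam=0.1)  # d x k projection
```

Because the solution aggregates all training images of each class through the normal equations, it naturally pools multi-image information when computing prototypes.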
no code implementations • 3 Feb 2020 • Devraj Mandal, Soma Biswas
For the second stage, we propose both non-deep and deep architectures to learn the hash functions effectively.
no code implementations • 13 Oct 2019 • Devraj Mandal, Shrisha Bharadwaj, Soma Biswas
The major driving force behind the immense success of deep learning models is the availability of large datasets along with their clean labels.
no code implementations • 27 May 2019 • Devraj Mandal, Pramod Rao, Soma Biswas
In this work, we propose a novel framework in a semi-supervised setting, which can predict the labels of the unlabeled data using complementary information from different modalities.
no code implementations • 11 May 2019 • Ayyappa Kumar Pambala, Titir Dutta, Soma Biswas
Generative models have achieved state-of-the-art performance for the zero-shot learning problem, but they require re-training the classifier every time a new object category is encountered.
no code implementations • 11 May 2019 • Supritam Bhattacharjee, Devraj Mandal, Soma Biswas
Our model which is trained to reveal the constituent classes can then be used to determine whether the sample is novel or not.
no code implementations • 4 Dec 2018 • Devraj Mandal, Pramod Rao, Soma Biswas
Due to the abundance of data from multiple modalities, cross-modal retrieval tasks with image-text, audio-image, etc.
no code implementations • CVPR 2018 • Yashas Annadani, Soma Biswas
We devise objective functions to preserve these relations in the embedding space, thereby inducing semantic structure in the embedding space.
no code implementations • CVPR 2017 • Devraj Mandal, Kunal N. Chaudhury, Soma Biswas
Different scenarios of cross-modal matching are possible, for example, data from the different modalities can be associated with a single label or multiple labels, and in addition may or may not have one-to-one correspondence.
no code implementations • 1 Nov 2016 • Yashas Annadani, D L Rakshith, Soma Biswas
This is used to compute the sparse coefficients of the input action sequence, which is divided into overlapping windows; each window then yields a probability score for each action class.
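The windowed coding-and-scoring pipeline can be sketched as follows. For simplicity this sketch uses ridge-regularized coding in place of true sparse coding, and scores classes by reconstruction residual (SRC-style); the dictionary, window length, and stride are all illustrative assumptions:

```python
import numpy as np

def window_scores(x, D, labels, lam=0.1):
    """Code one window x (length d) over dictionary D (d x m) whose
    atoms carry class labels, then score classes by per-class
    reconstruction residual.  Ridge coding stands in here for the
    paper's sparse coding; all names are illustrative."""
    m = D.shape[1]
    a = np.linalg.solve(D.T @ D + lam * np.eye(m), D.T @ x)
    classes = np.unique(labels)
    resid = np.array([np.linalg.norm(x - D[:, labels == c] @ a[labels == c])
                      for c in classes])
    scores = np.exp(-resid)           # lower residual -> higher score
    return scores / scores.sum()      # probability per class

# toy sequence split into overlapping windows of length 8, stride 4
rng = np.random.default_rng(1)
D = rng.standard_normal((8, 10))          # 10 atoms, 5 per class
labels = np.array([0] * 5 + [1] * 5)
seq = rng.standard_normal(32)
windows = [seq[i:i + 8] for i in range(0, len(seq) - 8 + 1, 4)]
probs = np.array([window_scores(w, D, labels) for w in windows])
```

Per-window probabilities can then be pooled (e.g. averaged) over the sequence to classify the full action.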
no code implementations • ICCV 2015 • Soubhik Sanyal, Sivaram Prasad Mudunuri, Soma Biswas
The DPFD of images taken from different viewpoints can be directly compared for matching.
no code implementations • 22 Sep 2014 • Adway Mitra, Soma Biswas, Chiranjib Bhattacharyya
The task of Entity Discovery in videos can be naturally posed as tracklet clustering.