no code implementations • 3 Feb 2020 • Devraj Mandal, Soma Biswas
For the second stage, we propose both non-deep and deep architectures to learn the hash functions effectively.
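As an illustration of what "learning a hash function" means in practice (the papers' actual non-deep and deep architectures are not reproduced here), a classic non-deep baseline projects features with a learned linear map and binarizes with the sign function. The PCA-style projection below is a hypothetical stand-in for the learned objective:

```python
import numpy as np

rng = np.random.default_rng(0)

def learn_hash_projection(features, n_bits):
    """Hypothetical helper: top principal directions as a stand-in
    for a learned hashing projection (ITQ-like initialization)."""
    centered = features - features.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return vt[:n_bits].T  # shape: (dim, n_bits)

def hash_codes(features, projection):
    # sign(x @ W), mapped to {0, 1} binary codes
    return (features @ projection > 0).astype(np.uint8)

features = rng.normal(size=(100, 16))
W = learn_hash_projection(features, n_bits=8)
codes = hash_codes(features, W)  # 8-bit binary code per sample
```

Retrieval then reduces to comparing Hamming distances between these compact codes, which is the efficiency motivation behind hashing-based methods.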
no code implementations • 13 Oct 2019 • Devraj Mandal, Shrisha Bharadwaj, Soma Biswas
The major driving force behind the immense success of deep learning models is the availability of large datasets along with their clean labels.
no code implementations • 27 May 2019 • Devraj Mandal, Pramod Rao, Soma Biswas
In this work, we propose a novel framework in a semi-supervised setting, which can predict the labels of the unlabeled data using complementary information from different modalities.
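The general idea of using complementary information across modalities (this sketch is illustrative, not the paper's framework) can be seen by fusing per-modality class probabilities: when one modality's classifier is uncertain about an unlabeled sample, a confident prediction from another modality can resolve the label.

```python
import numpy as np

def fuse_modalities(prob_image, prob_text):
    """Average class probabilities from two modalities and
    pick the argmax as the predicted label for unlabeled data."""
    fused = (prob_image + prob_text) / 2.0
    return fused.argmax(axis=-1)

# Image model is unsure between classes 0 and 1;
# text model is confident in class 1.
p_img = np.array([[0.5, 0.5, 0.0]])
p_txt = np.array([[0.1, 0.8, 0.1]])
labels = fuse_modalities(p_img, p_txt)  # -> class 1
```

Averaging is only the simplest fusion rule; a real semi-supervised pipeline would weight modalities by reliability and iterate pseudo-labeling with retraining.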
no code implementations • 11 May 2019 • Supritam Bhattacharjee, Devraj Mandal, Soma Biswas
Our model, which is trained to reveal the constituent classes, can then be used to determine whether a sample is novel or not.
1 code implementation • CVPR 2019 • Devraj Mandal, Sanath Narayan, Saikumar Dwivedi, Vikram Gupta, Shuaib Ahmed, Fahad Shahbaz Khan, Ling Shao
We introduce an out-of-distribution detector that determines whether the video features belong to a seen or unseen action category.
Tasks: Action Recognition in Videos • Out-of-Distribution Detection • +2
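One common way to realize such a seen-vs-unseen test (a hedged sketch; not necessarily the paper's detector) is to threshold the maximum softmax probability of a classifier trained on seen categories: a flat, low-confidence distribution suggests the feature comes from an unseen action.

```python
import numpy as np

def softmax(logits):
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def is_out_of_distribution(logits, threshold=0.5):
    """Flag a sample as unseen when its max class probability
    falls below a confidence threshold."""
    return softmax(logits).max(axis=-1) < threshold

confident = np.array([[6.0, 0.0, 0.0]])   # peaked -> seen category
uncertain = np.array([[0.1, 0.0, 0.05]])  # flat -> possibly unseen
```

The threshold here is a free parameter; in practice it would be calibrated on held-out seen-class data.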
no code implementations • 4 Dec 2018 • Devraj Mandal, Pramod Rao, Soma Biswas
Due to the abundance of data from multiple modalities, cross-modal retrieval tasks with image-text, audio-image, etc.
no code implementations • CVPR 2017 • Devraj Mandal, Kunal N. Chaudhury, Soma Biswas
Different scenarios of cross-modal matching are possible, for example, data from the different modalities can be associated with a single label or multiple labels, and in addition may or may not have one-to-one correspondence.