2 code implementations • 30 Nov 2023 • Sadeep Jayasumana, Srikumar Ramalingam, Andreas Veit, Daniel Glasner, Ayan Chakrabarti, Sanjiv Kumar
It is an unbiased estimator that does not make any assumptions on the probability distribution of the embeddings and is sample efficient.
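The entry above describes an estimator over embeddings that is unbiased, makes no distributional assumptions, and is sample efficient. The classical unbiased estimator of squared MMD (Gretton et al., 2012) has exactly these properties; the sketch below is a plausible illustration of that building block, not necessarily the paper's exact estimator, and the function names and RBF kernel choice are assumptions.

```python
import numpy as np

def rbf_kernel(a, b, gamma=1.0):
    # Pairwise RBF kernel matrix between the rows of a and b.
    sq = np.sum(a**2, 1)[:, None] + np.sum(b**2, 1)[None, :] - 2 * a @ b.T
    return np.exp(-gamma * sq)

def mmd2_unbiased(x, y, gamma=1.0):
    """Unbiased estimate of squared MMD between samples x and y.

    Excluding the diagonal terms from the within-sample sums is what
    makes the estimator unbiased; no parametric assumption is made
    about the distributions of x or y."""
    m, n = len(x), len(y)
    kxx = rbf_kernel(x, x, gamma)
    kyy = rbf_kernel(y, y, gamma)
    kxy = rbf_kernel(x, y, gamma)
    term_x = (kxx.sum() - np.trace(kxx)) / (m * (m - 1))
    term_y = (kyy.sum() - np.trace(kyy)) / (n * (n - 1))
    return term_x + term_y - 2 * kxy.mean()
```

The estimate is near zero when the two samples come from the same distribution and grows as the distributions separate.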
no code implementations • 14 Aug 2023 • Sadeep Jayasumana, Daniel Glasner, Srikumar Ramalingam, Andreas Veit, Ayan Chakrabarti, Sanjiv Kumar
Modern text-to-image generation models produce high-quality images that are both photorealistic and faithful to the text prompts.
no code implementations • 27 Jan 2023 • Seungyeon Kim, Ankit Singh Rawat, Manzil Zaheer, Sadeep Jayasumana, Veeranjaneyulu Sadhanala, Wittawat Jitkrittum, Aditya Krishna Menon, Rob Fergus, Sanjiv Kumar
Large neural models (such as Transformers) achieve state-of-the-art performance for information retrieval (IR).
no code implementations • 28 Oct 2022 • Arslan Chaudhry, Aditya Krishna Menon, Andreas Veit, Sadeep Jayasumana, Srikumar Ramalingam, Sanjiv Kumar
Towards this, we study two questions: (1) how does the Mixup loss that enforces linearity in the last network layer propagate the linearity to the earlier layers?
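For context on the question above: Mixup trains on convex combinations of input pairs and their labels, which is what enforces linearity at the output layer. A minimal sketch of the standard Mixup batch construction (the function name and the alpha default are illustrative, not from the paper):

```python
import numpy as np

def mixup_batch(x, y_onehot, alpha=0.2, rng=None):
    # Mixup: replace each example with a convex combination of itself
    # and a randomly paired example, mixing the labels identically.
    if rng is None:
        rng = np.random.default_rng()
    lam = rng.beta(alpha, alpha)          # mixing coefficient in [0, 1]
    perm = rng.permutation(len(x))        # random pairing of examples
    x_mix = lam * x + (1 - lam) * x[perm]
    y_mix = lam * y_onehot + (1 - lam) * y_onehot[perm]
    return x_mix, y_mix, lam
```

Training on (x_mix, y_mix) is what makes the network's last-layer outputs approximately linear along segments between training points.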
no code implementations • 29 Sep 2021 • Sadeep Jayasumana, Srikumar Ramalingam, Sanjiv Kumar
We investigate the possibility of using the embeddings produced by a lightweight network more effectively with a nonlinear classification layer.
no code implementations • 29 Sep 2021 • Aditya Krishna Menon, Sadeep Jayasumana, Seungyeon Kim, Ankit Singh Rawat, Sashank J. Reddi, Sanjiv Kumar
Transformer-based models such as BERT have proven successful in information retrieval, which seeks to identify relevant documents for a given query.
no code implementations • 29 Sep 2021 • Srikumar Ramalingam, Daniel Glasner, Kaushal Patel, Raviteja Vemulapalli, Sadeep Jayasumana, Sanjiv Kumar
Deep learning has yielded extraordinary results in vision and natural language processing, but this achievement comes at a cost.
no code implementations • 12 May 2021 • Ankit Singh Rawat, Aditya Krishna Menon, Wittawat Jitkrittum, Sadeep Jayasumana, Felix X. Yu, Sashank Reddi, Sanjiv Kumar
Negative sampling schemes enable efficient training given a large number of classes, by offering a means to approximate a computationally expensive loss function that takes all labels into account.
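A common concrete instance of such a scheme is sampled softmax: score the true class plus a small uniform sample of negatives, with a log-correction so the estimate targets the full-softmax loss. The sketch below illustrates the general idea under those assumptions; the function names are illustrative and not taken from the paper.

```python
import numpy as np

def full_softmax_loss(logits, label):
    # Exact cross-entropy over all classes (numerically stabilized).
    z = logits - logits.max()
    return -z[label] + np.log(np.exp(z).sum())

def sampled_softmax_loss(logits, label, num_neg, rng):
    """Approximate the full loss using the positive class plus a
    uniform sample of negatives without replacement. Each negative's
    logit is corrected by log((n-1)/num_neg), since it stands in for
    (n-1)/num_neg classes on average."""
    n = len(logits)
    neg = rng.choice([c for c in range(n) if c != label],
                     size=num_neg, replace=False)
    corr = np.log((n - 1) / num_neg)
    z = np.concatenate(([logits[label]], logits[neg] + corr))
    z = z - z.max()
    return -z[0] + np.log(np.exp(z).sum())
```

With num_neg equal to the full number of negatives, the correction vanishes and the sampled loss coincides with the exact one; smaller num_neg trades accuracy for compute.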
no code implementations • 26 Apr 2021 • Srikumar Ramalingam, Daniel Glasner, Kaushal Patel, Raviteja Vemulapalli, Sadeep Jayasumana, Sanjiv Kumar
Deep learning has yielded extraordinary results in vision and natural language processing, but this achievement comes at a cost.
no code implementations • 8 Dec 2020 • Sadeep Jayasumana, Srikumar Ramalingam, Sanjiv Kumar
We propose a kernelized classification layer for deep networks.
3 code implementations • ICLR 2021 • Aditya Krishna Menon, Sadeep Jayasumana, Ankit Singh Rawat, Himanshu Jain, Andreas Veit, Sanjiv Kumar
Real-world classification problems typically exhibit an imbalanced or long-tailed label distribution, wherein many labels are associated with only a few samples.
Ranked #49 on Long-tail Learning on ImageNet-LT
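One remedy studied in this line of work is logit adjustment: offset each class's logit by a multiple of the log class prior, so that head classes do not drown out tail classes at prediction time. A minimal post-hoc sketch (the function name and the default tau are illustrative assumptions):

```python
import numpy as np

def logit_adjusted_predict(logits, class_priors, tau=1.0):
    """Post-hoc logit adjustment for long-tailed classification:
    subtract tau * log(prior) from each class logit before the argmax,
    boosting classes that are rare in the training distribution."""
    return np.argmax(logits - tau * np.log(class_priors), axis=-1)
```

For example, with priors of 0.99 and 0.01 and nearly tied logits, the plain argmax picks the head class while the adjusted argmax picks the tail class.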
1 code implementation • 11 Dec 2019 • Sadeep Jayasumana, Kanchana Ranasinghe, Mayuka Jayawardhana, Sahan Liyanaarachchi, Harsha Ranasinghe
To tackle this problem, we propose a CRF model, named Bipartite CRF or BCRF, with two types of random variables for semantic and instance labels.
no code implementations • 3 Dec 2015 • Saumya Jetley, Bernardino Romera-Paredes, Sadeep Jayasumana, Philip Torr
Recent works on zero-shot learning use side information, such as visual attributes or natural language semantics, to define relations between the output visual classes, and then use these relations to draw inferences on new, unseen classes at test time.
1 code implementation • 25 Nov 2015 • Anurag Arnab, Sadeep Jayasumana, Shuai Zheng, Philip Torr
Recent deep learning approaches have incorporated CRFs into Convolutional Neural Networks (CNNs), with some even training the CRF end-to-end with the rest of the network.
Ranked #56 on Semantic Segmentation on PASCAL Context
6 code implementations • ICCV 2015 • Shuai Zheng, Sadeep Jayasumana, Bernardino Romera-Paredes, Vibhav Vineet, Zhizhong Su, Dalong Du, Chang Huang, Philip H. S. Torr
Pixel-level labelling tasks, such as semantic segmentation, play a central role in image understanding.
Ranked #36 on Semantic Segmentation on PASCAL VOC 2012 test
no code implementations • CVPR 2014 • Sadeep Jayasumana, Richard Hartley, Mathieu Salzmann, Hongdong Li, Mehrtash Harandi
We tackle the problem of optimizing over all possible positive definite radial kernels on Riemannian manifolds for classification.
no code implementations • CVPR 2013 • Sadeep Jayasumana, Richard Hartley, Mathieu Salzmann, Hongdong Li, Mehrtash Harandi
To encode the geometry of the manifold in the mapping, we introduce a family of provably positive definite kernels on the Riemannian manifold of SPD matrices.
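A canonical member of such a family is the log-Euclidean Gaussian kernel, k(X, Y) = exp(-gamma * ||log X - log Y||_F^2), where log is the matrix logarithm. The sketch below illustrates that kernel under stated assumptions (the eigendecomposition-based matrix log and the parameter names are choices made here for self-containment):

```python
import numpy as np

def spd_log(M):
    # Matrix logarithm of a symmetric positive definite matrix,
    # computed via its eigendecomposition (eigenvalues are positive).
    w, V = np.linalg.eigh(M)
    return V @ np.diag(np.log(w)) @ V.T

def log_euclidean_rbf(X, Y, gamma=1.0):
    """Gaussian kernel under the log-Euclidean metric on SPD matrices:
    k(X, Y) = exp(-gamma * ||log(X) - log(Y)||_F^2). Mapping each
    matrix through the log flattens the manifold, so the Gram matrix
    is positive semi-definite for any gamma > 0."""
    d = spd_log(X) - spd_log(Y)
    return float(np.exp(-gamma * np.linalg.norm(d, 'fro') ** 2))
```

Because the kernel is a Gaussian on the (vectorized) matrix logarithms, standard kernel machines such as SVMs can be applied directly to SPD-valued data like covariance descriptors.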
no code implementations • 13 Dec 2014 • Sadeep Jayasumana, Mathieu Salzmann, Hongdong Li, Mehrtash Harandi
We propose a framework for 2D shape analysis using positive definite kernels defined on Kendall's shape manifold.
no code implementations • 30 Nov 2014 • Sadeep Jayasumana, Richard Hartley, Mathieu Salzmann, Hongdong Li, Mehrtash Harandi
We then use the proposed framework to identify positive definite kernels on two specific manifolds commonly encountered in computer vision: the Riemannian manifold of symmetric positive definite matrices and the Grassmann manifold, i.e., the Riemannian manifold of linear subspaces of a Euclidean space.
no code implementations • 4 Jul 2014 • Mehrtash T. Harandi, Mathieu Salzmann, Sadeep Jayasumana, Richard Hartley, Hongdong Li
Modeling videos and image-sets as linear subspaces has proven beneficial for many visual recognition tasks.