Search Results for author: Sadeep Jayasumana

Found 21 papers, 5 papers with code

Rethinking FID: Towards a Better Evaluation Metric for Image Generation

3 code implementations CVPR 2024 Sadeep Jayasumana, Srikumar Ramalingam, Andreas Veit, Daniel Glasner, Ayan Chakrabarti, Sanjiv Kumar

It is an unbiased estimator that does not make any assumptions on the probability distribution of the embeddings and is sample efficient.

Image Generation
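The unbiased, distribution-free estimator the abstract refers to is an MMD-style distance between sets of image embeddings. A minimal sketch of an unbiased squared-MMD estimate between two embedding sets; the Gaussian RBF kernel and bandwidth below are illustrative stand-ins, not the paper's exact configuration:

```python
import numpy as np

def rbf_kernel(x, y, sigma=10.0):
    # Pairwise Gaussian RBF kernel between rows of x and rows of y.
    d2 = ((x[:, None, :] - y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma**2))

def mmd_unbiased(x, y, sigma=10.0):
    """Unbiased estimate of squared MMD between samples x and y.

    Excluding the diagonal terms from the within-set sums is what
    makes the estimator unbiased; no parametric assumption is made
    about the embedding distributions.
    """
    m, n = len(x), len(y)
    kxx = rbf_kernel(x, x, sigma)
    kyy = rbf_kernel(y, y, sigma)
    kxy = rbf_kernel(x, y, sigma)
    sum_xx = (kxx.sum() - np.trace(kxx)) / (m * (m - 1))
    sum_yy = (kyy.sum() - np.trace(kyy)) / (n * (n - 1))
    return sum_xx + sum_yy - 2 * kxy.mean()

rng = np.random.default_rng(0)
same = mmd_unbiased(rng.normal(size=(200, 8)), rng.normal(size=(200, 8)))
shifted = mmd_unbiased(rng.normal(size=(200, 8)),
                       rng.normal(loc=2.0, size=(200, 8)))
# MMD is near zero for matching distributions, larger for shifted ones.
```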

When does mixup promote local linearity in learned representations?

no code implementations28 Oct 2022 Arslan Chaudhry, Aditya Krishna Menon, Andreas Veit, Sadeep Jayasumana, Srikumar Ramalingam, Sanjiv Kumar

Towards this, we study two questions: (1) how does the Mixup loss that enforces linearity in the last network layer propagate the linearity to the earlier layers?

Representation Learning
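The Mixup loss studied in the abstract enforces linearity by training on convex combinations of inputs and labels. A minimal sketch of the standard mixup batch construction, where the same mixing coefficient is applied to inputs and one-hot labels (the Beta(α, α) sampling is standard mixup; the α value is an arbitrary choice):

```python
import numpy as np

rng = np.random.default_rng(0)

def mixup_batch(x, y_onehot, alpha=0.2):
    # Convex combination of a batch with a shuffled copy of itself;
    # the same lambda mixes both the inputs and the label vectors,
    # which is what enforces linearity at the output layer.
    lam = rng.beta(alpha, alpha)
    perm = rng.permutation(len(x))
    x_mix = lam * x + (1 - lam) * x[perm]
    y_mix = lam * y_onehot + (1 - lam) * y_onehot[perm]
    return x_mix, y_mix

x = rng.normal(size=(4, 3))
y = np.eye(2)[[0, 1, 0, 1]]
x_mix, y_mix = mixup_batch(x, y)
# Mixed labels remain valid distributions over the two classes.
```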

Less data is more: Selecting informative and diverse subsets with balancing constraints

no code implementations29 Sep 2021 Srikumar Ramalingam, Daniel Glasner, Kaushal Patel, Raviteja Vemulapalli, Sadeep Jayasumana, Sanjiv Kumar

Deep learning has yielded extraordinary results in vision and natural language processing, but this achievement comes at a cost.


In defense of dual-encoders for neural ranking

no code implementations29 Sep 2021 Aditya Krishna Menon, Sadeep Jayasumana, Seungyeon Kim, Ankit Singh Rawat, Sashank J. Reddi, Sanjiv Kumar

Transformer-based models such as BERT have proven successful in information retrieval problems, which seek to identify relevant documents for a given query.

Information Retrieval · Natural Questions +1
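The dual-encoder setup the abstract discusses scores query-document pairs via the inner product of independently encoded representations, which is what makes offline document indexing possible. A toy sketch, with random projections standing in for the BERT-style query and document towers (the projection matrices are illustrative placeholders, not a real model):

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in encoders: in practice these would be two transformer towers.
W_query = rng.normal(size=(128, 64))
W_doc = rng.normal(size=(128, 64))

def encode(features, weights):
    # Project and L2-normalize, so scores are cosine similarities.
    v = features @ weights
    return v / np.linalg.norm(v, axis=-1, keepdims=True)

def rank_documents(query_feat, doc_feats):
    # Scores are dot products, so documents can be pre-encoded offline
    # and retrieved with fast nearest-neighbour search.
    q = encode(query_feat[None, :], W_query)
    d = encode(doc_feats, W_doc)
    scores = (d @ q.T).ravel()
    return np.argsort(-scores), scores

query = rng.normal(size=128)
docs = rng.normal(size=(5, 128))
order, scores = rank_documents(query, docs)
```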

Model-Efficient Deep Learning with Kernelized Classification

no code implementations29 Sep 2021 Sadeep Jayasumana, Srikumar Ramalingam, Sanjiv Kumar

We investigate the possibility of using the embeddings produced by a lightweight network more effectively with a nonlinear classification layer.


Disentangling Sampling and Labeling Bias for Learning in Large-Output Spaces

no code implementations12 May 2021 Ankit Singh Rawat, Aditya Krishna Menon, Wittawat Jitkrittum, Sadeep Jayasumana, Felix X. Yu, Sashank Reddi, Sanjiv Kumar

Negative sampling schemes enable efficient training given a large number of classes, by offering a means to approximate a computationally expensive loss function that takes all labels into account.
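A hedged sketch of one common negative-sampling scheme in this spirit: sampled softmax with a log q(i) correction term for the sampling bias. The correction form and the uniform sampling distribution below are illustrative of the general technique, not necessarily the paper's exact proposal:

```python
import numpy as np

rng = np.random.default_rng(0)

def sampled_softmax_loss(pos_logit, all_logits, sample_probs, k=5):
    # Approximate the full softmax over many classes by scoring only
    # the positive plus k sampled negatives; subtracting log(k * q(i))
    # corrects for the bias introduced by the sampling distribution q.
    neg_idx = rng.choice(len(all_logits), size=k, replace=False,
                         p=sample_probs)
    neg_logits = all_logits[neg_idx] - np.log(k * sample_probs[neg_idx])
    logits = np.concatenate([[pos_logit], neg_logits])
    return -pos_logit + np.log(np.exp(logits).sum())

n = 1000                      # large output space
logits = rng.normal(size=n)
q = np.full(n, 1.0 / n)       # uniform sampling over classes
loss = sampled_softmax_loss(logits[0], logits, q)
```

Only k + 1 logits are exponentiated per example, instead of all n, which is the efficiency gain the abstract describes.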


Long-tail learning via logit adjustment

3 code implementations ICLR 2021 Aditya Krishna Menon, Sadeep Jayasumana, Ankit Singh Rawat, Himanshu Jain, Andreas Veit, Sanjiv Kumar

Real-world classification problems typically exhibit an imbalanced or long-tailed label distribution, wherein many labels are associated with only a few samples.

Long-tail Learning
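The paper's post-hoc variant of logit adjustment subtracts the scaled log class prior from each logit at prediction time, boosting tail classes relative to a model trained on the imbalanced data. A minimal sketch with illustrative class counts and the default τ = 1:

```python
import numpy as np

def adjust_logits(logits, class_counts, tau=1.0):
    """Post-hoc logit adjustment for long-tailed classification.

    Subtracting tau * log(prior) from each logit counteracts the
    head-class bias a model picks up from an imbalanced label
    distribution.
    """
    priors = class_counts / class_counts.sum()
    return logits - tau * np.log(priors)

counts = np.array([900.0, 90.0, 10.0])   # head, mid, tail class
logits = np.array([2.0, 1.9, 1.8])       # raw scores favour the head
adjusted = adjust_logits(logits, counts)
pred = int(np.argmax(adjusted))          # tail class wins after adjustment
```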

Bipartite Conditional Random Fields for Panoptic Segmentation

1 code implementation11 Dec 2019 Sadeep Jayasumana, Kanchana Ranasinghe, Mayuka Jayawardhana, Sahan Liyanaarachchi, Harsha Ranasinghe

To tackle this problem, we propose a CRF model, named Bipartite CRF or BCRF, with two types of random variables for semantic and instance labels.

Panoptic Segmentation · Segmentation

Prototypical Priors: From Improving Classification to Zero-Shot Learning

no code implementations3 Dec 2015 Saumya Jetley, Bernardino Romera-Paredes, Sadeep Jayasumana, Philip Torr

Recent works on zero-shot learning make use of side information such as visual attributes or natural language semantics to define the relations between output visual classes and then use these relationships to draw inference on new unseen classes at test time.

Classification · General Classification +1

Higher Order Conditional Random Fields in Deep Neural Networks

1 code implementation25 Nov 2015 Anurag Arnab, Sadeep Jayasumana, Shuai Zheng, Philip Torr

Recent deep learning approaches have incorporated CRFs into Convolutional Neural Networks (CNNs), with some even training the CRF end-to-end with the rest of the network.

Segmentation · Semantic Segmentation +1

Kernel Methods on the Riemannian Manifold of Symmetric Positive Definite Matrices

no code implementations CVPR 2013 Sadeep Jayasumana, Richard Hartley, Mathieu Salzmann, Hongdong Li, Mehrtash Harandi

To encode the geometry of the manifold in the mapping, we introduce a family of provably positive definite kernels on the Riemannian manifold of SPD matrices.

Motion Segmentation · Pedestrian Detection +2
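A Gaussian RBF kernel of the kind described in the abstract can be sketched using the log-Euclidean distance between SPD matrices; the bandwidth γ and the 2×2 example matrices below are illustrative:

```python
import numpy as np

def spd_log(X):
    # Matrix logarithm of a symmetric positive definite matrix via
    # its eigen-decomposition (always real for SPD inputs).
    w, V = np.linalg.eigh(X)
    return (V * np.log(w)) @ V.T

def log_euclidean_dist(X, Y):
    # Distance under the log-Euclidean metric: Frobenius distance
    # between the matrix logarithms.
    return np.linalg.norm(spd_log(X) - spd_log(Y), ord="fro")

def spd_rbf_kernel(X, Y, gamma=0.5):
    # Gaussian RBF on SPD matrices built on the log-Euclidean metric,
    # one of the provably positive definite constructions in this line
    # of work.
    return np.exp(-gamma * log_euclidean_dist(X, Y) ** 2)

A = np.array([[2.0, 0.5], [0.5, 1.0]])
B = np.array([[1.0, 0.2], [0.2, 3.0]])
k_self = spd_rbf_kernel(A, A)    # 1.0: zero distance to itself
k_cross = spd_rbf_kernel(A, B)
```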

Optimizing Over Radial Kernels on Compact Manifolds

no code implementations CVPR 2014 Sadeep Jayasumana, Richard Hartley, Mathieu Salzmann, Hongdong Li, Mehrtash Harandi

We tackle the problem of optimizing over all possible positive definite radial kernels on Riemannian manifolds for classification.

General Classification

A Framework for Shape Analysis via Hilbert Space Embedding

no code implementations13 Dec 2014 Sadeep Jayasumana, Mathieu Salzmann, Hongdong Li, Mehrtash Harandi

We propose a framework for 2D shape analysis using positive definite kernels defined on Kendall's shape manifold.

Clustering · General Classification +1

Kernel Methods on Riemannian Manifolds with Gaussian RBF Kernels

no code implementations30 Nov 2014 Sadeep Jayasumana, Richard Hartley, Mathieu Salzmann, Hongdong Li, Mehrtash Harandi

We then use the proposed framework to identify positive definite kernels on two specific manifolds commonly encountered in computer vision: the Riemannian manifold of symmetric positive definite matrices and the Grassmann manifold, i.e., the Riemannian manifold of linear subspaces of a Euclidean space.
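For the Grassmann manifold mentioned above, a standard construction in this line of work builds a Gaussian RBF on the projection metric between subspaces; a minimal sketch (β and the subspace dimensions below are illustrative):

```python
import numpy as np

def projection_dist(U1, U2):
    # Projection metric on the Grassmann manifold: Frobenius distance
    # between the orthogonal projectors onto the two subspaces.
    P1 = U1 @ U1.T
    P2 = U2 @ U2.T
    return np.linalg.norm(P1 - P2, ord="fro")

def grassmann_rbf(U1, U2, beta=1.0):
    # Gaussian RBF built on the projection metric between subspaces.
    return np.exp(-beta * projection_dist(U1, U2) ** 2)

rng = np.random.default_rng(0)
# Orthonormal bases for two 3-dimensional subspaces of R^10.
U1, _ = np.linalg.qr(rng.normal(size=(10, 3)))
U2, _ = np.linalg.qr(rng.normal(size=(10, 3)))
k = grassmann_rbf(U1, U2)
```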

Expanding the Family of Grassmannian Kernels: An Embedding Perspective

no code implementations4 Jul 2014 Mehrtash T. Harandi, Mathieu Salzmann, Sadeep Jayasumana, Richard Hartley, Hongdong Li

Modeling videos and image-sets as linear subspaces has proven beneficial for many visual recognition tasks.

