
no code implementations • 25 Apr 2022 • Andi Han, Bamdev Mishra, Pratik Jawanpuria, Pawan Kumar, Junbin Gao

In this paper, we study the min-max optimization problems on Riemannian manifolds.

1 code implementation • 30 Jan 2022 • Andi Han, Bamdev Mishra, Pratik Jawanpuria, Junbin Gao

Optimal transport (OT) has gained popularity in various fields of application.

1 code implementation • 20 Oct 2021 • Andi Han, Bamdev Mishra, Pratik Jawanpuria, Junbin Gao

This paper proposes a generalized Bures-Wasserstein (BW) Riemannian geometry for the manifold of symmetric positive definite matrices.

1 code implementation • NeurIPS 2021 • Andi Han, Bamdev Mishra, Pratik Jawanpuria, Junbin Gao

We build on this to show that the BW metric is a more suitable and robust choice for several Riemannian optimization problems over ill-conditioned SPD matrices.
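The BW distance between SPD matrices A and B has the standard closed form d_BW(A, B)^2 = tr(A) + tr(B) - 2 tr((A^{1/2} B A^{1/2})^{1/2}). A minimal numpy/scipy sketch of that formula (not code from the paper, and the function name is ours):

```python
import numpy as np
from scipy.linalg import sqrtm

def bw_distance(A, B):
    """Bures-Wasserstein distance between SPD matrices A and B."""
    sA = sqrtm(A)
    cross = sqrtm(sA @ B @ sA)
    d2 = np.trace(A) + np.trace(B) - 2.0 * np.trace(cross)
    # Clip tiny negative values caused by floating-point error
    return float(np.sqrt(max(np.real(d2), 0.0)))

# For commuting SPD matrices the distance reduces to ||A^{1/2} - B^{1/2}||_F
A = np.diag([2.0, 1.0])
B = np.diag([1.0, 1.0])
d = bw_distance(A, B)
```

Here d equals sqrt(2) - 1, matching the Frobenius norm of the difference of matrix square roots, since diagonal matrices commute.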

no code implementations • 18 Mar 2021 • Karthik S. Gurumoorthy, Pratik Jawanpuria, Bamdev Mishra

In this work, we develop an optimal transport (OT) based framework to select informative prototypical examples that best represent a given target dataset.

1 code implementation • 1 Mar 2021 • Bamdev Mishra, N T V Satyadev, Hiroyuki Kasai, Pratik Jawanpuria

In this work, we discuss how to computationally approach general non-linear OT problems within the framework of Riemannian manifold optimization.

2 code implementations • 22 Oct 2020 • Pratik Jawanpuria, N T V Satyadev, Bamdev Mishra

Optimal transport (OT) is a powerful geometric tool for comparing two distributions and has been employed in various machine learning applications.
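For small discrete distributions, the OT problem described here is a linear program over transport plans. A self-contained sketch using scipy's generic LP solver (this is the textbook formulation, not the method proposed in the paper):

```python
import numpy as np
from scipy.optimize import linprog

def ot_plan(a, b, C):
    """Exact discrete OT: minimize <C, P> s.t. P 1 = a, P^T 1 = b, P >= 0."""
    m, n = C.shape
    A_rows = np.kron(np.eye(m), np.ones((1, n)))  # row-sum constraints
    A_cols = np.kron(np.ones((1, m)), np.eye(n))  # column-sum constraints
    res = linprog(C.ravel(),
                  A_eq=np.vstack([A_rows, A_cols]),
                  b_eq=np.concatenate([a, b]),
                  bounds=(0, None))
    return res.x.reshape(m, n), res.fun

# Two 2-point distributions; the cheapest plan keeps mass in place, cost 0
a = np.array([0.5, 0.5])
b = np.array([0.5, 0.5])
C = np.array([[0.0, 1.0], [1.0, 0.0]])
P, cost = ot_plan(a, b, C)
```

The LP scales poorly with the support size, which is one motivation for the specialized solvers studied in this line of work.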

no code implementations • WS 2020 • Pratik Jawanpuria, N T V Satya Dev, Anoop Kunchukuttan, Bamdev Mishra

We propose a geometric framework for learning meta-embeddings of words from different embedding sources.

no code implementations • EMNLP 2020 • Pratik Jawanpuria, Mayank Meghwanshi, Bamdev Mishra

Recent progress on unsupervised learning of cross-lingual embeddings in bilingual setting has given impetus to learning a shared embedding space for several languages without any supervision.

no code implementations • ACL 2020 • Pratik Jawanpuria, Mayank Meghwanshi, Bamdev Mishra

We propose a novel manifold based geometric approach for learning unsupervised alignment of word embeddings between the source and the target languages.

no code implementations • 25 Jun 2019 • Bamdev Mishra, Hiroyuki Kasai, Pratik Jawanpuria

In this work, we generalize the probability simplex constraint to matrices, i.e., $\mathbf{X}_1 + \mathbf{X}_2 + \ldots + \mathbf{X}_K = \mathbf{I}$, where $\mathbf{X}_i \succeq 0$ is a symmetric positive semidefinite matrix of size $n\times n$ for each $i \in \{1,\ldots, K\}$.
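A feasible point on this matrix generalization of the simplex is easy to construct by congruence: normalize arbitrary PSD matrices by the inverse square root of their sum. A minimal numpy sketch (names hypothetical, not from the paper's code):

```python
import numpy as np

def random_matrix_simplex_point(n, K, seed=0):
    """Sample X_1, ..., X_K, each symmetric PSD, with X_1 + ... + X_K = I."""
    rng = np.random.default_rng(seed)
    A = []
    for _ in range(K):
        G = rng.standard_normal((n, n))
        A.append(G @ G.T)  # G G^T is symmetric PSD
    S = sum(A)
    w, V = np.linalg.eigh(S)
    S_inv_half = (V * w ** -0.5) @ V.T  # S^{-1/2}
    # Congruence by S^{-1/2} preserves PSD-ness and normalizes the sum to I
    return [S_inv_half @ Ai @ S_inv_half for Ai in A]

Xs = random_matrix_simplex_point(n=4, K=3)
```

Each output matrix stays PSD and the K matrices sum to the identity, which is exactly the constraint set described above.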

no code implementations • 15 May 2019 • Anil R. Yelundur, Vineet Chaoji, Bamdev Mishra

In this paper, our focus is on detecting such abusive entities (both sellers and reviewers) by applying tensor decomposition on the product reviews data.

no code implementations • 18 Mar 2019 • Pratik Jawanpuria, Mayank Meghwanshi, Bamdev Mishra

While the hyperbolic manifold is well-studied in the literature, it has gained interest in the machine learning and natural language processing communities lately due to its usefulness in modeling continuous hierarchies.

no code implementations • 11 Feb 2019 • Hiroyuki Kasai, Bamdev Mishra

Dictionary learning (DL) and dimensionality reduction (DR) are powerful tools to analyze high-dimensional noisy signals.

1 code implementation • 4 Feb 2019 • Hiroyuki Kasai, Pratik Jawanpuria, Bamdev Mishra

We propose novel stochastic gradient algorithms for problems on Riemannian matrix manifolds by adapting the row and column subspaces of gradients.

1 code implementation • NeurIPS 2018 • Hiroyuki Kasai, Bamdev Mishra

We consider an inexact variant of the popular Riemannian trust-region algorithm for structured big-data minimization problems.

1 code implementation • 3 Oct 2018 • Mayank Meghwanshi, Pratik Jawanpuria, Anoop Kunchukuttan, Hiroyuki Kasai, Bamdev Mishra

In this paper, we introduce McTorch, a manifold optimization library for deep learning that extends PyTorch.

2 code implementations • TACL 2019 • Pratik Jawanpuria, Arjun Balgovind, Anoop Kunchukuttan, Bamdev Mishra

Our approach decouples learning the transformation from the source language to the target language into (a) learning rotations for language-specific embeddings to align them to a common space, and (b) learning a similarity metric in the common space to model similarities between the embeddings.
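Step (a) above is, in its simplest noiseless form, an orthogonal Procrustes problem solvable in closed form via an SVD. A toy numpy sketch of that rotation step only (the paper's full method also learns the similarity metric of step (b)):

```python
import numpy as np

def procrustes_rotation(X, Y):
    """Orthogonal W minimizing ||X W - Y||_F: the classic rotation-alignment step."""
    U, _, Vt = np.linalg.svd(X.T @ Y)
    return U @ Vt

rng = np.random.default_rng(0)
X = rng.standard_normal((20, 4))                  # toy "source" embeddings
Q, _ = np.linalg.qr(rng.standard_normal((4, 4)))  # ground-truth rotation
Y = X @ Q                                         # perfectly rotated "target"
W = procrustes_rotation(X, Y)                     # recovers Q in the noiseless case
```

With real embeddings the fit is never exact, but the same SVD step gives the best orthogonal alignment in the least-squares sense.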

1 code implementation • ICML 2018 • Hiroyuki Kasai, Hiroyuki Sato, Bamdev Mishra

Stochastic variance reduction algorithms have recently become popular for minimizing the average of a large, but finite number of loss functions on a Riemannian manifold.

1 code implementation • ICML 2018 • Pratik Jawanpuria, Bamdev Mishra

We consider the problem of learning a low-rank matrix, constrained to lie in a linear subspace, and introduce a novel factorization for modeling such matrices.

1 code implementation • 14 Jun 2018 • Mukul Bhutani, Pratik Jawanpuria, Hiroyuki Kasai, Bamdev Mishra

We propose a low-rank approach to learning a Mahalanobis metric from data.
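A low-rank Mahalanobis metric parameterizes M = L^T L with L of shape (r, d), r << d, so distances can be evaluated without ever forming the dense d x d matrix. A minimal illustration of that identity (function name is ours, not the paper's API):

```python
import numpy as np

def mahalanobis_sq(x, y, L):
    """Squared Mahalanobis distance under M = L^T L with L of shape (r, d).
    Evaluating ||L (x - y)||^2 costs O(r d) instead of O(d^2) for a dense M."""
    z = L @ (x - y)
    return float(z @ z)

rng = np.random.default_rng(0)
d, r = 100, 5
L = rng.standard_normal((r, d))
x, y = rng.standard_normal(d), rng.standard_normal(d)

# Agrees with the dense-metric form (x - y)^T (L^T L) (x - y)
diff = x - y
dense = float(diff @ (L.T @ L) @ diff)
```

The low-rank factorization is also what makes the metric amenable to Riemannian optimization over the factor L.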

1 code implementation • 28 Apr 2018 • Sridhar Mahadevan, Bamdev Mishra, Shalini Ghosh

We present a novel framework for domain adaptation, whereby both geometric and statistical differences between a labeled source domain and unlabeled target domain can be integrated by exploiting the curved Riemannian geometry of statistical manifolds.

no code implementations • 11 Apr 2018 • Anil R. Yelundur, Srinivasan H. Sengamedu, Bamdev Mishra

In addition, we use Polya-Gamma data augmentation for the semi-supervised Bayesian tensor decomposition.

no code implementations • 18 Feb 2018 • Madhav Nimishakavi, Bamdev Mishra, Manish Gupta, Partha Talukdar

Besides the tensors, in many real-world scenarios, side information is also available in the form of matrices that likewise grow in size over time.

no code implementations • NeurIPS 2018 • Madhav Nimishakavi, Pratik Jawanpuria, Bamdev Mishra

One of the popular approaches for low-rank tensor completion is to use the latent trace norm regularization.

no code implementations • 21 Nov 2017 • Mukul Bhutani, Bamdev Mishra

The matrix completion problem, in particular, uses it to decompose a sparsely observed matrix into two dense, low-rank matrices, which can then be used to predict the unknown entries of the original matrix.

no code implementations • 1 May 2017 • Bamdev Mishra, Hiroyuki Kasai, Pratik Jawanpuria, Atul Saroop

Interesting applications in this setting include low-rank matrix completion and low-dimensional multivariate regression, among others.

no code implementations • 24 Apr 2017 • Pratik Jawanpuria, Bamdev Mishra

We consider the problem of learning a low-rank matrix, constrained to lie in a linear subspace, and introduce a novel factorization for modeling such matrices.

no code implementations • 15 Mar 2017 • Hiroyuki Kasai, Hiroyuki Sato, Bamdev Mishra

The present paper proposes a Riemannian stochastic quasi-Newton algorithm with variance reduction (R-SQN-VR).

1 code implementation • 18 Feb 2017 • Hiroyuki Sato, Hiroyuki Kasai, Bamdev Mishra

In recent years, stochastic variance reduction algorithms have attracted considerable attention for minimizing the average of a large but finite number of loss functions.

no code implementations • 26 May 2016 • Hiroyuki Kasai, Bamdev Mishra

We propose a novel Riemannian manifold preconditioning approach for the tensor completion problem with rank constraint.

1 code implementation • 24 May 2016 • Hiroyuki Kasai, Hiroyuki Sato, Bamdev Mishra

In this paper, we propose a novel Riemannian extension of the Euclidean stochastic variance reduced gradient algorithm (R-SVRG) to a compact manifold search space.
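The Euclidean SVRG algorithm that R-SVRG extends is short enough to sketch. A toy implementation on a least-squares problem (this is the Euclidean baseline only, not the paper's Riemannian version; names are ours):

```python
import numpy as np

def svrg(grad_i, x0, n, step, epochs, inner_iters, seed=0):
    """Euclidean SVRG: each epoch computes a full gradient at a snapshot,
    then takes variance-reduced stochastic steps. grad_i(x, i) is the
    gradient of the i-th loss at x."""
    rng = np.random.default_rng(seed)
    x = x0.copy()
    for _ in range(epochs):
        x_snap = x.copy()
        full_grad = np.mean([grad_i(x_snap, i) for i in range(n)], axis=0)
        for _ in range(inner_iters):
            i = rng.integers(n)
            v = grad_i(x, i) - grad_i(x_snap, i) + full_grad  # variance-reduced
            x = x - step * v
    return x

# Toy least squares: f_i(x) = 0.5 * (a_i^T x - b_i)^2
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 5))
x_true = rng.standard_normal(5)
b = A @ x_true
grad = lambda x, i: (A[i] @ x - b[i]) * A[i]
x_hat = svrg(grad, np.zeros(5), n=50, step=0.02, epochs=50, inner_iters=100)
```

The Riemannian extension replaces the subtraction-based update with retractions and vector transport so the correction term lives in the right tangent space.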

no code implementations • 23 May 2016 • Bamdev Mishra, Hiroyuki Kasai, Atul Saroop

In this paper, we propose novel gossip algorithms for the low-rank decentralized matrix completion problem.

no code implementations • 16 Mar 2016 • Bamdev Mishra, Rodolphe Sepulchre

The paper looks at a scaled variant of the stochastic gradient descent algorithm for the matrix completion problem.

no code implementations • 5 Nov 2015 • Vijay Badrinarayanan, Bamdev Mishra, Roberto Cipolla

Recent works have highlighted the scale invariance, or symmetry, present in the weight space of a typical deep network and its adverse effect on Euclidean-gradient-based stochastic gradient descent optimization.

no code implementations • 3 Nov 2015 • Vijay Badrinarayanan, Bamdev Mishra, Roberto Cipolla

Consequently, training the network boils down to using stochastic gradient descent updates on the unit-norm manifold.

no code implementations • 6 Jun 2015 • Hiroyuki Kasai, Bamdev Mishra

We propose a novel Riemannian preconditioning approach for the tensor completion problem with rank constraint.

no code implementations • 7 Apr 2015 • Yanfeng Sun, Junbin Gao, Xia Hong, Bamdev Mishra, Bao-Cai Yin

In contrast to existing techniques, we propose a new clustering algorithm that alternates between different modes of the proposed heterogeneous tensor model.

no code implementations • 30 Mar 2015 • Raphaël Liégeois, Bamdev Mishra, Mattia Zorzi, Rodolphe Sepulchre

This paper considers the problem of identifying multivariate autoregressive (AR) sparse plus low-rank graphical models.

no code implementations • 23 Aug 2013 • Nicolas Boumal, Bamdev Mishra, P. -A. Absil, Rodolphe Sepulchre

Optimization on manifolds is a rapidly developing branch of nonlinear optimization.
