no code implementations • 23 Aug 2013 • Nicolas Boumal, Bamdev Mishra, P. -A. Absil, Rodolphe Sepulchre
Optimization on manifolds is a rapidly developing branch of nonlinear optimization.
no code implementations • 30 Mar 2015 • Raphaël Liégeois, Bamdev Mishra, Mattia Zorzi, Rodolphe Sepulchre
This paper considers the problem of identifying multivariate autoregressive (AR) sparse plus low-rank graphical models.
no code implementations • 7 Apr 2015 • Yanfeng Sun, Junbin Gao, Xia Hong, Bamdev Mishra, Bao-Cai Yin
In contrast to existing techniques, we propose a new clustering algorithm that alternates between different modes of the proposed heterogeneous tensor model.
no code implementations • 6 Jun 2015 • Hiroyuki Kasai, Bamdev Mishra
We propose a novel Riemannian preconditioning approach for the tensor completion problem with rank constraint.
no code implementations • 3 Nov 2015 • Vijay Badrinarayanan, Bamdev Mishra, Roberto Cipolla
Consequently, training the network boils down to using stochastic gradient descent updates on the unit-norm manifold.
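A minimal sketch of an SGD update constrained to the unit-norm manifold, assuming the standard sphere retraction (renormalization); the toy objective and step size here are illustrative, not the paper's:

```python
import numpy as np

def sphere_sgd_step(w, egrad, lr=0.1):
    """One SGD step on the unit-norm manifold (the sphere).

    Project the Euclidean gradient onto the tangent space at w,
    take a gradient step, then retract by renormalizing.
    """
    rgrad = egrad - np.dot(w, egrad) * w      # tangent-space projection
    w_new = w - lr * rgrad
    return w_new / np.linalg.norm(w_new)      # retraction back to the sphere

rng = np.random.default_rng(0)
w = rng.normal(size=5)
w /= np.linalg.norm(w)
target = np.zeros(5)
target[0] = 1.0
for _ in range(200):
    egrad = w - target                        # gradient of 0.5 * ||w - target||^2
    w = sphere_sgd_step(w, egrad)
```

Every iterate stays exactly unit-norm, which is the point of working on the manifold rather than re-projecting occasionally.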
no code implementations • 5 Nov 2015 • Vijay Badrinarayanan, Bamdev Mishra, Roberto Cipolla
Recent works have highlighted the scale invariance or symmetry present in the weight space of a typical deep network and its adverse effect on Euclidean-gradient-based stochastic gradient descent optimization.
no code implementations • 16 Mar 2016 • Bamdev Mishra, Rodolphe Sepulchre
The paper looks at a scaled variant of the stochastic gradient descent algorithm for the matrix completion problem.
no code implementations • 23 May 2016 • Bamdev Mishra, Hiroyuki Kasai, Atul Saroop
In this paper, we propose novel gossip algorithms for the low-rank decentralized matrix completion problem.
1 code implementation • 24 May 2016 • Hiroyuki Kasai, Hiroyuki Sato, Bamdev Mishra
In this paper, we propose a novel Riemannian extension of the Euclidean stochastic variance reduced gradient algorithm (R-SVRG) to a compact manifold search space.
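As background, the Euclidean SVRG update that R-SVRG generalizes can be sketched on a toy least-squares problem; the step size, epoch count, and data are illustrative choices, not the paper's settings:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 50, 3
X = rng.normal(size=(n, d))
w_true = np.array([1.0, -2.0, 0.5])
y = X @ w_true

# SVRG on the least-squares loss (1/n) * sum_i 0.5 * (x_i . w - y_i)^2.
grad_i = lambda w, i: (X[i] @ w - y[i]) * X[i]     # per-sample gradient
w = np.zeros(d)
lr = 0.05
for epoch in range(30):
    w_snap = w.copy()                              # snapshot iterate
    mu = X.T @ (X @ w_snap - y) / n                # full gradient at the snapshot
    for _ in range(n):
        i = rng.integers(n)
        v = grad_i(w, i) - grad_i(w_snap, i) + mu  # variance-reduced direction
        w = w - lr * v
```

The Riemannian extension additionally needs a retraction and a way to move (transport) the snapshot gradient between tangent spaces, which is where the manifold structure enters.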
no code implementations • 26 May 2016 • Hiroyuki Kasai, Bamdev Mishra
We propose a novel Riemannian manifold preconditioning approach for the tensor completion problem with rank constraint.
1 code implementation • 18 Feb 2017 • Hiroyuki Sato, Hiroyuki Kasai, Bamdev Mishra
In recent years, stochastic variance reduction algorithms have attracted considerable attention for minimizing the average of a large but finite number of loss functions.
no code implementations • 15 Mar 2017 • Hiroyuki Kasai, Hiroyuki Sato, Bamdev Mishra
The present paper proposes a Riemannian stochastic quasi-Newton algorithm with variance reduction (R-SQN-VR).
no code implementations • 24 Apr 2017 • Pratik Jawanpuria, Bamdev Mishra
We consider the problem of learning a low-rank matrix, constrained to lie in a linear subspace, and introduce a novel factorization for modeling such matrices.
no code implementations • 1 May 2017 • Bamdev Mishra, Hiroyuki Kasai, Pratik Jawanpuria, Atul Saroop
Interesting applications in this setting include low-rank matrix completion and low-dimensional multivariate regression, among others.
no code implementations • 21 Nov 2017 • Mukul Bhutani, Bamdev Mishra
The problem of matrix completion, in particular, uses it to decompose a sparsely observed matrix into two non-sparse, low-rank matrices, which can then be used to predict the unknown entries of the original matrix.
no code implementations • NeurIPS 2018 • Madhav Nimishakavi, Pratik Jawanpuria, Bamdev Mishra
One of the popular approaches for low-rank tensor completion is to use the latent trace norm regularization.
no code implementations • 18 Feb 2018 • Madhav Nimishakavi, Bamdev Mishra, Manish Gupta, Partha Talukdar
Besides the tensors, in many real world scenarios, side information is also available in the form of matrices which also grow in size with time.
no code implementations • 11 Apr 2018 • Anil R. Yelundur, Srinivasan H. Sengamedu, Bamdev Mishra
In addition, we use Pólya-Gamma data augmentation for the semi-supervised Bayesian tensor decomposition.
1 code implementation • 28 Apr 2018 • Sridhar Mahadevan, Bamdev Mishra, Shalini Ghosh
We present a novel framework for domain adaptation, whereby both geometric and statistical differences between a labeled source domain and unlabeled target domain can be integrated by exploiting the curved Riemannian geometry of statistical manifolds.
1 code implementation • 14 Jun 2018 • Mukul Bhutani, Pratik Jawanpuria, Hiroyuki Kasai, Bamdev Mishra
We propose a low-rank approach to learning a Mahalanobis metric from data.
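For reference, a low-rank Mahalanobis metric can be parameterized as $M = L^\top L$ with a rectangular $L$; a minimal sketch (the helper name and dimensions here are illustrative, not the paper's model):

```python
import numpy as np

def lowrank_mahalanobis(x, y, L):
    """Distance under the low-rank Mahalanobis metric M = L^T L (L is r x d)."""
    return np.linalg.norm(L @ (x - y))

rng = np.random.default_rng(0)
L = rng.normal(size=(2, 5))                   # rank-2 metric in 5 dimensions
x, y = rng.normal(size=5), rng.normal(size=5)
d1 = lowrank_mahalanobis(x, y, L)
d2 = np.sqrt((x - y) @ (L.T @ L) @ (x - y))   # equivalent quadratic form
```

Working with `L` directly keeps only `r * d` parameters and guarantees the metric is positive semidefinite by construction.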
1 code implementation • ICML 2018 • Pratik Jawanpuria, Bamdev Mishra
We consider the problem of learning a low-rank matrix, constrained to lie in a linear subspace, and introduce a novel factorization for modeling such matrices.
1 code implementation • ICML 2018 • Hiroyuki Kasai, Hiroyuki Sato, Bamdev Mishra
Stochastic variance reduction algorithms have recently become popular for minimizing the average of a large but finite number of loss functions on a Riemannian manifold.
2 code implementations • TACL 2019 • Pratik Jawanpuria, Arjun Balgovind, Anoop Kunchukuttan, Bamdev Mishra
Our approach decouples learning the transformation from the source language to the target language into (a) learning rotations for language-specific embeddings to align them to a common space, and (b) learning a similarity metric in the common space to model similarities between the embeddings.
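Step (a) is closely related to the classical orthogonal Procrustes problem; a minimal sketch of recovering an alignment rotation with an SVD (the synthetic setup is illustrative, not the paper's pipeline):

```python
import numpy as np

def procrustes_rotation(X, Y):
    """Orthogonal R minimizing ||X R - Y||_F, via the SVD of X^T Y."""
    U, _, Vt = np.linalg.svd(X.T @ Y)
    return U @ Vt

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 4))                  # "source-language" embeddings
R_true, _ = np.linalg.qr(rng.normal(size=(4, 4)))
Y = X @ R_true                                # "target" = rotated source
R = procrustes_rotation(X, Y)
align_err = np.abs(X @ R - Y).max()           # the rotation is recovered
```

In the paper's setting, each language gets its own rotation into a common space, and similarity in that space is additionally modeled with a learned metric.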
1 code implementation • 3 Oct 2018 • Mayank Meghwanshi, Pratik Jawanpuria, Anoop Kunchukuttan, Hiroyuki Kasai, Bamdev Mishra
In this paper, we introduce McTorch, a manifold optimization library for deep learning that extends PyTorch.
1 code implementation • NeurIPS 2018 • Hiroyuki Kasai, Bamdev Mishra
We consider an inexact variant of the popular Riemannian trust-region algorithm for structured big-data minimization problems.
1 code implementation • 4 Feb 2019 • Hiroyuki Kasai, Pratik Jawanpuria, Bamdev Mishra
We propose novel stochastic gradient algorithms for problems on Riemannian matrix manifolds by adapting the row and column subspaces of gradients.
no code implementations • 11 Feb 2019 • Hiroyuki Kasai, Bamdev Mishra
Dictionary learning (DL) and dimensionality reduction (DR) are powerful tools to analyze high-dimensional noisy signals.
no code implementations • 18 Mar 2019 • Pratik Jawanpuria, Mayank Meghwanshi, Bamdev Mishra
While the hyperbolic manifold is well studied in the literature, it has lately gained interest in the machine learning and natural language processing communities due to its usefulness in modeling continuous hierarchies.
no code implementations • 15 May 2019 • Anil R. Yelundur, Vineet Chaoji, Bamdev Mishra
In this paper, our focus is on detecting such abusive entities (both sellers and reviewers) by applying tensor decomposition on the product reviews data.
no code implementations • 25 Jun 2019 • Bamdev Mishra, Hiroyuki Kasai, Pratik Jawanpuria
In this work, we generalize the probability simplex constraint to matrices, i.e., $\mathbf{X}_1 + \mathbf{X}_2 + \ldots + \mathbf{X}_K = \mathbf{I}$, where $\mathbf{X}_i \succeq 0$ is a symmetric positive semidefinite matrix of size $n\times n$ for all $i \in \{1, \ldots, K\}$.
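A quick sanity check of this constraint set: partitioning the columns of an orthonormal matrix yields K PSD matrices that sum to the identity (a toy construction of a feasible point, not the paper's optimization method):

```python
import numpy as np

rng = np.random.default_rng(0)
n, K = 6, 3
Q, _ = np.linalg.qr(rng.normal(size=(n, n)))  # orthonormal basis of R^n
blocks = np.split(np.arange(n), K)
Xs = [Q[:, b] @ Q[:, b].T for b in blocks]    # K PSD projector matrices

S = sum(Xs)                                   # should equal the identity
min_eig = min(np.linalg.eigvalsh(Xi).min() for Xi in Xs)
```

With K = n and rank-one blocks of the standard basis, this recovers the usual probability simplex on the diagonal, which is the sense in which the constraint is a generalization.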
no code implementations • ACL 2020 • Pratik Jawanpuria, Mayank Meghwanshi, Bamdev Mishra
We propose a novel manifold based geometric approach for learning unsupervised alignment of word embeddings between the source and the target languages.
no code implementations • EMNLP 2020 • Pratik Jawanpuria, Mayank Meghwanshi, Bamdev Mishra
Recent progress on unsupervised learning of cross-lingual embeddings in bilingual setting has given impetus to learning a shared embedding space for several languages without any supervision.
no code implementations • WS 2020 • Pratik Jawanpuria, N T V Satya Dev, Anoop Kunchukuttan, Bamdev Mishra
We propose a geometric framework for learning meta-embeddings of words from different embedding sources.
2 code implementations • 22 Oct 2020 • Pratik Jawanpuria, N T V Satyadev, Bamdev Mishra
Optimal transport (OT) is a powerful geometric tool for comparing two distributions and has been employed in various machine learning applications.
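As background, a standard way to compute an entropic-regularized OT plan is Sinkhorn iteration; a minimal sketch (regularization strength and iteration count are illustrative, and this is not necessarily the paper's algorithm):

```python
import numpy as np

def sinkhorn(a, b, C, eps=0.1, iters=500):
    """Entropic-regularized OT plan between histograms a and b with cost C."""
    K = np.exp(-C / eps)
    u = np.ones_like(a)
    for _ in range(iters):
        v = b / (K.T @ u)                     # alternately match column...
        u = a / (K @ v)                       # ...and row marginals
    return u[:, None] * K * v[None, :]

a = np.array([0.5, 0.5])
b = np.array([0.25, 0.75])
C = np.array([[0.0, 1.0], [1.0, 0.0]])        # cost of moving mass
P = sinkhorn(a, b, C)
```

The returned plan `P` has row sums `a` and column sums `b`, and its entries say how much mass moves between each pair of support points.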
1 code implementation • 1 Mar 2021 • Bamdev Mishra, N T V Satyadev, Hiroyuki Kasai, Pratik Jawanpuria
In this work, we discuss how to computationally approach general non-linear OT problems within the framework of Riemannian manifold optimization.
no code implementations • 18 Mar 2021 • Karthik S. Gurumoorthy, Pratik Jawanpuria, Bamdev Mishra
In this work, we develop an optimal transport (OT) based framework to select informative prototypical examples that best represent a given target dataset.
1 code implementation • NeurIPS 2021 • Andi Han, Bamdev Mishra, Pratik Jawanpuria, Junbin Gao
We build on this to show that the BW metric is a more suitable and robust choice for several Riemannian optimization problems over ill-conditioned SPD matrices.
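For reference, the Bures-Wasserstein (BW) distance between SPD matrices $A$ and $B$ is $\sqrt{\operatorname{tr} A + \operatorname{tr} B - 2 \operatorname{tr}(A^{1/2} B A^{1/2})^{1/2}}$; a minimal sketch with a closed-form check in the commuting (diagonal) case (`psd_sqrt` is a helper introduced here, not the paper's code):

```python
import numpy as np

def psd_sqrt(A):
    """Symmetric PSD square root via eigendecomposition."""
    w, V = np.linalg.eigh(A)
    return V @ np.diag(np.sqrt(np.clip(w, 0.0, None))) @ V.T

def bures_wasserstein(A, B):
    """BW distance: sqrt(tr A + tr B - 2 tr (A^{1/2} B A^{1/2})^{1/2})."""
    s = psd_sqrt(A)
    cross = psd_sqrt(s @ B @ s)
    return np.sqrt(max(np.trace(A) + np.trace(B) - 2.0 * np.trace(cross), 0.0))

# Commuting (diagonal) case: the distance reduces to ||A^{1/2} - B^{1/2}||_F.
A = np.diag([1.0, 4.0])
B = np.diag([9.0, 16.0])
d = bures_wasserstein(A, B)
```

Unlike the affine-invariant metric, this distance involves no matrix logarithms or inverses, which is one reason it behaves better on ill-conditioned SPD matrices.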
1 code implementation • 20 Oct 2021 • Andi Han, Bamdev Mishra, Pratik Jawanpuria, Junbin Gao
Learning with symmetric positive definite (SPD) matrices has many applications in machine learning.
1 code implementation • 30 Jan 2022 • Andi Han, Bamdev Mishra, Pratik Jawanpuria, Junbin Gao
In this work, we study the optimal transport (OT) problem between symmetric positive definite (SPD) matrix-valued measures.
no code implementations • 25 Apr 2022 • Andi Han, Bamdev Mishra, Pratik Jawanpuria, Pawan Kumar, Junbin Gao
In this paper, we study min-max optimization problems on Riemannian manifolds.
no code implementations • 19 May 2022 • Andi Han, Bamdev Mishra, Pratik Jawanpuria, Junbin Gao
We introduce a framework of differentially private Riemannian optimization by adding noise to the Riemannian gradient on the tangent space.
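A minimal sketch of the gradient-perturbation idea on the unit sphere: the noise is added in the tangent space, so the perturbed direction is still a valid update direction (calibrating the noise scale to a privacy budget is the paper's contribution and is not modeled here):

```python
import numpy as np

def noisy_riemannian_grad(w, egrad, sigma, rng):
    """Riemannian gradient on the unit sphere plus tangent-space Gaussian noise.

    Both the gradient and the noise are projected onto the tangent space
    at w, so the perturbed direction stays tangent to the manifold.
    """
    proj = lambda v: v - np.dot(w, v) * w
    return proj(egrad) + proj(sigma * rng.normal(size=w.shape))

rng = np.random.default_rng(0)
w = np.ones(4) / 2.0                          # a unit-norm point
g = noisy_riemannian_grad(w, rng.normal(size=4), sigma=0.1, rng=rng)
tangency = np.dot(w, g)                       # ~0: g lies in the tangent space
```

A retraction applied to `w - lr * g` then keeps the private iterate on the manifold, mirroring non-private Riemannian SGD.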
no code implementations • 13 Aug 2022 • Andi Han, Bamdev Mishra, Pratik Jawanpuria, Junbin Gao
In this paper, we propose a simple acceleration scheme for Riemannian gradient methods by extrapolating iterates on manifolds.
no code implementations • 4 Oct 2022 • Arghya Roy Chaudhuri, Pratik Jawanpuria, Bamdev Mishra
In this work, we propose a multi-armed bandit-based framework for identifying a compact set of informative data instances (i.e., the prototypes) from a source dataset $S$ that best represents a given target set $T$.
1 code implementation • 10 Oct 2022 • Saiteja Utpala, Andi Han, Pratik Jawanpuria, Bamdev Mishra
We present Rieoptax, an open source Python library for Riemannian optimization in JAX.
no code implementations • 30 Nov 2022 • Souvik Banerjee, Bamdev Mishra, Pratik Jawanpuria, Manish Shrivastava
The proposed modelling and the novel similarity metric exploit the matrix structure of embeddings.
1 code implementation • 20 Apr 2023 • Istasis Mishra, Arpan Dasgupta, Pratik Jawanpuria, Bamdev Mishra, Pawan Kumar
Extreme multi-label (XML) classification refers to the task of supervised multi-label learning that involves a large number of labels.
no code implementations • 6 Feb 2024 • Andi Han, Bamdev Mishra, Pratik Jawanpuria, Akiko Takeda
We provide convergence and complexity analysis for the proposed hypergradient descent algorithm on manifolds.
2 code implementations • 10 Apr 2024 • Neel Mishra, Bamdev Mishra, Pratik Jawanpuria, Pawan Kumar
It modifies the Gauss-Newton method to approximate the min-max Hessian and uses the Sherman-Morrison inversion formula to calculate the inverse.
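The Sherman-Morrison formula gives the inverse of a rank-one update, $(A + uv^\top)^{-1} = A^{-1} - \frac{A^{-1} u v^\top A^{-1}}{1 + v^\top A^{-1} u}$; a minimal numeric check (the matrices here are illustrative, not the paper's min-max Hessian):

```python
import numpy as np

def sherman_morrison(A_inv, u, v):
    """Inverse of (A + u v^T) given A^{-1}, via the Sherman-Morrison formula."""
    Au = A_inv @ u
    vA = v @ A_inv
    return A_inv - np.outer(Au, vA) / (1.0 + v @ Au)

A = np.diag([2.0, 3.0, 5.0])
u = np.array([1.0, 0.0, 1.0])
v = np.array([0.0, 1.0, 1.0])
B_inv = sherman_morrison(np.linalg.inv(A), u, v)
err = np.abs(B_inv @ (A + np.outer(u, v)) - np.eye(3)).max()
```

The appeal is that the update costs only matrix-vector products given `A_inv`, instead of a fresh $O(n^3)$ inversion.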
no code implementations • 15 Apr 2024 • Zhenwei Huang, Wen Huang, Pratik Jawanpuria, Bamdev Mishra
To the best of our knowledge, this is the first federated learning framework on Riemannian manifolds with a privacy guarantee and convergence results.