Search Results for author: Shashanka Ubaru

Found 18 papers, 2 papers with code

Low rank approximation and decomposition of large matrices using error correcting codes

no code implementations • 30 Dec 2015 • Shashanka Ubaru, Arya Mazumdar, Yousef Saad

In this paper, we show how matrices from error correcting codes can be used to find such low rank approximations and matrix decompositions, and extend the framework to linear least squares regression problems.

regression
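The framework above replaces an expensive exact factorization with a sketch of the matrix's range. A minimal illustration of that sketching framework, using a generic random sign matrix where the paper substitutes structured matrices built from error correcting codes (all sizes and names here are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

# Test matrix with exact rank 5.
n, d, k = 200, 100, 5
A = rng.standard_normal((n, k)) @ rng.standard_normal((k, d))

# Sketch with a random sign (Rademacher) matrix; the paper's contribution is
# to use structured matrices derived from error correcting codes instead.
ell = 10                                   # sketch size, larger than target rank
S = rng.choice([-1.0, 1.0], size=(d, ell))
Y = A @ S                                  # n x ell sketch capturing range(A)
Q, _ = np.linalg.qr(Y)                     # orthonormal basis for the sketch
A_approx = Q @ (Q.T @ A)                   # low rank approximation of A

err = np.linalg.norm(A - A_approx) / np.linalg.norm(A)
```

Because the sketch size exceeds the true rank, the projection recovers the matrix essentially exactly here; for noisy or higher-rank inputs the error degrades gracefully with the sketch size.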

Fast estimation of approximate matrix ranks using spectral densities

no code implementations • 19 Aug 2016 • Shashanka Ubaru, Yousef Saad, Abd-Krim Seghouane

In this paper, we present two computationally inexpensive techniques to estimate the approximate ranks of such large matrices.
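The "approximate rank" in question is the count of eigenvalues above a threshold placed in a spectral gap. A small sketch of the quantity being estimated; the paper's point is to estimate this count cheaply from the spectral density, without the full eigendecomposition used in this toy example:

```python
import numpy as np

rng = np.random.default_rng(1)

# Symmetric matrix with 4 dominant eigenvalues and a cluster of tiny ones,
# mimicking a noisy low-rank covariance matrix.
n = 300
d = np.concatenate([[10.0, 9.0, 8.0, 7.0], 1e-3 * rng.random(n - 4)])
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
A = (Q * d) @ Q.T                          # eigenvalues of A are the entries of d

# Approximate rank: eigenvalue count above a threshold in the spectral gap.
eps = 0.5
approx_rank = int(np.sum(np.linalg.eigvalsh(A) > eps))   # -> 4
```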

Spectrum Approximation Beyond Fast Matrix Multiplication: Algorithms and Hardness

no code implementations • 13 Apr 2017 • Cameron Musco, Praneeth Netrapalli, Aaron Sidford, Shashanka Ubaru, David P. Woodruff

We thus effectively compute a histogram of the spectrum, which can stand in for the true singular values in many applications.
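A toy version of the idea: a coarse histogram of the singular values can stand in for the exact spectrum when computing spectral sums such as the Frobenius norm. (This illustration computes the full SVD; the paper's algorithms obtain the histogram without it.)

```python
import numpy as np

rng = np.random.default_rng(2)

A = rng.standard_normal((400, 100))
svals = np.linalg.svd(A, compute_uv=False)

# Coarse histogram of the spectrum: bin counts and midpoints substitute for
# the exact singular values in downstream spectral sums.
counts, edges = np.histogram(svals, bins=10)
mids = 0.5 * (edges[:-1] + edges[1:])

fro_est = np.sqrt(np.sum(counts * mids ** 2))   # histogram-based estimate
fro_true = np.linalg.norm(A)                    # exact Frobenius norm
```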

Multilabel Classification with Group Testing and Codes

no code implementations • ICML 2017 • Shashanka Ubaru, Arya Mazumdar

In this work, we propose a novel approach based on group testing to solve such large multilabel classification problems with sparse label vectors.

Classification · General Classification
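The group-testing reduction works by pooling labels: an L-label problem with sparse label vectors becomes m ≪ L binary problems, and a sparse label set is decoded from the pooled outcomes. A minimal sketch with a random pooling matrix and the naive eliminate-on-negative decoder (the paper constructs the pooling matrix more carefully; all sizes here are illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)

L, m = 100, 40
pools = rng.random((m, L)) < 0.3           # pools[i, j]: label j is in pool i

y = np.zeros(L, dtype=bool)
y[[7, 42]] = True                          # a 2-sparse ground-truth label set

# Each pooled "test" is positive iff it contains at least one true label.
tests = (pools.astype(int) @ y.astype(int)) > 0

# Naive decoder: keep a label only if every pool containing it is positive.
eliminated = (pools & ~tests[:, None]).any(axis=0)
decoded = ~eliminated & pools.any(axis=0)
```

At prediction time only the m pooled binary classifiers need to be evaluated, which is the source of the computational savings.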

Sampling and multilevel coarsening algorithms for fast matrix approximations

no code implementations • 1 Nov 2017 • Shashanka Ubaru, Yousef Saad

To tackle these problems, we consider algorithms that are based primarily on coarsening techniques, possibly combined with random sampling.

Dimensionality Reduction
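The random-sampling ingredient alone is easy to picture: a rescaled subset of rows gives an unbiased estimate of the Gram matrix. A small sketch with plain uniform sampling (the paper combines sampling with multilevel coarsening, which this illustration omits):

```python
import numpy as np

rng = np.random.default_rng(4)

n, d = 5000, 20
A = rng.standard_normal((n, d))

# Uniform row sampling with rescaling: the sketch B satisfies
# E[B.T @ B] = A.T @ A, at a fraction of the cost.
s = 1000
idx = rng.choice(n, size=s, replace=False)
B = A[idx] * np.sqrt(n / s)

err = np.linalg.norm(B.T @ B - A.T @ A) / np.linalg.norm(A.T @ A)
```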

Provably convergent acceleration in factored gradient descent with applications in matrix sensing

no code implementations • 1 Jun 2018 • Tayo Ajayi, David Mildebrath, Anastasios Kyrillidis, Shashanka Ubaru, Georgios Kollias, Kristofer Bouchard

We present theoretical results on the convergence of *non-convex* accelerated gradient descent in matrix factorization models with $\ell_2$-norm loss.

Quantum State Tomography
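Factored gradient descent optimizes the factors U, V of a low-rank model directly, rather than the full matrix. A toy sketch with heavy-ball momentum standing in for the accelerated scheme the paper analyzes, on the fully observed problem (matrix sensing replaces M with linear measurements; all parameter choices here are illustrative):

```python
import numpy as np

rng = np.random.default_rng(5)

# Rank-3 target, normalized so step sizes are easy to reason about.
n, r = 40, 3
M = rng.standard_normal((n, r)) @ rng.standard_normal((r, n))
M /= np.linalg.norm(M, 2)

# Gradient descent on the factors of 0.5 * ||U V^T - M||_F^2, accelerated
# with a heavy-ball momentum term beta * (current - previous iterate).
U = 0.3 * rng.standard_normal((n, r))
V = 0.3 * rng.standard_normal((n, r))
U_prev, V_prev = U.copy(), V.copy()
step, beta = 0.05, 0.5

for _ in range(2000):
    R = U @ V.T - M                        # residual
    U_new = U - step * (R @ V) + beta * (U - U_prev)
    V_new = V - step * (R.T @ U) + beta * (V - V_prev)
    U_prev, V_prev, U, V = U, V, U_new, V_new

rel_err = np.linalg.norm(U @ V.T - M) / np.linalg.norm(M)
```

Despite the non-convexity of the factored objective, the iteration recovers the low-rank target to high accuracy on this benign instance.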

Find the dimension that counts: Fast dimension estimation and Krylov PCA

no code implementations • 8 Oct 2018 • Shashanka Ubaru, Abd-Krim Seghouane, Yousef Saad

In this paper, we consider the problem of simultaneously estimating the dimension of the principal (dominant) subspace of these covariance matrices and obtaining an approximation to the subspace.

Model Selection

Unsupervised Hierarchical Graph Representation Learning with Variational Bayes

no code implementations • 25 Sep 2019 • Shashanka Ubaru, Jie Chen

These approaches are supervised; a predictive task with ground-truth labels is used to drive the learning.

Graph Classification · Graph Representation Learning

Dynamic Graph Convolutional Networks Using the Tensor M-Product

1 code implementation • ICLR 2020 • Osman Asif Malik, Shashanka Ubaru, Lior Horesh, Misha E. Kilmer, Haim Avron

In recent years, a variety of graph neural networks (GNNs) have been successfully applied for representation learning and prediction on such graphs.

Edge Classification · Link Prediction · +1
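The tensor M-product underlying this model multiplies two third-order tensors by transforming along the third mode with an invertible matrix M, multiplying the frontal slices facewise, and inverting the transform. A compact numpy sketch of that product (tensor sizes here are illustrative):

```python
import numpy as np

rng = np.random.default_rng(6)

def mode3(T, M):
    # Apply M along the third mode: out[:, :, l] = sum_k M[l, k] * T[:, :, k]
    return np.einsum('ijk,lk->ijl', T, M)

def m_product(A, B, M):
    # Tensor-tensor M-product: transform the tubes by M, multiply the
    # frontal slices facewise, then undo the transform.
    A_hat, B_hat = mode3(A, M), mode3(B, M)
    C_hat = np.einsum('ijk,jlk->ilk', A_hat, B_hat)
    return mode3(C_hat, np.linalg.inv(M))

A = rng.standard_normal((4, 5, 3))
B = rng.standard_normal((5, 6, 3))
M = np.linalg.qr(rng.standard_normal((3, 3)))[0]   # any invertible mixing matrix
C = m_product(A, B, M)                              # shape (4, 6, 3)
```

With M equal to the identity the M-product reduces to independent slice-wise matrix products; a non-trivial M mixes information across the third mode, which is what lets the dynamic GNN couple time steps.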

Multilabel Classification by Hierarchical Partitioning and Data-dependent Grouping

1 code implementation • NeurIPS 2020 • Shashanka Ubaru, Sanjeeb Dash, Arya Mazumdar, Oktay Gunluk

We then present a hierarchical partitioning approach that exploits the label hierarchy in large scale problems to divide up the large label space and create smaller sub-problems, which can then be solved independently via the grouping approach.

Classification · General Classification

Dynamic graph and polynomial chaos based models for contact tracing data analysis and optimal testing prescription

no code implementations • 10 Sep 2020 • Shashanka Ubaru, Lior Horesh, Guy Cohen

Thus, estimation of state uncertainty is paramount both for imminent risk assessment and for closing the tracing-testing loop by optimal testing prescription.

Uncertainty Quantification

Projection techniques to update the truncated SVD of evolving matrices

no code implementations • 13 Oct 2020 • Vassilis Kalantzis, Georgios Kollias, Shashanka Ubaru, Athanasios N. Nikolakopoulos, Lior Horesh, Kenneth L. Clarkson

This paper considers the problem of updating the rank-k truncated Singular Value Decomposition (SVD) of matrices subject to the addition of new rows and/or columns over time.

Recommendation Systems
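A basic projection-style update of this kind augments the current right singular basis with the residual directions of the new rows and re-diagonalizes a small core matrix. A sketch of one such variant (the paper develops and analyzes this family of updates; sizes here are illustrative):

```python
import numpy as np

rng = np.random.default_rng(7)

# Low-rank matrix plus small noise, so a rank-k truncation is accurate.
k = 5
A = rng.standard_normal((200, k)) @ rng.standard_normal((k, 80))
A += 0.01 * rng.standard_normal((200, 80))

U, s, Vt = np.linalg.svd(A, full_matrices=False)
Uk, sk, Vk = U[:, :k], s[:k], Vt[:k].T          # current rank-k truncated SVD

E = rng.standard_normal((10, 80))               # newly arriving rows

# Augment the right basis with the residual of E off span(Vk) ...
R = E - (E @ Vk) @ Vk.T
Q, _ = np.linalg.qr(R.T)
W = np.hstack([Vk, Q])                          # enlarged orthonormal basis

# ... and re-diagonalize the small projected matrix [A_k; E] @ W.
top = np.hstack([Uk * sk, np.zeros((200, Q.shape[1]))])
bot = np.hstack([E @ Vk, E @ Q])
Ub, sb, Wbt = np.linalg.svd(np.vstack([top, bot]), full_matrices=False)

s_new = sb[:k]                                  # updated singular values
V_new = W @ Wbt[:k].T                           # updated right factors
```

Only small matrices are decomposed in the update step, which is what makes this attractive when rows arrive over time.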

Quantum Topological Data Analysis with Linear Depth and Exponential Speedup

no code implementations • 5 Aug 2021 • Shashanka Ubaru, Ismail Yunus Akhalwaya, Mark S. Squillante, Kenneth L. Clarkson, Lior Horesh

In this paper, we completely overhaul the QTDA algorithm to achieve an improved exponential speedup and depth complexity of $O(n\log(1/(\delta\epsilon)))$.

Quantum Machine Learning · Topological Data Analysis

Efficient Scaling of Dynamic Graph Neural Networks

no code implementations • 16 Sep 2021 • Venkatesan T. Chakaravarthy, Shivmaran S. Pandian, Saurabh Raje, Yogish Sabharwal, Toyotaro Suzumura, Shashanka Ubaru

We present distributed algorithms for training dynamic Graph Neural Networks (GNN) on large scale graphs spanning multi-node, multi-GPU systems.

PCENet: High Dimensional Surrogate Modeling for Learning Uncertainty

no code implementations • 10 Feb 2022 • Paz Fink Shustin, Shashanka Ubaru, Vasileios Kalantzis, Lior Horesh, Haim Avron

In this paper, we present a novel surrogate model for representation learning and uncertainty quantification, which aims to deal with data of moderate to high dimensions.

Dimensionality Reduction · Representation Learning · +2

Topological data analysis on noisy quantum computers

no code implementations • 19 Sep 2022 • Ismail Yunus Akhalwaya, Shashanka Ubaru, Kenneth L. Clarkson, Mark S. Squillante, Vishnu Jejjala, Yang-Hui He, Kugendran Naidoo, Vasileios Kalantzis, Lior Horesh

In this study, we present NISQ-TDA, a fully implemented end-to-end quantum machine learning algorithm that requires only short circuit depth, is applicable to high-dimensional classical data, and offers a provable asymptotic speedup for certain classes of problems.

Quantum Machine Learning · Topological Data Analysis

Capacity Analysis of Vector Symbolic Architectures

no code implementations • 24 Jan 2023 • Kenneth L. Clarkson, Shashanka Ubaru, Elizabeth Yang

The ensemble of a particular vector space and a prescribed set of vector operations (including one addition-like for "bundling" and one outer-product-like for "binding") forms a *vector symbolic architecture* (VSA).

Dimensionality Reduction
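A concrete instantiation of these VSA operations is the bipolar model: hypervectors in {-1, +1}^d, binding as the elementwise product, and bundling as the sign of the sum. A small sketch of key-value encoding and retrieval in that model (this is one common VSA, not necessarily the specific one the paper's bounds target):

```python
import numpy as np

rng = np.random.default_rng(8)

d = 10000                                   # high-dimensional bipolar vectors
def hv():                                   # random {-1, +1} hypervector
    return rng.choice([-1, 1], size=d)

def bind(a, b):                             # binding: elementwise product
    return a * b

def bundle(*vs):                            # bundling: sign of sum (ties -> 0)
    return np.sign(np.sum(vs, axis=0))

def sim(a, b):                              # normalized similarity
    return (a @ b) / d

# Encode the record {color: red, shape: square} as a single hypervector.
color, red, shape, square = hv(), hv(), hv(), hv()
record = bundle(bind(color, red), bind(shape, square))

# Unbinding with the "color" key yields a vector close to "red" and nearly
# orthogonal to everything else stored in the record.
query = bind(record, color)
sim_red = sim(query, red)
sim_square = sim(query, square)
```

The gap between the two similarities is what the capacity analysis quantifies: how many bound pairs can be bundled before retrieval becomes unreliable.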
