Search Results for author: Mathukumalli Vidyasagar

Found 13 papers, 1 paper with code

A Tutorial Introduction to Reinforcement Learning

no code implementations • 3 Apr 2023 • Mathukumalli Vidyasagar

In this paper, we present a brief survey of Reinforcement Learning (RL), with particular emphasis on Stochastic Approximation (SA) as a unifying theme.

Tasks: Q-Learning, reinforcement-learning +1
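
The survey's unifying theme can be seen in miniature in tabular Q-learning, whose update is a stochastic-approximation iteration. Below is a minimal sketch on a hypothetical toy environment; the dynamics, exploration rate, and step size are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Minimal tabular Q-learning, viewed as a stochastic-approximation
# iteration: Q <- Q + alpha * (noisy Bellman residual).
rng = np.random.default_rng(0)
n_states, n_actions, gamma, alpha = 5, 2, 0.9, 0.1
Q = np.zeros((n_states, n_actions))

def step(s, a):
    # Hypothetical toy dynamics: random next state, reward 1 in state 0.
    s_next = rng.integers(n_states)
    return s_next, float(s_next == 0)

s = 0
for t in range(10_000):
    a = rng.integers(n_actions) if rng.random() < 0.1 else int(Q[s].argmax())
    s_next, r = step(s, a)
    td_target = r + gamma * Q[s_next].max()    # noisy Bellman backup
    Q[s, a] += alpha * (td_target - Q[s, a])   # SA-style correction
    s = s_next
```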

Convergence of Momentum-Based Heavy Ball Method with Batch Updating and/or Approximate Gradients

no code implementations • 28 Mar 2023 • Tadipatri Uday Kiran Reddy, Mathukumalli Vidyasagar

In this paper, we study the well-known "Heavy Ball" method for convex and nonconvex optimization, introduced by Polyak in 1964, and establish its convergence in a variety of settings.
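
Polyak's Heavy Ball iteration adds a momentum term β(x_k − x_{k−1}) to plain gradient descent. The sketch below runs it on a toy quadratic with a deliberately noisy ("approximate") gradient; the problem instance and the step/momentum constants are illustrative assumptions, not the paper's batch-updating scheme.

```python
import numpy as np

# Heavy Ball: x_{k+1} = x_k - alpha * grad(x_k) + beta * (x_k - x_{k-1}),
# here with an inexact gradient to mimic the "approximate gradient" setting.
rng = np.random.default_rng(1)
A = np.array([[3.0, 0.5], [0.5, 1.0]])    # illustrative convex quadratic
b = np.array([1.0, -2.0])

def grad(x, noise=0.0):
    return A @ x - b + noise * rng.standard_normal(x.shape)

alpha, beta = 0.1, 0.8                     # step size and momentum
x_prev = x = np.zeros(2)
for k in range(500):
    x_new = x - alpha * grad(x, noise=0.01) + beta * (x - x_prev)
    x_prev, x = x, x_new

print(x, np.linalg.solve(A, b))            # iterate vs. exact minimizer
```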

Estimating large causal polytrees from small samples

no code implementations • 15 Sep 2022 • Sourav Chatterjee, Mathukumalli Vidyasagar

We consider the problem of estimating a large causal polytree from a relatively small i.i.d. sample.
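
The paper's estimator is not reproduced here, but the flavor of skeleton recovery from pairwise statistics can be sketched with a generic Chow-Liu-style construction: connect variables by a maximum-weight spanning tree over absolute sample correlations. The data and sizes below are synthetic assumptions.

```python
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree

# Generic Chow-Liu-style skeleton recovery (illustrative, not the paper's
# estimator): maximum-weight spanning tree over |correlation| from a
# small i.i.d. sample of a tree-structured toy model.
rng = np.random.default_rng(2)
n, p = 200, 6
X0 = rng.standard_normal(n)
X = np.column_stack([X0] + [0.8 * X0 + 0.6 * rng.standard_normal(n)
                            for _ in range(p - 1)])   # star-shaped toy data

C = np.abs(np.corrcoef(X, rowvar=False))
np.fill_diagonal(C, 0.0)
skeleton = minimum_spanning_tree(-C)       # negate weights: max-weight tree
edges = np.array(skeleton.nonzero()).T
print(edges)                               # estimated undirected skeleton
```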

SUTRA: A Novel Approach to Modelling Pandemics with Applications to COVID-19

no code implementations • 22 Jan 2021 • Manindra Agrawal, Madhuri Kanitkar, Deepu Phillip, Tanima Hajra, Arti Singh, Avaneesh Singh, Prabal Pratap Singh, Mathukumalli Vidyasagar

The COVID-19 pandemic has two key properties: (i) asymptomatic cases (both detected and undetected) that can result in new infections, and (ii) time-varying characteristics due to new variants, non-pharmaceutical interventions, etc.
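
As a purely illustrative sketch (not the SUTRA model itself), a minimal compartmental system with an asymptomatic pool and a time-varying contact rate shows the two ingredients the abstract names; all rates and the intervention schedule below are made up.

```python
# Generic compartmental sketch (NOT SUTRA): susceptible S, asymptomatic A,
# detected infected I, removed R, with a time-varying contact rate beta(t).
def beta(t):
    return 0.3 if t < 100 else 0.15        # crude stand-in for an NPI

S, A, I, R = 0.99, 0.01, 0.0, 0.0
gamma, delta, dt = 0.1, 0.05, 0.1          # recovery and detection rates
for k in range(4000):
    t = k * dt
    new_inf = beta(t) * S * (A + I)        # asymptomatics also infect
    S += dt * (-new_inf)
    A += dt * (new_inf - (gamma + delta) * A)
    I += dt * (delta * A - gamma * I)
    R += dt * (gamma * (A + I))
print(R)                                    # final attack fraction
```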

Deterministic Completion of Rectangular Matrices Using Asymmetric Ramanujan Graphs: Exact and Stable Recovery

no code implementations • 2 Aug 2019 • Shantanu Prasad Burnwal, Mathukumalli Vidyasagar

By measuring a small number $m \ll n_r n_c$ of elements of $X$, is it possible to recover $X$ exactly with noise-free measurements, or to construct a good approximation of $X$ with noisy measurements?

Tasks: Matrix Completion
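
For intuition on the measurement model, here is a generic matrix-completion sketch: observe m ≪ n_r n_c entries of a low-rank matrix and fit a factorization to them by alternating least squares. The sampling below is uniform at random, not the paper's deterministic Ramanujan-graph pattern, and all sizes are illustrative.

```python
import numpy as np

# Illustrative matrix completion: observe a random subset of entries of a
# rank-1 matrix X, then fit a rank-1 factorization U @ V.T to the observed
# entries by alternating least squares.
rng = np.random.default_rng(3)
n_r, n_c, m = 40, 30, 400
X = np.outer(rng.standard_normal(n_r), rng.standard_normal(n_c))
mask = np.zeros(n_r * n_c, dtype=bool)
mask[:m] = True
mask = rng.permutation(mask).reshape(n_r, n_c)

U = rng.standard_normal((n_r, 1))
V = rng.standard_normal((n_c, 1))
for _ in range(100):
    for i in range(n_r):                   # refit row factors on observed cols
        cols = mask[i]
        U[i] = np.linalg.lstsq(V[cols], X[i, cols], rcond=None)[0]
    for j in range(n_c):                   # refit column factors likewise
        rows = mask[:, j]
        V[j] = np.linalg.lstsq(U[rows], X[rows, j], rcond=None)[0]
print(np.linalg.norm(U @ V.T - X) / np.linalg.norm(X))   # relative error
```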

Compressed Sensing Using Binary Matrices of Nearly Optimal Dimensions

1 code implementation • 9 Aug 2018 • Mahsa Lotfi, Mathukumalli Vidyasagar

Next we derive universal lower bounds on the number of measurements that any binary matrix must have in order to satisfy the weaker sufficient condition based on the robust null space property (RNSP), and show that bipartite graphs of girth six are optimal.
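
A minimal sketch of sparse recovery with a binary measurement matrix: basis pursuit, min ||x||_1 subject to Ax = y, written as a linear program in split variables x = u − v with u, v ≥ 0. The matrix below is random binary, an assumption for illustration, not the girth-six bipartite-graph construction the paper analyzes.

```python
import numpy as np
from scipy.optimize import linprog

# Basis pursuit with a binary measurement matrix, posed as an LP.
rng = np.random.default_rng(4)
n, m, k = 60, 30, 3
A = (rng.random((m, n)) < 0.2).astype(float)   # random binary matrix
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = [1.0, -2.0, 3.0]
y = A @ x_true

c = np.ones(2 * n)                         # minimize sum(u) + sum(v) = ||x||_1
A_eq = np.hstack([A, -A])                  # A(u - v) = y
res = linprog(c, A_eq=A_eq, b_eq=y, bounds=(0, None))
x_hat = res.x[:n] - res.x[n:]
print(np.linalg.norm(x_hat - x_true))      # ~0 on exact recovery
```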

An Approach to One-Bit Compressed Sensing Based on Probably Approximately Correct Learning Theory

no code implementations • 22 Oct 2017 • Mehmet Eren Ahsen, Mathukumalli Vidyasagar

By coupling this estimate with well-established results in PAC learning theory, we show that a consistent algorithm can recover a $k$-sparse vector with $O(k \lg (n/k))$ measurements, given only the signs of the measurement vector.

Tasks: Learning Theory +1
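
The one-bit measurement model keeps only sign(Ax). The sketch below pairs it with a simple back-projection decoder, a standard baseline rather than the paper's PAC-learning-based algorithm; the signal, matrix, and sparsity level are synthetic assumptions.

```python
import numpy as np

# One-bit compressed sensing: observe only the signs of <a_i, x>. A simple
# decoder correlates the sign vector back through A and keeps the k largest
# coefficients, recovering x up to scale for Gaussian A.
rng = np.random.default_rng(5)
n, m, k = 100, 500, 5
x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
x /= np.linalg.norm(x)

A = rng.standard_normal((m, n))
y = np.sign(A @ x)                         # one-bit measurements

proxy = A.T @ y / m                        # back-projection estimate
x_hat = np.zeros(n)
top = np.argsort(np.abs(proxy))[-k:]
x_hat[top] = proxy[top]
x_hat /= np.linalg.norm(x_hat)
print(abs(x @ x_hat))                      # correlation with the truth
```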

A Fast Noniterative Algorithm for Compressive Sensing Using Binary Measurement Matrices

no code implementations • 11 Aug 2017 • Mahsa Lotfi, Mathukumalli Vidyasagar

In this paper we present a new algorithm for compressive sensing that makes use of binary measurement matrices and achieves exact recovery of ultra-sparse vectors in a single pass, without any iterations.

Tasks: Compressive Sensing
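
As a toy illustration of single-pass, iteration-free decoding with a binary matrix (not the paper's algorithm), a "bit-test" matrix locates the support of a 1-sparse nonnegative vector directly from the measurement pattern:

```python
import numpy as np

# Noniterative decoding of a 1-sparse nonnegative vector: row j of the
# binary matrix tests whether bit j of the support index is set, and one
# extra row measures the total mass.
n = 64                                     # signal length (power of 2)
bits = int(np.log2(n))
A = np.array([[(i >> j) & 1 for i in range(n)] for j in range(bits)],
             dtype=float)                  # bit-test binary matrix
A = np.vstack([np.ones(n), A])             # first row measures the sum

x = np.zeros(n)
x[37] = 4.2                                # unknown 1-sparse vector
y = A @ x

value = y[0]                               # total mass = the single value
index = sum(int(y[1 + j] > 0) << j for j in range(bits))
print(index, value)                        # recovers (37, 4.2) in one pass
```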

Tight Performance Bounds for Compressed Sensing With Conventional and Group Sparsity

no code implementations • 19 Jun 2016 • Shashank Ranjan, Mathukumalli Vidyasagar

We introduce a new concept called group robust null space property (GRNSP), and show that, under suitable conditions, a group version of the restricted isometry property (GRIP) implies the GRNSP, and thus leads to group sparse recovery.
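
The paper's contribution is the GRNSP/GRIP analysis rather than an algorithm, but group-sparse recovery itself can be sketched with a standard proximal-gradient solver for the group LASSO objective 0.5‖Ax − y‖² + λ Σ_g ‖x_g‖₂; all problem data below are synthetic.

```python
import numpy as np

# Proximal gradient (ISTA) for the group LASSO: gradient step on the
# least-squares term, then group soft-thresholding.
rng = np.random.default_rng(6)
n, m, gsize = 60, 40, 5
groups = [list(range(i, i + gsize)) for i in range(0, n, gsize)]
x_true = np.zeros(n)
x_true[0:5] = 1.0                          # two active groups
x_true[30:35] = -2.0
A = rng.standard_normal((m, n)) / np.sqrt(m)
y = A @ x_true

lam, L = 0.05, np.linalg.norm(A, 2) ** 2   # step size 1/L from Lipschitz const
x = np.zeros(n)
for _ in range(2000):
    v = x - (A.T @ (A @ x - y)) / L
    for g in groups:                       # group soft-thresholding
        nrm = np.linalg.norm(v[g])
        v[g] = max(0.0, 1 - (lam / L) / nrm) * v[g] if nrm > 0 else 0.0
    x = v
print(np.linalg.norm(x - x_true))
```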

Two New Approaches to Compressed Sensing Exhibiting Both Robust Sparse Recovery and the Grouping Effect

no code implementations • 30 Oct 2014 • Mehmet Eren Ahsen, Niharika Challapalli, Mathukumalli Vidyasagar

It is shown here that SGL (sparse group LASSO) achieves robust sparse recovery, and also achieves a version of the grouping effect: coefficients of highly correlated columns belonging to the same group of the measurement (or design) matrix are assigned roughly comparable values.
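
A small numerical sketch of the grouping effect under a sparse-group-LASSO-type penalty λ(μ‖x‖₁ + (1−μ) Σ_g ‖x_g‖₂): two highly correlated columns placed in the same group should receive comparable coefficients. The solver and all parameters are illustrative, not the paper's exact formulation.

```python
import numpy as np

# Proximal gradient for an SGL-type penalty: soft-threshold (l1 part),
# then group shrinkage (l2-of-groups part).
rng = np.random.default_rng(7)
m = 100
c = rng.standard_normal(m)
A = np.column_stack([c, c + 0.05 * rng.standard_normal(m),
                     rng.standard_normal((m, 2))])   # cols 0,1 correlated
A /= np.linalg.norm(A, axis=0)
y = A[:, 0] + A[:, 1] + 0.01 * rng.standard_normal(m)

groups, lam, mu = [[0, 1], [2, 3]], 0.02, 0.5
L = np.linalg.norm(A, 2) ** 2
x = np.zeros(4)
for _ in range(5000):
    v = x - (A.T @ (A @ x - y)) / L
    v = np.sign(v) * np.maximum(np.abs(v) - lam * mu / L, 0)   # l1 prox
    for g in groups:                                           # group prox
        nrm = np.linalg.norm(v[g])
        v[g] = 0.0 if nrm == 0 else max(0.0, 1 - lam * (1 - mu) / L / nrm) * v[g]
    x = v
print(x)   # coefficients on the correlated pair should be comparable
```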

Machine Learning Methods in the Computational Biology of Cancer

no code implementations • 24 Feb 2014 • Mathukumalli Vidyasagar

As an illustration of the possibilities, a new algorithm for sparse regression is presented, and is applied to predict the time to tumor recurrence in ovarian cancer.

Tasks: BIG-bench Machine Learning, Classification +3
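
The paper's new sparse-regression algorithm is not reproduced here; as a generic stand-in, a plain LASSO fitted by ISTA selects a few predictive features from many, with synthetic data standing in for gene-expression covariates and recurrence times.

```python
import numpy as np

# Generic sparse regression (plain LASSO via ISTA) on synthetic data:
# many features, few samples, only a handful of true predictors.
rng = np.random.default_rng(8)
n_samples, n_features = 80, 500
X = rng.standard_normal((n_samples, n_features))
w_true = np.zeros(n_features)
w_true[:4] = [2.0, -1.5, 1.0, 0.5]
t = X @ w_true + 0.1 * rng.standard_normal(n_samples)  # continuous outcome

lam, L = 0.1, np.linalg.norm(X, 2) ** 2
w = np.zeros(n_features)
for _ in range(3000):
    v = w - (X.T @ (X @ w - t)) / L
    w = np.sign(v) * np.maximum(np.abs(v) - lam / L, 0)
print(np.flatnonzero(w)[:10])              # indices of selected features
```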

Near-Ideal Behavior of Compressed Sensing Algorithms

no code implementations • 26 Jan 2014 • Mehmet Eren Ahsen, Mathukumalli Vidyasagar

In a recent paper, it is shown that the LASSO algorithm exhibits "near-ideal behavior," in the following sense: Suppose $y = Az + \eta$ where $A$ satisfies the restricted isometry property (RIP) with a sufficiently small constant, and $\Vert \eta \Vert_2 \leq \epsilon$.
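
The "near-ideal" claim can be checked numerically: with $A$ a normalized Gaussian matrix (which satisfies the RIP with high probability) and $\Vert \eta \Vert_2 \leq \epsilon$, the LASSO estimate should miss the true $z$ by only a constant multiple of $\epsilon$. The sketch below uses plain ISTA with illustrative constants.

```python
import numpy as np

# Near-ideal behavior of LASSO: y = A z + eta with ||eta||_2 <= eps;
# the recovery error should be on the order of eps.
rng = np.random.default_rng(9)
n, m, k, eps = 200, 100, 5, 0.05
A = rng.standard_normal((m, n)) / np.sqrt(m)
z = np.zeros(n)
z[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
eta = rng.standard_normal(m)
eta *= eps / np.linalg.norm(eta)           # enforce ||eta||_2 = eps
y = A @ z + eta

lam, L = 0.02, np.linalg.norm(A, 2) ** 2
z_hat = np.zeros(n)
for _ in range(5000):                      # ISTA for the LASSO
    v = z_hat - (A.T @ (A @ z_hat - y)) / L
    z_hat = np.sign(v) * np.maximum(np.abs(v) - lam / L, 0)
print(np.linalg.norm(z_hat - z) / eps)     # error within a constant of eps
```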
