no code implementations • 3 Apr 2023 • Mathukumalli Vidyasagar
In this paper, we present a brief survey of Reinforcement Learning (RL), with particular emphasis on Stochastic Approximation (SA) as a unifying theme.
no code implementations • 28 Mar 2023 • Tadipatri Uday Kiran Reddy, Mathukumalli Vidyasagar
In this paper, we study the well-known "Heavy Ball" method for convex and nonconvex optimization introduced by Polyak in 1964, and establish its convergence in a variety of settings.
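The paper's convergence analysis is not reproduced here, but Polyak's Heavy Ball iteration itself is standard: a gradient step plus a momentum term. A minimal sketch on an assumed convex quadratic, with illustrative step size and momentum values (not values from the paper):

```python
import numpy as np

def heavy_ball(grad, x0, alpha=0.1, beta=0.9, iters=300):
    # Polyak's Heavy Ball (1964): gradient step plus momentum from the
    # previous iterate. alpha (step size) and beta (momentum) are
    # illustrative choices, not tuned constants from the paper.
    x_prev = x0.copy()
    x = x0.copy()
    for _ in range(iters):
        x_next = x - alpha * grad(x) + beta * (x - x_prev)
        x_prev, x = x, x_next
    return x

# Example: minimize the convex quadratic f(x) = 0.5 * x^T A x, minimizer 0.
A = np.diag([1.0, 10.0])
grad = lambda x: A @ x
x_star = heavy_ball(grad, np.array([5.0, 5.0]))
```

For this quadratic the iterates spiral into the minimizer; the momentum term is what distinguishes the method from plain gradient descent.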
no code implementations • 15 Sep 2022 • Sourav Chatterjee, Mathukumalli Vidyasagar
We consider the problem of estimating a large causal polytree from a relatively small i.i.d.
no code implementations • 22 Jan 2021 • Manindra Agrawal, Madhuri Kanitkar, Deepu Phillip, Tanima Hajra, Arti Singh, Avaneesh Singh, Prabal Pratap Singh, Mathukumalli Vidyasagar
The Covid-19 pandemic has two key properties: (i) asymptomatic cases (both detected and undetected) that can result in new infections, and (ii) time-varying characteristics due to new variants, Non-Pharmaceutical Interventions, etc.
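The paper's model is not reproduced here; a generic compartmental sketch with an explicit asymptomatic compartment illustrates ingredient (i) above. All parameter values (transmission rate, recovery rate, asymptomatic fraction) are assumptions for illustration:

```python
def simulate(days=160, N=1.0, beta=0.3, gamma=0.1, asym_frac=0.6):
    # S: susceptible, A: asymptomatic infectious, I: symptomatic infectious,
    # R: removed. Fractions of a normalized population N = 1. Parameters
    # are illustrative, not fitted values from the paper.
    S, A, I, R = N - 1e-4, 5e-5, 5e-5, 0.0
    history = []
    for _ in range(days):                       # forward-Euler step, dt = 1 day
        new_inf = beta * S * (A + I) / N        # asymptomatic cases also infect
        dS = -new_inf
        dA = asym_frac * new_inf - gamma * A
        dI = (1 - asym_frac) * new_inf - gamma * I
        dR = gamma * (A + I)
        S, A, I, R = S + dS, A + dA, I + dI, R + dR
        history.append((S, A, I, R))
    return history

hist = simulate()
```

Because the flows cancel exactly (dS + dA + dI + dR = 0), the population is conserved at every step; time-varying characteristics, ingredient (ii), would correspond to making `beta` a function of time.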
no code implementations • 8 Oct 2019 • Shantanu Prasad Burnwal, Kaneenika Sinha, Mathukumalli Vidyasagar
The objectives of this article are three-fold.
no code implementations • 2 Aug 2019 • Shantanu Prasad Burnwal, Mathukumalli Vidyasagar
By measuring a small number $m \ll n_r n_c$ of elements of $X$, is it possible to recover $X$ exactly with noise-free measurements, or to construct a good approximation of $X$ with noisy measurements?
1 code implementation • 9 Aug 2018 • Mahsa Lotfi, Mathukumalli Vidyasagar
Next we derive universal lower bounds on the number of measurements that any binary matrix needs to have in order to satisfy the weaker sufficient condition based on the RNSP, and show that bipartite graphs of girth six are optimal.
no code implementations • 22 Oct 2017 • Mehmet Eren Ahsen, Mathukumalli Vidyasagar
By coupling this estimate with well-established results in PAC learning theory, we show that a consistent algorithm can recover a $k$-sparse vector with $O(k \lg (n/k))$ measurements, given only the signs of the measurement vector.
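The one-bit measurement model described above can be sketched numerically. The simple correlation estimator below (hard-thresholding $A^T y$) is a standard illustration of recovery from sign-only measurements, not the paper's PAC-learning-based algorithm; the dimensions, seed, and signal are assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, k = 100, 2000, 3
x = np.zeros(n)
x[[3, 30, 70]] = 1.0                   # a k-sparse signal (support assumed)
A = rng.standard_normal((m, n))        # Gaussian measurement matrix
y = np.sign(A @ x)                     # only the SIGNS of Ax are observed

proxy = A.T @ y / m                    # correlates with x/||x|| in expectation
support_hat = set(np.argsort(np.abs(proxy))[-k:].tolist())
```

With $m$ comfortably above the $O(k \lg(n/k))$ scale, the $k$ largest entries of the proxy reliably land on the true support, even though all magnitude information in the measurements was discarded.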
no code implementations • 11 Aug 2017 • Mahsa Lotfi, Mathukumalli Vidyasagar
In this paper we present a new algorithm for compressive sensing that makes use of binary measurement matrices and achieves exact recovery of ultra sparse vectors, in a single pass and without any iterations.
no code implementations • 19 Jun 2016 • Shashank Ranjan, Mathukumalli Vidyasagar
We introduce a new concept called group robust null space property (GRNSP), and show that, under suitable conditions, a group version of the restricted isometry property (GRIP) implies the GRNSP, and thus leads to group sparse recovery.
no code implementations • 30 Oct 2014 • Mehmet Eren Ahsen, Niharika Challapalli, Mathukumalli Vidyasagar
It is shown here that SGL achieves robust sparse recovery, and also achieves a version of the grouping effect in that coefficients of highly correlated columns belonging to the same group of the measurement (or design) matrix are assigned roughly comparable values.
no code implementations • 24 Feb 2014 • Mathukumalli Vidyasagar
As an illustration of the possibilities, a new algorithm for sparse regression is presented, and is applied to predict the time to tumor recurrence in ovarian cancer.
no code implementations • 26 Jan 2014 • Mehmet Eren Ahsen, Mathukumalli Vidyasagar
In a recent paper, it is shown that the LASSO algorithm exhibits "near-ideal behavior," in the following sense: Suppose $y = Az + \eta$ where $A$ satisfies the restricted isometry property (RIP) with a sufficiently small constant, and $\Vert \eta \Vert_2 \leq \epsilon$.
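The setting above ($y = Az + \eta$, RIP matrix $A$, bounded noise) is easy to reproduce with a basic LASSO solver. The ISTA iteration below is a standard way to minimize $\tfrac12 \Vert y - Az \Vert_2^2 + \lambda \Vert z \Vert_1$, used here only to illustrate the recovery setting; the dimensions, $\lambda$, and noise level are assumptions, and this is not the paper's analysis:

```python
import numpy as np

def ista(A, y, lam=0.05, iters=1000):
    # Iterative soft-thresholding (ISTA) for the LASSO objective
    # 0.5*||y - A z||_2^2 + lam*||z||_1.
    L = np.linalg.norm(A, 2) ** 2              # Lipschitz constant of the gradient
    z = np.zeros(A.shape[1])
    for _ in range(iters):
        g = z + A.T @ (y - A @ z) / L          # gradient step
        z = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # soft threshold
    return z

rng = np.random.default_rng(0)
m, n = 80, 200
A = rng.standard_normal((m, n)) / np.sqrt(m)   # columns near unit norm (RIP-like)
z_true = np.zeros(n)
z_true[[5, 50, 150]] = [2.0, -1.5, 1.0]        # sparse ground truth (assumed)
y = A @ z_true + 0.01 * rng.standard_normal(m) # small noise eta
z_hat = ista(A, y)
```

"Near-ideal behavior" in this setting shows up as an estimate whose error is on the order of the noise level plus the $\lambda$-induced bias, far smaller than the signal itself.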