Topics in Random Matrices and Statistical Machine Learning

25 Jul 2018 · Sushma Kumari

This thesis consists of two independent parts: random matrices, which form the first third of the thesis, and statistical machine learning, which constitutes the remainder. The main results are: a necessary and sufficient condition for the inverse moments of $(m,n,\beta)$-Laguerre matrices and compound Wishart matrices to be finite; and the universal weak consistency and the strong consistency of the $k$-nearest neighbor rule in metrically $\sigma$-finite dimensional spaces and metrically finite dimensional spaces, respectively. In Part I, Chapter 1 introduces the $(m,n,\beta)$-Laguerre matrix and the Wishart and compound Wishart matrices, together with their joint eigenvalue distributions, and Chapter 2 derives the necessary and sufficient condition for their inverse moments to be finite. In Part II, Chapter 1 introduces various notions of metric dimension and the differentiation property, followed by our proof of the necessity part of Preiss' result. Chapter 2 gives an introduction to the mathematical concepts of statistical machine learning, and Chapter 3 presents the $k$-nearest neighbor rule together with a proof of Stone's theorem. Chapters 4 and 5 present our main results and some possible future directions based on them.
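To make the first result concrete: for a Wishart matrix $W = XX^T$, the question is when inverse moments such as $\mathbb{E}[\operatorname{Tr}(W^{-p})]$ are finite. The following Monte Carlo sketch (ours, not from the thesis; the function name `wishart_inverse_moment` and all parameter choices are illustrative assumptions) estimates such a moment in the real ($\beta = 1$) Wishart case. The reference value $m/(n-m-1)$ in the comment is the classical identity $\mathbb{E}[W^{-1}] = I/(n-m-1)$ for $n > m+1$; the general $(m,n,\beta)$-Laguerre and compound Wishart conditions are the subject of Part I, Chapter 2.

```python
import numpy as np

rng = np.random.default_rng(0)

def wishart_inverse_moment(m, n, trials=2000, power=1):
    # Monte Carlo estimate of E[Tr(W^{-power})] for W = X X^T,
    # where X is m x n with i.i.d. N(0, 1) entries (beta = 1 case).
    total = 0.0
    for _ in range(trials):
        X = rng.standard_normal((m, n))
        W = X @ X.T
        eigvals = np.linalg.eigvalsh(W)  # positive a.s. when n >= m
        total += np.sum(eigvals ** (-float(power)))
    return total / trials

# n much larger than m: the smallest eigenvalue of W stays away from 0,
# and the estimate stabilizes near m / (n - m - 1) = 5/44 here.
print(wishart_inverse_moment(m=5, n=50))
# n close to m: the first inverse moment is infinite in this regime, and
# the running average is dominated by rare, very small eigenvalues, so
# the printed value is erratic rather than convergent.
print(wishart_inverse_moment(m=5, n=6))
```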
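The $k$-nearest neighbor rule of Part II also admits a short sketch. Below is a minimal Euclidean-space implementation for illustration only: the thesis works in general metric spaces, where distance ties must be broken with care, and this sketch simply breaks ties by index order. The helper name `knn_classify` and the toy data are our assumptions, not the thesis's.

```python
import numpy as np
from collections import Counter

def knn_classify(X_train, y_train, x, k):
    # Euclidean distances from the query point x to every training point.
    dists = np.linalg.norm(X_train - x, axis=1)
    # Indices of the k closest training points (ties broken by index order,
    # a simplification of the tie-breaking the metric-space setting needs).
    nearest = np.argsort(dists)[:k]
    # Predict the majority label among the k nearest neighbors.
    votes = Counter(y_train[i] for i in nearest)
    return votes.most_common(1)[0][0]

# Toy usage: two Gaussian blobs in the plane.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(3, 1, (50, 2))])
y = np.array([0] * 50 + [1] * 50)
print(knn_classify(X, y, np.array([2.5, 2.5]), k=5))  # expected: 1
```

Stone's theorem, proved in Chapter 3 of Part II for $\mathbb{R}^d$, guarantees universal consistency of this rule when $k = k_n \to \infty$ and $k_n/n \to 0$ as the sample size $n \to \infty$.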
