1 code implementation • 17 Feb 2022 • Tudor Manole, Nhat Ho
These new loss functions accurately capture the heterogeneity in convergence rates of fitted mixture components, and we use them to sharpen existing pointwise and uniform convergence rates in various classes of mixture models.
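As a rough illustration (my own sketch, not the paper's refined losses): the baseline that such losses sharpen is a Wasserstein distance between mixing measures, which in one dimension can be computed directly. The atoms and weights below are hypothetical example data.

```python
import numpy as np
from scipy.stats import wasserstein_distance

# Hypothetical mixing measures on the real line: each is a set of
# component locations (atoms) together with mixing weights.
true_atoms = np.array([0.0, 3.0])          # true component means
true_weights = np.array([0.5, 0.5])        # true mixing proportions

fitted_atoms = np.array([0.1, 2.8, 3.3])   # an overfitted 3-atom estimate
fitted_weights = np.array([0.5, 0.3, 0.2])

# First-order Wasserstein distance between the two mixing measures:
# the classical loss that refined, componentwise losses improve upon.
w1 = wasserstein_distance(true_atoms, fitted_atoms,
                          u_weights=true_weights, v_weights=fitted_weights)
print(f"W1 between mixing measures: {w1:.4f}")
```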
1 code implementation • 26 Jul 2021 • Tudor Manole, Sivaraman Balakrishnan, Jonathan Niles-Weed, Larry Wasserman
Our work also provides new bounds on the risk of the corresponding plugin estimators of the quadratic Wasserstein distance, and we show, via stability arguments for smooth and strongly convex Brenier potentials, how this problem relates to the estimation of optimal transport maps.
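For context, a minimal sketch of a plugin estimator in the one-dimensional case, where the empirical quadratic Wasserstein distance between equal-size samples reduces to a quantile coupling of sorted observations (the paper treats the general multivariate setting):

```python
import numpy as np

def plugin_w2_squared(x, y):
    """Plugin estimate of the squared 2-Wasserstein distance in 1-D:
    couple the empirical quantiles by sorting both samples."""
    assert len(x) == len(y)  # equal sample sizes, for simplicity
    x, y = np.sort(x), np.sort(y)
    return np.mean((x - y) ** 2)

rng = np.random.default_rng(0)
x = rng.normal(0.0, 1.0, size=5000)
y = rng.normal(1.0, 1.0, size=5000)
# True W2^2 between N(0,1) and N(1,1) is (1 - 0)^2 = 1.
print(plugin_w2_squared(x, y))
```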
1 code implementation • 16 Mar 2021 • Tudor Manole, Aaditya Ramdas
We present a unified technique for sequential estimation of convex divergences between distributions, including integral probability metrics like the kernel maximum mean discrepancy, $\varphi$-divergences like the Kullback-Leibler divergence, and optimal transport costs, such as powers of Wasserstein distances.
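A small batch sketch of one divergence in this family, the squared kernel maximum mean discrepancy with a Gaussian kernel. The paper's contribution is the sequential (anytime-valid) treatment; the plain empirical estimator below is only the underlying batch quantity, with an arbitrary bandwidth.

```python
import numpy as np

def mmd_squared(x, y, bandwidth=1.0):
    """Biased (V-statistic) estimate of squared kernel MMD with the
    Gaussian kernel k(a, b) = exp(-(a - b)^2 / (2 * bandwidth^2))."""
    def gram(a, b):
        d2 = (a[:, None] - b[None, :]) ** 2
        return np.exp(-d2 / (2 * bandwidth ** 2))
    return gram(x, x).mean() + gram(y, y).mean() - 2 * gram(x, y).mean()

rng = np.random.default_rng(1)
x = rng.normal(0.0, 1.0, 500)
y = rng.normal(0.5, 1.0, 500)
print(mmd_squared(x, y))
```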
1 code implementation • 1 Jun 2020 • Tudor Manole, Nhat Ho
We derive uniform convergence rates for the maximum likelihood estimator and minimax lower bounds for parameter estimation in two-component location-scale Gaussian mixture models with unequal variances.
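A quick sketch of the estimation setting using scikit-learn's EM-based maximum likelihood fit; the component means, variances, and mixing weight below are illustrative choices, not values from the paper.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(2)
# Two-component location-scale Gaussian mixture with unequal variances.
n = 2000
labels = rng.random(n) < 0.3
x = np.where(labels,
             rng.normal(0.0, 1.0, n),    # component 1: N(0, 1)
             rng.normal(2.0, 0.25, n))   # component 2: N(2, 0.25^2)

# Maximum likelihood via EM; leaving the covariances unconstrained
# matches the unequal-variance setting studied in the paper.
gmm = GaussianMixture(n_components=2, covariance_type="full",
                      n_init=5, random_state=0).fit(x.reshape(-1, 1))
print(gmm.means_.ravel(), gmm.covariances_.ravel(), gmm.weights_)
```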
1 code implementation • 24 May 2020 • Tudor Manole, Abbas Khalili
Estimation of the number of components (or order) of a finite mixture model is a long-standing and challenging problem in statistics.
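For context, a standard BIC sweep over candidate orders, a common baseline for this problem; the paper itself develops a different, penalized estimation approach rather than an information criterion.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(3)
# Sample from a 3-component mixture; the order is unknown to the fit.
x = np.concatenate([rng.normal(-3, 1, 300),
                    rng.normal(0, 1, 400),
                    rng.normal(4, 1, 300)]).reshape(-1, 1)

# Fit each candidate order and keep the one minimizing the BIC.
bics = {k: GaussianMixture(n_components=k, n_init=3,
                           random_state=0).fit(x).bic(x)
        for k in range(1, 7)}
print("selected order:", min(bics, key=bics.get))
```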
2 code implementations • 17 Sep 2019 • Tudor Manole, Sivaraman Balakrishnan, Larry Wasserman
To motivate the choice of these classes, we also study minimax rates for estimating a distribution under the Sliced Wasserstein distance.
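A minimal Monte Carlo sketch of the Sliced Wasserstein distance itself: project both samples onto random directions and average the resulting one-dimensional Wasserstein distances. The projection budget of 200 is an arbitrary choice.

```python
import numpy as np
from scipy.stats import wasserstein_distance

def sliced_wasserstein(x, y, n_projections=200, seed=0):
    """Monte Carlo estimate of the first-order Sliced Wasserstein
    distance: average 1-D Wasserstein distance over random directions."""
    rng = np.random.default_rng(seed)
    dirs = rng.normal(size=(n_projections, x.shape[1]))
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)  # uniform on sphere
    return np.mean([wasserstein_distance(x @ u, y @ u) for u in dirs])

rng = np.random.default_rng(4)
x = rng.normal(0.0, 1.0, size=(1000, 5))
y = rng.normal(0.3, 1.0, size=(1000, 5))
print(sliced_wasserstein(x, y))
```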