Search Results for author: Rishi Sonthalia

Found 13 papers, 6 papers with code

Unsupervised Metric Learning in Presence of Missing Data

3 code implementations · 19 Jul 2018 · Anna C. Gilbert, Rishi Sonthalia

Here, we present a new algorithm, MR-MISSING, that extends these previous algorithms and can be used to compute low-dimensional representations of data sets with missing entries.

Dimensionality Reduction · Matrix Completion · +1

Project and Forget: Solving Large Scale Metric Constrained Problems

no code implementations · 25 Sep 2019 · Anna C. Gilbert, Rishi Sonthalia

Given a set of distances amongst points, determining what metric representation is most “consistent” with the input distances or the metric that captures the relevant geometric features of the data is a key step in many machine learning algorithms.

Metric Learning

Tree! I am no Tree! I am a Low Dimensional Hyperbolic Embedding

3 code implementations · NeurIPS 2020 · Rishi Sonthalia, Anna C. Gilbert

Given data, finding a faithful low-dimensional hyperbolic embedding of the data is a key method by which we can extract hierarchical information or learn representative geometric features of the data.
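Hyperbolic embeddings measure faithfulness with hyperbolic, not Euclidean, distances. As a minimal illustration (standard textbook formula, not this paper's embedding algorithm), the geodesic distance in the Poincaré ball model is:

```python
import numpy as np

def poincare_dist(u, v):
    # Geodesic distance in the Poincare ball model of hyperbolic space.
    # Assumes ||u|| < 1 and ||v|| < 1 (points inside the unit ball).
    diff = u - v
    x = 1 + 2 * (diff @ diff) / ((1 - u @ u) * (1 - v @ v))
    return np.arccosh(x)
```

Distances blow up near the boundary of the ball, which is what lets trees (whose node counts grow exponentially with depth) embed with low distortion.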

Project and Forget: Solving Large-Scale Metric Constrained Problems

1 code implementation · 8 May 2020 · Rishi Sonthalia, Anna C. Gilbert

Given a set of dissimilarity measurements amongst data points, determining what metric representation is most "consistent" with the input measurements or the metric that best captures the relevant geometric features of the data is a key step in many machine learning algorithms.

Clustering · Metric Learning

How can classical multidimensional scaling go wrong?

1 code implementation · NeurIPS 2021 · Rishi Sonthalia, Gregory Van Buskirk, Benjamin Raichel, Anna C. Gilbert

While $D_l$ is not a metric, when given as input to cMDS in place of $D$, it empirically yields solutions whose distance to $D$ does not increase as the embedding dimension grows, and whose classification accuracy degrades less than that of the solution cMDS computes from $D$.
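For reference, classical multidimensional scaling (cMDS) itself is a short algorithm: double-center the squared dissimilarities and keep the top eigenpairs. A minimal textbook sketch (this is plain cMDS, not the paper's modified input $D_l$):

```python
import numpy as np

def classical_mds(D, k):
    """Embed an n x n dissimilarity matrix D into R^k via classical MDS."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n   # centering matrix
    B = -0.5 * J @ (D ** 2) @ J           # double-centered Gram matrix
    w, V = np.linalg.eigh(B)              # eigenvalues in ascending order
    idx = np.argsort(w)[::-1][:k]         # top-k eigenpairs
    w_top = np.clip(w[idx], 0.0, None)    # cMDS discards negative eigenvalues
    return V[:, idx] * np.sqrt(w_top)
```

The clipping step is exactly where things can go wrong: if $D$ is far from Euclidean, $B$ has large negative eigenvalues, and silently discarding them distorts the embedding.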

Under-Parameterized Double Descent for Ridge Regularized Least Squares Denoising of Data on a Line

no code implementations · 24 May 2023 · Rishi Sonthalia, Xinyue Li, Bochao Gu

For larger values of $\mu$, we observe that the curve for the norm of the estimator has a peak but that this no longer translates to a peak in the generalization error.

Denoising

Spectral Neural Networks: Approximation Theory and Optimization Landscape

no code implementations · 1 Oct 2023 · Chenghui Li, Rishi Sonthalia, Nicolas Garcia Trillos

There is a large variety of machine learning methodologies that are based on the extraction of spectral geometric information from data.

Near-Interpolators: Rapid Norm Growth and the Trade-Off between Interpolation and Generalization

1 code implementation · 12 Mar 2024 · Yutong Wang, Rishi Sonthalia, Wei Hu

Under a random matrix theoretic assumption on the data distribution and an eigendecay assumption on the data covariance matrix $\boldsymbol{\Sigma}$, we demonstrate that any near-interpolator exhibits rapid norm growth: for $\tau$ fixed, $\boldsymbol{\beta}$ has squared $\ell_2$-norm $\mathbb{E}[\|{\boldsymbol{\beta}}\|_{2}^{2}] = \Omega(n^{\alpha})$, where $n$ is the number of samples and $\alpha > 1$ is the exponent of the eigendecay, i.e., $\lambda_i(\boldsymbol{\Sigma}) \sim i^{-\alpha}$.
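The trade-off in the abstract is easy to see numerically with ridge regression: shrinking the regularization pushes the training error toward zero while the squared norm of the estimator grows. A minimal sketch under assumed parameters (an overparameterized Gaussian design with covariance eigenvalues $\lambda_i = i^{-\alpha}$; this is illustrative only, not the paper's experimental setup):

```python
import numpy as np

def ridge(X, y, lam):
    # Ridge estimator: beta = (X^T X + lam * I)^{-1} X^T y
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

rng = np.random.default_rng(0)
n, d, alpha = 100, 300, 2.0
eigs = np.arange(1, d + 1) ** -alpha             # lambda_i(Sigma) = i^{-alpha}
X = rng.standard_normal((n, d)) * np.sqrt(eigs)  # rows have covariance diag(eigs)
y = X @ rng.standard_normal(d) + 0.1 * rng.standard_normal(n)

for lam in (1.0, 1e-2, 1e-4):
    beta = ridge(X, y, lam)
    print(f"lam={lam:g}  train MSE={np.mean((X @ beta - y) ** 2):.4f}  "
          f"||beta||^2={beta @ beta:.2f}")
```

As `lam` decreases, the fit approaches an interpolator (training MSE near zero) and `||beta||^2` increases; both monotonicities are standard properties of the ridge path.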
