Search Results for author: Simon Omlor

Found 6 papers, 3 papers with code

Turnstile $\ell_p$ leverage score sampling with applications

no code implementations • 1 Jun 2024 • Alexander Munteanu, Simon Omlor

When combined with preconditioning techniques, our algorithm extends to $\ell_p$ leverage score sampling over turnstile data streams.
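
The paper's algorithm operates over turnstile streams; as a plain point of reference, here is a minimal offline sketch of $\ell_2$ leverage score sampling (squared row norms of an orthonormal basis of the column span), the special case $p=2$ that the preconditioning approach generalizes. Function names are illustrative, not taken from the paper's code.

    import numpy as np

    def l2_leverage_scores(A):
        # Leverage scores are the squared row norms of an orthonormal
        # basis Q of the column span of A (from A = QR).
        Q, _ = np.linalg.qr(A)
        return np.sum(Q * Q, axis=1)

    def leverage_score_sample(A, k, seed=None):
        # Sample k rows with probability proportional to their leverage
        # scores and reweight so that E[(SA)^T (SA)] = A^T A.
        rng = np.random.default_rng(seed)
        scores = l2_leverage_scores(A)
        probs = scores / scores.sum()
        idx = rng.choice(len(A), size=k, replace=True, p=probs)
        return A[idx] / np.sqrt(k * probs[idx])[:, None]

    A = np.random.default_rng(0).standard_normal((1000, 5))
    SA = leverage_score_sample(A, 100, seed=1)  # 1000 rows down to 100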

Optimal bounds for $\ell_p$ sensitivity sampling via $\ell_2$ augmentation

no code implementations • 1 Jun 2024 • Alexander Munteanu, Simon Omlor

As an application of our main result, we also obtain an $\tilde O(\varepsilon^{-2}\mu d)$ sensitivity sampling bound for logistic regression, where $\mu$ is a natural complexity measure for this problem.
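
As a rough illustration of sensitivity sampling (each point is sampled with probability proportional to an upper bound on its worst-case relative contribution to the loss), here is a toy version for logistic regression that uses $\ell_2$ leverage scores plus a uniform term as the score. This mimics the flavor of the $\ell_2$ augmentation, but it is not the paper's bound.

    import numpy as np

    def augmented_scores(X):
        # Toy sampling scores: l2 leverage scores plus a uniform
        # additive term. Illustrative stand-in for the sensitivity
        # upper bounds derived in the paper.
        Q, _ = np.linalg.qr(X)
        return np.sum(Q * Q, axis=1) + 1.0 / len(X)

    def sensitivity_sample(X, y, k, seed=None):
        # Draw k points proportional to their scores and attach
        # inverse-probability weights, so the weighted subsample is an
        # unbiased estimator of the full loss for every parameter vector.
        rng = np.random.default_rng(seed)
        s = augmented_scores(X)
        p = s / s.sum()
        idx = rng.choice(len(X), size=k, replace=True, p=p)
        return X[idx], y[idx], 1.0 / (k * p[idx])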

Almost Linear Constant-Factor Sketching for $\ell_1$ and Logistic Regression

1 code implementation • 31 Mar 2023 • Alexander Munteanu, Simon Omlor, David Woodruff

We improve upon previous oblivious sketching and turnstile streaming results for $\ell_1$ and logistic regression, giving a much smaller sketching dimension that achieves an $O(1)$-approximation and yields an efficient optimization problem in the sketch space.

Task: regression
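
An oblivious sketch is a random matrix $S$ drawn independently of the data and applied as $SA$. The paper's construction is more structured; below is a minimal CountSketch-style sketch (one random bucket and sign per row), which already runs in input sparsity time, to show the basic mechanics.

    import numpy as np

    def countsketch(A, m, seed=None):
        # Hash each row of A to one of m buckets with a random sign;
        # S is never materialized. One pass, O(nnz(A)) time.
        rng = np.random.default_rng(seed)
        n, d = A.shape
        buckets = rng.integers(0, m, size=n)
        signs = rng.choice([-1.0, 1.0], size=n)
        SA = np.zeros((m, d))
        np.add.at(SA, buckets, signs[:, None] * A)
        return SA

    A = np.random.default_rng(0).standard_normal((10000, 8))
    SA = countsketch(A, 256, seed=1)  # 10000 rows compressed to 256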

Bounding the Width of Neural Networks via Coupled Initialization -- A Worst Case Analysis

no code implementations • 26 Jun 2022 • Alexander Munteanu, Simon Omlor, Zhao Song, David P. Woodruff

A common method in training neural networks is to initialize all the weights to be independent Gaussian vectors.
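
The paper analyzes a "coupled" alternative. As I read it, the coupling duplicates each Gaussian weight vector with an opposite-sign output weight, so the network output is exactly zero at initialization; the sketch below illustrates that scheme for a two-layer ReLU network and is an assumed reading of the construction, not a verbatim reproduction.

    import numpy as np

    def coupled_init(m, d, rng):
        # Hidden weights come in identical pairs with opposite output
        # signs, so their contributions cancel and the network outputs
        # exactly zero at initialization (assumed reading of the paper).
        assert m % 2 == 0
        W_half = rng.standard_normal((m // 2, d))
        W = np.concatenate([W_half, W_half])
        a = np.concatenate([np.ones(m // 2), -np.ones(m // 2)])
        return W, a

    def relu_net(x, W, a):
        # Two-layer ReLU network with fixed output weights a.
        return a @ np.maximum(W @ x, 0.0)

    rng = np.random.default_rng(0)
    W, a = coupled_init(64, 10, rng)
    print(relu_net(rng.standard_normal(10), W, a))  # exactly 0.0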

$p$-Generalized Probit Regression and Scalable Maximum Likelihood Estimation via Sketching and Coresets

1 code implementation • 25 Mar 2022 • Alexander Munteanu, Simon Omlor, Christian Peters

We study the $p$-generalized probit regression model, which is a generalized linear model for binary responses.

Task: regression
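
The link function is the CDF of the $p$-generalized normal distribution, which has density proportional to $\exp(-|t|^p/p)$ and recovers the standard probit model at $p=2$. Assuming that standard parametrization, the CDF can be expressed through the regularized lower incomplete gamma function, giving a short negative log-likelihood sketch:

    import numpy as np
    from scipy.special import gammainc  # regularized lower incomplete gamma

    def pgen_normal_cdf(x, p):
        # CDF of the p-generalized normal with density proportional to
        # exp(-|t|^p / p); p = 2 yields the standard normal CDF, i.e.
        # the ordinary probit link.
        x = np.asarray(x, dtype=float)
        return 0.5 * (1.0 + np.sign(x) * gammainc(1.0 / p, np.abs(x) ** p / p))

    def neg_log_likelihood(beta, X, y, p):
        # Probit-style negative log-likelihood for labels y in {0, 1}.
        mu = pgen_normal_cdf(X @ beta, p)
        eps = 1e-12  # guard against log(0)
        return -np.sum(y * np.log(mu + eps) + (1 - y) * np.log(1 - mu + eps))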

Oblivious sketching for logistic regression

1 code implementation • 14 Jul 2021 • Alexander Munteanu, Simon Omlor, David Woodruff

Our sketch can be computed in input sparsity time over a turnstile data stream and reduces the size of a $d$-dimensional data set from $n$ to only $\operatorname{poly}(\mu d\log n)$ weighted points, where $\mu$ is a useful parameter which captures the complexity of compressing the data.

Task: regression
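
In the turnstile model the input arrives as arbitrary additive updates $(i, j, \Delta)$ to the entries of the data matrix, and a linear sketch can absorb each update in $O(1)$ time. A toy linear sketch maintaining $SA$ under such updates is shown below; the paper's sketch additionally yields the $\operatorname{poly}(\mu d\log n)$ weighted points, which this illustration does not attempt.

    import numpy as np

    class TurnstileSketch:
        # Maintains SA under updates A[i, j] += delta without storing A,
        # using a CountSketch-style S (one bucket and sign per row index).
        def __init__(self, n, d, m, seed=0):
            rng = np.random.default_rng(seed)
            self.bucket = rng.integers(0, m, size=n)
            self.sign = rng.choice([-1.0, 1.0], size=n)
            self.SA = np.zeros((m, d))

        def update(self, i, j, delta):
            # By linearity: S(A + delta * e_i e_j^T) = SA + delta * (S e_i) e_j^T.
            self.SA[self.bucket[i], j] += self.sign[i] * delta

    sk = TurnstileSketch(n=10**6, d=20, m=512)
    sk.update(123456, 3, 1.0)    # arbitrary insertions ...
    sk.update(123456, 3, -0.25)  # ... and deletions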
