Search Results for author: Kenneth L. Clarkson

Found 11 papers, 1 paper with code

Quantum Topological Data Analysis with Linear Depth and Exponential Speedup

no code implementations • 5 Aug 2021 • Shashanka Ubaru, Ismail Yunus Akhalwaya, Mark S. Squillante, Kenneth L. Clarkson, Lior Horesh

In this paper, we completely overhaul the QTDA algorithm to achieve an improved exponential speedup and depth complexity of $O(n\log(1/(\delta\epsilon)))$.

Quantum Machine Learning • Topological Data Analysis
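
The quantity QTDA estimates is a (normalized) Betti number of a simplicial complex. As a purely classical point of reference, not the quantum algorithm itself, here is a minimal Betti-number computation from boundary-matrix ranks on a toy complex (a filled triangle):

```python
import numpy as np

# Toy simplicial complex: vertices {0, 1, 2}, edges {01, 02, 12},
# and one 2-simplex {012} filling the triangle.
d1 = np.array([[-1, -1,  0],    # vertex 0
               [ 1,  0, -1],    # vertex 1
               [ 0,  1,  1]])   # vertex 2; columns are edges 01, 02, 12
d2 = np.array([[ 1],            # edge 01
               [-1],            # edge 02
               [ 1]])           # edge 12; column is the face 012

rank_d1 = np.linalg.matrix_rank(d1)
rank_d2 = np.linalg.matrix_rank(d2)
betti_0 = d1.shape[0] - rank_d1               # connected components
betti_1 = (d1.shape[1] - rank_d1) - rank_d2   # independent 1-cycles
print(betti_0, betti_1)  # 1 0: one component, no hole (face is filled)
```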

Near-Optimal Algorithms for Linear Algebra in the Current Matrix Multiplication Time

no code implementations • 16 Jul 2021 • Nadiia Chepurko, Kenneth L. Clarkson, Praneeth Kacham, David P. Woodruff

Currently, in the numerical linear algebra community, it is thought that obtaining nearly-optimal bounds for various problems (rank computation, finding a maximal linearly independent subset of columns, regression, low-rank approximation, maximum matching on general graphs, and linear matroid union) would require resolving the main open question of Nelson and Nguyen (FOCS, 2013) regarding the logarithmic factors in the sketching dimension of existing constant-factor approximation oblivious subspace embeddings.
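
For context, an oblivious subspace embedding is a random matrix $S$, drawn without looking at $A$, such that $\|SAx\|_2 \approx \|Ax\|_2$ for all $x$; the sketching dimension is the number of rows of $S$. A minimal sketch-and-solve illustration with a dense Gaussian embedding (dimensions illustrative only, not the paper's construction):

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 10_000, 20
A = rng.standard_normal((n, d))
b = A @ rng.standard_normal(d) + 0.1 * rng.standard_normal(n)

# Dense Gaussian oblivious subspace embedding: S is drawn independently
# of A; m >> d rows give a constant-factor approximation.
m = 400
S = rng.standard_normal((m, n)) / np.sqrt(m)

x_exact = np.linalg.lstsq(A, b, rcond=None)[0]
x_sketch = np.linalg.lstsq(S @ A, S @ b, rcond=None)[0]

# Residual ratio is close to 1: the sketched solution is near-optimal.
print(np.linalg.norm(A @ x_sketch - b) / np.linalg.norm(A @ x_exact - b))
```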

Order Embeddings from Merged Ontologies using Sketching

no code implementations • 6 Jan 2021 • Kenneth L. Clarkson, Sanjana Sahayaraj

We give a simple, low-resource method to produce order embeddings from ontologies.

Dimensionality Reduction
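
The paper's sketching construction is not reproduced here, but the order-embedding property itself is easy to state: in an order embedding in the style of Vendrov et al., "x is-a y" corresponds to a coordinatewise inequality between vectors. A toy sketch with hypothetical hand-set vectors:

```python
import numpy as np

# Hypothetical vectors for a tiny ontology; more general concepts sit
# nearer the origin, more specific ones dominate them coordinatewise.
emb = {
    "entity": np.array([0.0, 0.0]),
    "animal": np.array([1.0, 0.5]),
    "dog":    np.array([2.0, 1.5]),
    "plant":  np.array([0.5, 2.0]),
}

def is_a(child: str, parent: str) -> bool:
    # "child is-a parent" holds when the parent's vector is coordinatewise
    # below the child's vector.
    return bool(np.all(emb[parent] <= emb[child]))

print(is_a("dog", "animal"))  # True
print(is_a("dog", "plant"))   # False
print(is_a("dog", "entity"))  # True: everything is-a entity
```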

Quantum-Inspired Algorithms from Randomized Numerical Linear Algebra

no code implementations • 9 Nov 2020 • Nadiia Chepurko, Kenneth L. Clarkson, Lior Horesh, David P. Woodruff

We create classical (non-quantum) dynamic data structures supporting queries for recommender systems and least-squares regression that are comparable to their quantum analogues.

Recommendation Systems
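
Such data structures typically assume "sample and query" access: the ability to draw row $i$ with probability proportional to $\|A_i\|_2^2$. A minimal sketch of length-squared sampling and the matrix-product estimate it enables (a flat probability table stands in for the paper's dynamic structures):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((1000, 50))

# Row sampling probabilities p_i = ||A_i||^2 / ||A||_F^2.
# (A tree-structured version supports dynamic updates in log time.)
row_norms_sq = (A * A).sum(axis=1)
probs = row_norms_sq / row_norms_sq.sum()

# Length-squared sampling yields an unbiased estimate of A^T A, a core
# primitive behind quantum-inspired regression and recommendation.
s = 200
idx = rng.choice(A.shape[0], size=s, p=probs)
est = sum(np.outer(A[i], A[i]) / (s * probs[i]) for i in idx)
print(np.linalg.norm(est - A.T @ A) / np.linalg.norm(A.T @ A))
```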

Projection techniques to update the truncated SVD of evolving matrices

no code implementations • 13 Oct 2020 • Vassilis Kalantzis, Georgios Kollias, Shashanka Ubaru, Athanasios N. Nikolakopoulos, Lior Horesh, Kenneth L. Clarkson

This paper considers the problem of updating the rank-k truncated Singular Value Decomposition (SVD) of matrices subject to the addition of new rows and/or columns over time.

Recommendation Systems
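
A simplified sketch of the projection idea under stated assumptions (append the span of the new rows to the old right singular subspace, project, then take a small dense SVD); the paper develops more careful projection subspaces and their analysis:

```python
import numpy as np

rng = np.random.default_rng(2)
k, n, d = 10, 500, 80
A = (rng.standard_normal((n, k)) @ rng.standard_normal((k, d))
     + 0.01 * rng.standard_normal((n, d)))        # approximately rank-k
B = rng.standard_normal((5, k)) @ rng.standard_normal((k, d))  # new rows

# Current rank-k truncated SVD.
_, _, Vt = np.linalg.svd(A, full_matrices=False)
Vk = Vt[:k].T

# Project [A; B] onto span(old right singular vectors, new rows), then
# take a small dense SVD of the projected matrix.
Q, _ = np.linalg.qr(np.hstack([Vk, B.T]))   # d x (k + 5) orthonormal basis
M = np.vstack([A, B]) @ Q
Uh, sh, Vht = np.linalg.svd(M, full_matrices=False)
sk_new, Vk_new = sh[:k], Q @ Vht[:k].T      # updated truncated factors

exact = np.linalg.svd(np.vstack([A, B]), compute_uv=False)[:k]
print(np.abs(sk_new - exact).max())         # small for near-low-rank data
```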

Dimensionality Reduction for Tukey Regression

no code implementations • 14 May 2019 • Kenneth L. Clarkson, Ruosong Wang, David P. Woodruff

We give the first dimensionality reduction methods for the overconstrained Tukey regression problem.

Dimensionality Reduction
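
The Tukey (biweight) loss is quadratic-like near zero and constant beyond a threshold $\tau$, so outliers contribute only a bounded amount to the objective; this is what makes the problem robust but non-convex. A minimal definition in code:

```python
import numpy as np

def tukey_loss(r, tau=1.0):
    # Quadratic-like for |r| <= tau, constant tau^2/6 beyond: large
    # residuals (outliers) are capped.
    r = np.asarray(r, dtype=float)
    out = np.full(r.shape, tau**2 / 6.0)
    inside = np.abs(r) <= tau
    u = (r[inside] / tau) ** 2
    out[inside] = (tau**2 / 6.0) * (1.0 - (1.0 - u) ** 3)
    return out

# The overconstrained problem is min_x sum_i tukey_loss((Ax - b)_i).
print(tukey_loss([0.1, 0.5, 1.0, 100.0]))  # 100.0 costs only tau^2/6
```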

Minimax experimental design: Bridging the gap between statistical and worst-case approaches to least squares regression

no code implementations • 4 Feb 2019 • Michał Dereziński, Kenneth L. Clarkson, Michael W. Mahoney, Manfred K. Warmuth

In the process, we develop a new algorithm for a joint sampling distribution called volume sampling, and we propose a new i.i.d. ...
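
Volume sampling selects a size-$d$ subset $S$ of rows with probability proportional to $\det(X_S)^2$, the squared volume the rows span. A brute-force sketch on a tiny matrix (the paper's contribution is doing this efficiently):

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(3)
n, d = 8, 3
X = rng.standard_normal((n, d))

# Weight of each size-d row subset S is det(X_S)^2.
subsets = list(combinations(range(n), d))
weights = np.array([np.linalg.det(X[list(S)]) ** 2 for S in subsets])
probs = weights / weights.sum()

# Sanity check via Cauchy-Binet: the weights sum to det(X^T X).
print(np.isclose(weights.sum(), np.linalg.det(X.T @ X)))
print("sampled rows:", subsets[rng.choice(len(subsets), p=probs)])
```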

Sharper Bounds for Regularized Data Fitting

no code implementations • 10 Nov 2016 • Haim Avron, Kenneth L. Clarkson, David P. Woodruff

We study regularization both in a fairly broad setting and in the specific context of the popular and widely used technique of ridge regularization. For the latter, as applied to each of these problems, we show algorithmic resource bounds in which the statistical dimension appears in places where the rank would appear in previous bounds.
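
The statistical dimension at ridge parameter $\lambda$ is $\mathrm{sd}_\lambda(A) = \sum_i \sigma_i^2/(\sigma_i^2 + \lambda)$, which never exceeds the rank and is much smaller when the spectrum decays. A short sketch:

```python
import numpy as np

def statistical_dimension(A, lam):
    # sd_lam(A) = sum_i s_i^2 / (s_i^2 + lam) <= rank(A), with equality
    # as lam -> 0; it shrinks as lam grows or the spectrum decays.
    s = np.linalg.svd(A, compute_uv=False)
    return float(np.sum(s**2 / (s**2 + lam)))

rng = np.random.default_rng(4)
A = rng.standard_normal((200, 50)) @ np.diag(0.9 ** np.arange(50))
print(np.linalg.matrix_rank(A), statistical_dimension(A, 1.0))
```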

Faster Kernel Ridge Regression Using Sketching and Preconditioning

no code implementations • 10 Nov 2016 • Haim Avron, Kenneth L. Clarkson, David P. Woodruff

The preconditioner is based on random feature maps, such as random Fourier features, which have recently emerged as a powerful technique for speeding up and scaling the training of kernel-based methods, such as kernel ridge regression, by resorting to approximations.
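
A minimal sketch of random Fourier features for the Gaussian kernel, in the style of Rahimi and Recht (the paper's preconditioner built from such features is not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(5)
n, d, D = 500, 5, 2000
X = rng.standard_normal((n, d))
sigma = 1.0

# z(x) = sqrt(2/D) * cos(Wx + b) with W ~ N(0, 1/sigma^2), b ~ U[0, 2pi);
# then z(x) . z(y) approximates exp(-||x - y||^2 / (2 sigma^2)).
W = rng.standard_normal((D, d)) / sigma
b = rng.uniform(0.0, 2.0 * np.pi, D)
Z = np.sqrt(2.0 / D) * np.cos(X @ W.T + b)

K = np.exp(-((X[:, None] - X[None]) ** 2).sum(-1) / (2.0 * sigma**2))
print(np.abs(Z @ Z.T - K).max())  # small entrywise approximation error
```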

Low Rank Approximation and Regression in Input Sparsity Time

1 code implementation • 26 Jul 2012 • Kenneth L. Clarkson, David P. Woodruff

We design a new distribution over $\mathrm{poly}(r\epsilon^{-1}) \times n$ matrices $S$ so that for any fixed $n \times d$ matrix $A$ of rank $r$, with probability at least 9/10, $\|SAx\|_2 = (1 \pm \epsilon)\|Ax\|_2$ simultaneously for all $x \in \mathbb{R}^d$.

Data Structures and Algorithms
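
The distribution in question is a sparse embedding: each column of $S$ has a single random $\pm 1$ entry, so $SA$ costs time proportional to the number of nonzeros of $A$, the "input sparsity time" of the title. A minimal sketch of the construction (parameter values illustrative only):

```python
import numpy as np
from scipy.sparse import csr_matrix

rng = np.random.default_rng(6)
n, d, m = 100_000, 10, 400

# One random +/-1 entry per column of S: applying S touches each nonzero
# of A exactly once.
rows = rng.integers(0, m, size=n)
signs = rng.choice([-1.0, 1.0], size=n)
S = csr_matrix((signs, (rows, np.arange(n))), shape=(m, n))

A = rng.standard_normal((n, d))
x = rng.standard_normal(d)
print(np.linalg.norm((S @ A) @ x) / np.linalg.norm(A @ x))  # close to 1
```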

The Fast Cauchy Transform and Faster Robust Linear Regression

no code implementations • 19 Jul 2012 • Kenneth L. Clarkson, Petros Drineas, Malik Magdon-Ismail, Michael W. Mahoney, Xiangrui Meng, David P. Woodruff

We provide fast algorithms for overconstrained $\ell_p$ regression and related problems: for an $n\times d$ input matrix $A$ and vector $b\in\mathbb{R}^n$, in $O(nd\log n)$ time we reduce the problem $\min_{x\in\mathbb{R}^d} \|Ax-b\|_p$ to the same problem with input matrix $\tilde A$ of dimension $s \times d$ and corresponding $\tilde b$ of dimension $s\times 1$.
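
The Fast Cauchy Transform is a fast structured analogue of the dense Cauchy sketch: because Cauchy random variables are 1-stable, each entry of $S(Ax)$ is Cauchy-distributed with scale $\|Ax\|_1$, so a median of absolute values recovers the $\ell_1$ norm. A minimal unstructured sketch (not the fast transform itself):

```python
import numpy as np

rng = np.random.default_rng(7)
n, d, m = 20_000, 5, 200
A = rng.standard_normal((n, d))
x = rng.standard_normal(d)

# Each entry of S @ (A @ x) is Cauchy with scale ||Ax||_1 by 1-stability,
# so the median of the absolute values estimates the l1 norm.
S = rng.standard_cauchy((m, n))
est = np.median(np.abs(S @ (A @ x)))
print(est, np.linalg.norm(A @ x, 1))
```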
