Search Results for author: Arvind Mahankali

Found 2 papers, 0 papers with code

One Step of Gradient Descent is Provably the Optimal In-Context Learner with One Layer of Linear Self-Attention

no code implementations · 7 Jul 2023 · Arvind Mahankali, Tatsunori B. Hashimoto, Tengyu Ma

We find that changing the distribution of the covariates and weight vector to a non-isotropic Gaussian distribution has a strong impact on the learned algorithm: the global minimizer of the pre-training loss now implements a single step of $\textit{pre-conditioned}$ GD.
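To make the result concrete, here is a minimal numerical sketch of a single step of pre-conditioned gradient descent on an in-context least-squares problem. The choice of preconditioner `P = inv(Sigma)` (the inverse covariate covariance) is an illustrative assumption for this sketch, not necessarily the exact matrix characterized in the paper.

```python
import numpy as np

# Sketch (assumed setup): one step of pre-conditioned GD on the
# in-context least-squares loss, starting from w = 0. The
# preconditioner P = Sigma^{-1} is an illustrative choice.
rng = np.random.default_rng(0)
d, n = 5, 10_000

# Non-isotropic covariates: x ~ N(0, Sigma)
A = rng.normal(size=(d, d))
Sigma = A @ A.T / d
X = rng.multivariate_normal(np.zeros(d), Sigma, size=n)
w_star = rng.normal(size=d)
y = X @ w_star

# Gradient of the squared loss (1/2n)||Xw - y||^2 at w = 0
grad = -X.T @ y / n

# One pre-conditioned GD step: w1 = -P @ grad
P = np.linalg.inv(Sigma)
w1 = -P @ grad

# With many in-context samples, a single step nearly recovers w_star
print(np.linalg.norm(w1 - w_star))
```

The point of the sketch: with the right preconditioner, one gradient step from zero already approximates the target weight vector, which is the behavior the paper shows the trained linear self-attention layer implements.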

In-Context Learning · regression

Linear and Kernel Classification in the Streaming Model: Improved Bounds for Heavy Hitters

no code implementations · NeurIPS 2021 · Arvind Mahankali, David Woodruff

For linear classification, we improve upon the algorithm of (Tai, et al. 2018), which solves the $\ell_1$ point query problem on the optimal weight vector $w_* \in \mathbb{R}^d$ in sublinear space.
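For context, the $\ell_1$ point-query primitive mentioned above can be illustrated with a generic CountMin sketch (Cormode and Muthukrishnan). This is a background sketch only, not the improved algorithm of this paper or of Tai et al. 2018: it estimates each coordinate of a frequency vector within $\varepsilon \|f\|_1$ using space sublinear in the domain size.

```python
import numpy as np

# Background sketch (not the paper's algorithm): CountMin for
# l1 point queries. Each estimate overestimates the true count
# by at most ~ ||f||_1 / width per row; taking the min over
# `depth` independent rows makes large error unlikely.
class CountMin:
    def __init__(self, width, depth, seed=0):
        rng = np.random.default_rng(seed)
        self.width, self.depth = width, depth
        self.table = np.zeros((depth, width), dtype=np.int64)
        # Random affine hash functions mod a large prime
        self.p = 2_147_483_647
        self.a = rng.integers(1, self.p, size=depth)
        self.b = rng.integers(0, self.p, size=depth)

    def _hash(self, x):
        return (self.a * x + self.b) % self.p % self.width

    def update(self, x, c=1):
        self.table[np.arange(self.depth), self._hash(x)] += c

    def query(self, x):
        # Never underestimates; min over rows bounds the overcount
        return int(self.table[np.arange(self.depth), self._hash(x)].min())

cm = CountMin(width=200, depth=5)
for _ in range(1000):
    cm.update(42)          # one heavy item
for i in range(500):
    cm.update(i % 100)     # background items
print(cm.query(42))        # at least the true count of item 42
```

Applied to the classification setting above, the idea is to point-query coordinates of (a sketch of) the optimal weight vector $w_*$ rather than item frequencies, which is where the heavy-hitters machinery enters.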
