1 code implementation • 6 Nov 2024 • Ryan Campbell, Nelson Lojo, Kesava Viswanadha, Christoffer Grondal Tryggestad, Derrick Han Sun, Sriteja Vijapurapu, August Rolfsen, Anant Sahai
In-Context Learning (ICL) is a phenomenon in which task learning occurs through a prompt sequence alone, without parameter updates.
no code implementations • 6 Oct 2024 • David X. Wu, Anant Sahai
We theoretically investigate weak-to-strong generalization for binary and multilabel classification in a stylized overparameterized spiked covariance model with Gaussian covariates where the weak teacher's pseudolabels are asymptotically like random guessing.
1 code implementation • 27 Jul 2024 • Max Wilcoxson, Morten Svendgård, Ria Doshi, Dylan Davis, Reya Vir, Anant Sahai
Simple function classes have emerged as toy problems for better understanding in-context learning in the transformer-based architectures used for large language models.
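One such toy setup poses learning a random linear function from labeled examples supplied in the prompt. The following minimal sketch (the function class, dimensions, and sampling are illustrative assumptions, not the paper's exact setup) shows how a single in-context task instance can be generated, and that the prompt alone determines the answer: a least-squares "reader" of the prompt recovers the function and labels the query correctly.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_icl_prompt(n_points=10, dim=5):
    """Sample a random linear function f(x) = w.x and a prompt of
    (x_i, f(x_i)) pairs; the learner must predict f on a query point."""
    w = rng.standard_normal(dim)
    xs = rng.standard_normal((n_points, dim))
    ys = xs @ w
    x_query = rng.standard_normal(dim)
    return xs, ys, x_query, x_query @ w

xs, ys, x_query, y_true = make_icl_prompt()

# With 10 examples and 5 unknowns, least squares recovers w exactly
# (almost surely), so the query label is determined by the prompt.
w_hat, *_ = np.linalg.lstsq(xs, ys, rcond=None)
assert np.isclose(x_query @ w_hat, y_true)
```

In the ICL studies themselves, the least-squares reader is replaced by a transformer trained across many such sampled tasks.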
no code implementations • NeurIPS 2023 • David X. Wu, Anant Sahai
We study the asymptotic generalization of an overparameterized linear model for multiclass classification under the Gaussian covariates bi-level model introduced in Subramanian et al. '22, where the number of data points, features, and classes all grow together.
no code implementations • 3 Jun 2022 • Vignesh Subramanian, Rahul Arya, Anant Sahai
Via an overparameterized linear model with Gaussian features, we provide conditions for good generalization for multiclass classification of minimum-norm interpolating solutions in an asymptotic setting where both the number of underlying features and the number of classes scale with the number of training points.
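The minimum-norm interpolating solution that this line of work analyzes has a closed form: with more features than training points, the pseudoinverse gives the smallest-l2-norm weight vector that fits the data exactly. A minimal sketch (dimensions and data are illustrative, not the paper's bi-level feature model):

```python
import numpy as np

rng = np.random.default_rng(1)
n, d = 20, 100                      # overparameterized: d > n
X = rng.standard_normal((n, d))
y = rng.standard_normal(n)

# Minimum-l2-norm interpolator: w = X^+ y = X^T (X X^T)^{-1} y.
w = np.linalg.pinv(X) @ y

assert np.allclose(X @ w, y)        # zero training error
# Any other interpolator w + v with X v = 0 and v != 0 has
# strictly larger norm, since w is orthogonal to the null space of X.
```

Whether this interpolator also generalizes is then a question about how the feature covariance distributes the signal, which is what the asymptotic bi-level analysis addresses.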
no code implementations • 27 Sep 2021 • Adhyyan Narang, Vidya Muthukumar, Anant Sahai
We find that the learned model is susceptible to adversaries in an intermediate regime where classification generalizes but regression does not.
no code implementations • 3 Dec 2020 • Vidya Muthukumar, Soham Phade, Anant Sahai
We study the limiting behavior of the mixed strategies that result from optimal no-regret learning strategies in a repeated game setting where the stage game is any 2 by 2 competitive game.
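A standard way to see the distinction between time-averaged and per-round behavior in such games is to run exponential-weights (Hedge) self-play on a 2 by 2 zero-sum game. The sketch below (matching pennies, with an illustrative learning rate and starting point, not the paper's specific dynamics) shows the time-averaged strategies approaching the unique mixed equilibrium, while the per-round mixed strategies, whose limiting behavior is the object of study, can keep cycling:

```python
import numpy as np

# Matching pennies: row player's payoffs; the column player receives
# the negation (a 2 by 2 competitive game).
A = np.array([[1.0, -1.0],
              [-1.0, 1.0]])

eta, T = 0.05, 50_000
w_row = np.array([1.0, 2.0])        # asymmetric start so play moves
w_col = np.array([3.0, 1.0])
avg_row = np.zeros(2)

for _ in range(T):
    p = w_row / w_row.sum()         # row player's mixed strategy
    q = w_col / w_col.sum()         # column player's mixed strategy
    avg_row += p
    # Exponential-weights (Hedge) update on expected payoffs,
    # renormalized each round for numerical stability.
    w_row = w_row * np.exp(eta * (A @ q));  w_row /= w_row.sum()
    w_col = w_col * np.exp(eta * (-A.T @ p)); w_col /= w_col.sum()

avg_row /= T
# avg_row is close to the unique mixed equilibrium (1/2, 1/2), even
# though the per-round strategies p, q need not converge to it.
```

The gap between the convergent average and the non-convergent iterates is exactly why the limiting behavior of the mixed strategies themselves requires separate analysis.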
no code implementations • 16 May 2020 • Vidya Muthukumar, Adhyyan Narang, Vignesh Subramanian, Mikhail Belkin, Daniel Hsu, Anant Sahai
We compare classification and regression tasks in an overparameterized linear model with Gaussian features.
no code implementations • 21 Oct 2019 • Anant Sahai, Joshua Sanz, Vignesh Subramanian, Caryn Tran, Kailas Vodrahalli
We investigate whether learning is possible under different levels of information sharing between distributed agents which are not necessarily co-designed.
1 code implementation • 19 Apr 2019 • Jinxiang Song, Bile Peng, Christian Häger, Henk Wymeersch, Anant Sahai
A novel quantization method is proposed, which exploits the specific properties of the feedback signal and is suitable for non-stationary signal distributions.
no code implementations • 21 Mar 2019 • Vidya Muthukumar, Kailas Vodrahalli, Vignesh Subramanian, Anant Sahai
A continuing mystery in understanding the empirical success of deep neural networks is their ability to achieve zero training error and generalize well, even when the training data is noisy and there are more parameters than data points.
no code implementations • 22 May 2018 • Vidya Muthukumar, Mitas Ray, Anant Sahai, Peter L. Bartlett
We introduce algorithms for online, full-information prediction that are competitive with contextual tree experts of unknown complexity, in both probabilistic and adversarial settings.
no code implementations • 14 Jan 2018 • Colin de Vrieze, Shane Barratt, Daniel Tsai, Anant Sahai
Traditional radio systems are strictly co-designed on the lower levels of the OSI stack for compatibility and efficiency.