Search Results for author: Fred Zhang

Found 11 papers, 1 paper with code

Approaching Human-Level Forecasting with Language Models

no code implementations · 28 Feb 2024 · Danny Halawi, Fred Zhang, Chen Yueh-Han, Jacob Steinhardt

In this work, we study whether language models (LMs) can forecast at the level of competitive human forecasters.

Decision Making · Retrieval

Adaptive Regret for Bandits Made Possible: Two Queries Suffice

no code implementations · 17 Jan 2024 · Zhou Lu, Qiuyi Zhang, Xinyi Chen, Fred Zhang, David Woodruff, Elad Hazan

In this paper, we give query- and regret-optimal bandit algorithms under the strict notion of strongly adaptive regret, which measures the maximum regret over any contiguous interval $I$.

Hyperparameter Optimization · Multi-Armed Bandits
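
For reference, the strongly adaptive regret described in the abstract is standardly defined as the worst-case regret over all contiguous intervals (the notation below is the conventional one, not necessarily the paper's):

$$\text{SA-Regret}(T) = \max_{I = [r, s] \subseteq [T]} \left( \sum_{t \in I} \ell_t(x_t) - \min_{x \in \mathcal{X}} \sum_{t \in I} \ell_t(x) \right),$$

where $x_t$ is the algorithm's action and $\ell_t$ the loss at round $t$.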

Towards Best Practices of Activation Patching in Language Models: Metrics and Methods

no code implementations · 27 Sep 2023 · Fred Zhang, Neel Nanda

Mechanistic interpretability seeks to understand the internal mechanisms of machine learning models, where localization -- identifying the important model components -- is a key step.
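
As a concrete illustration of the patching operation the paper evaluates, here is a minimal PyTorch sketch on a toy model; the model, the choice of patched component, and the logit-difference metric are illustrative assumptions, not the paper's setup.

    # Minimal activation-patching sketch with PyTorch forward hooks.
    # Toy model and metric are illustrative assumptions.
    import torch
    import torch.nn as nn

    torch.manual_seed(0)

    class ToyModel(nn.Module):
        def __init__(self, d=16):
            super().__init__()
            self.layer1 = nn.Linear(d, d)
            self.layer2 = nn.Linear(d, d)
            self.head = nn.Linear(d, 2)

        def forward(self, x):
            x = torch.relu(self.layer1(x))
            x = torch.relu(self.layer2(x))
            return self.head(x)

    model = ToyModel()
    clean_input = torch.randn(1, 16)
    corrupted_input = torch.randn(1, 16)

    # 1. Cache the target component's activation on the clean run.
    cache = {}
    def save_hook(module, inputs, output):
        cache["act"] = output.detach()

    handle = model.layer1.register_forward_hook(save_hook)
    clean_logits = model(clean_input)
    handle.remove()

    # 2. Re-run on the corrupted input, patching in the clean activation
    #    (returning a tensor from a forward hook replaces the output).
    def patch_hook(module, inputs, output):
        return cache["act"]

    handle = model.layer1.register_forward_hook(patch_hook)
    patched_logits = model(corrupted_input)
    handle.remove()

    corrupted_logits = model(corrupted_input)

    # 3. Score how much the patch restores clean behavior, here via the
    #    logit difference (one of several metrics whose trade-offs the
    #    paper examines).
    def logit_diff(logits):
        return (logits[0, 0] - logits[0, 1]).item()

    print("clean:    ", logit_diff(clean_logits))
    print("corrupted:", logit_diff(corrupted_logits))
    print("patched:  ", logit_diff(patched_logits))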

Streaming Algorithms for Learning with Experts: Deterministic Versus Robust

no code implementations · 3 Mar 2023 · David P. Woodruff, Fred Zhang, Samson Zhou

In the online learning with experts problem, an algorithm must make a prediction about an outcome on each of $T$ days (or times), given a set of $n$ experts who make predictions on each day (or time).
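
For context, the classical baseline here is the multiplicative-weights (Hedge) algorithm, which keeps one weight per expert and hence uses $\Omega(n)$ space; the paper asks what is achievable below that. A minimal sketch with binary outcomes (parameters and data are illustrative):

    # Classical multiplicative-weights (Hedge) baseline for prediction
    # with expert advice; O(n) memory, one weight per expert.
    import numpy as np

    rng = np.random.default_rng(0)
    n, T, eta = 8, 200, 0.5

    weights = np.ones(n)
    alg_mistakes = 0
    expert_mistakes = np.zeros(n)
    for t in range(T):
        advice = rng.integers(0, 2, size=n)      # each expert predicts 0 or 1
        outcome = int(rng.integers(0, 2))        # the day's true outcome
        probs = weights / weights.sum()
        prediction = int(probs @ advice >= 0.5)  # weighted majority vote
        alg_mistakes += int(prediction != outcome)
        losses = (advice != outcome).astype(float)
        expert_mistakes += losses
        weights *= np.exp(-eta * losses)         # downweight mistaken experts

    print("algorithm mistakes:", alg_mistakes)
    print("best expert mistakes:", int(expert_mistakes.min()))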

Privately Estimating a Gaussian: Efficient, Robust and Optimal

no code implementations · 15 Dec 2022 · Daniel Alabi, Pravesh K. Kothari, Pranay Tankala, Prayaag Venkat, Fred Zhang

We prove a new lower bound on differentially private covariance estimation to show that the dependence on the condition number $\kappa$ in the above sample bound is also tight.

Optimal Query Complexities for Dynamic Trace Estimation

no code implementations · 30 Sep 2022 · David P. Woodruff, Fred Zhang, Qiuyi Zhang

Specifically, for any $m$ matrices $A_1,..., A_m$ with consecutive differences bounded in Schatten-$1$ norm by $\alpha$, we provide a novel binary tree summation procedure that simultaneously estimates all $m$ traces up to $\epsilon$ error with $\delta$ failure probability with an optimal query complexity of $\widetilde{O}\left(m \alpha\sqrt{\log(1/\delta)}/\epsilon + m\log(1/\delta)\right)$, improving the dependence on both $\alpha$ and $\delta$ from Dharangutte and Musco (NeurIPS, 2021).
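
The binary tree procedure itself is more involved, but the role of the Schatten-$1$ bound can be seen in the naive difference-based scheme it improves on: estimate $\mathrm{tr}(A_1)$ once, then add cheap Hutchinson estimates of $\mathrm{tr}(A_i - A_{i-1})$, whose variance shrinks with the size of the differences. A sketch under these assumptions (matrix sizes and query counts are illustrative):

    # Illustration (not the paper's binary-tree procedure): Hutchinson's
    # estimator applied to consecutive differences A_i - A_{i-1}. Small
    # Schatten-1 differences need far fewer matrix-vector queries per step
    # than re-estimating each trace from scratch.
    import numpy as np

    rng = np.random.default_rng(0)

    def hutchinson(matvec, dim, num_queries):
        # Estimate tr(A) with num_queries Rademacher matrix-vector queries.
        total = 0.0
        for _ in range(num_queries):
            z = rng.choice([-1.0, 1.0], size=dim)
            total += z @ matvec(z)
        return total / num_queries

    d, m = 50, 10
    A = rng.standard_normal((d, d))
    A = (A + A.T) / 2
    matrices = [A.copy()]
    for _ in range(m - 1):
        E = 0.01 * rng.standard_normal((d, d))   # small perturbation
        matrices.append(matrices[-1] + (E + E.T) / 2)

    # Estimate tr(A_1) accurately once, then track cheap difference estimates.
    trace_est = hutchinson(lambda z: matrices[0] @ z, d, num_queries=200)
    estimates = [trace_est]
    for i in range(1, m):
        diff = matrices[i] - matrices[i - 1]
        trace_est += hutchinson(lambda z, D=diff: D @ z, d, num_queries=20)
        estimates.append(trace_est)

    for est, M in zip(estimates, matrices):
        print(f"estimate {est:8.3f}   true {np.trace(M):8.3f}")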

Online Prediction in Sub-linear Space

no code implementations · 16 Jul 2022 · Binghui Peng, Fred Zhang

We provide the first sub-linear space and sub-linear regret algorithm for online learning with expert advice (against an oblivious adversary), addressing an open question raised recently by Srinivas, Woodruff, Xu and Zhou (STOC 2022).

Open-Ended Question Answering

Robust and Heavy-Tailed Mean Estimation Made Simple, via Regret Minimization

no code implementations · NeurIPS 2020 · Samuel B. Hopkins, Jerry Li, Fred Zhang

In this paper, we provide a meta-problem and a duality theorem that lead to a new unified view on robust and heavy-tailed mean estimation in high dimensions.
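
To illustrate the heavy-tailed side of the problem (though not the paper's regret-minimization algorithm), here is the classical coordinate-wise median-of-means baseline, which is simple but does not attain the optimal high-dimensional rates that the unified view targets:

    # Classical coordinate-wise median-of-means for heavy-tailed mean
    # estimation; shown only to illustrate the problem setting, not the
    # paper's algorithm.
    import numpy as np

    rng = np.random.default_rng(0)

    def median_of_means(samples, num_buckets):
        # Split into buckets, average each, take the coordinate-wise median.
        buckets = np.array_split(samples, num_buckets)
        bucket_means = np.stack([b.mean(axis=0) for b in buckets])
        return np.median(bucket_means, axis=0)

    # Heavy-tailed data: Student-t with 3 degrees of freedom, true mean 0.
    n, d = 3000, 5
    samples = rng.standard_t(df=3, size=(n, d))

    print("empirical mean error: ", np.linalg.norm(samples.mean(axis=0)))
    print("median-of-means error:", np.linalg.norm(median_of_means(samples, 30)))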

A Fast Spectral Algorithm for Mean Estimation with Sub-Gaussian Rates

no code implementations · 13 Aug 2019 · Zhixian Lei, Kyle Luh, Prayaag Venkat, Fred Zhang

The goal is to design an efficient estimator that attains the optimal sub-gaussian error bound, only assuming that the random vector has bounded mean and covariance.

Computational Efficiency · LEMMA
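
The optimal sub-gaussian error bound referred to here is, in its standard form (for $n$ samples with covariance $\Sigma$ and failure probability $\delta$; the constant and notation are the conventional ones, not quoted from the paper):

$$\|\hat{\mu} - \mu\|_2 \le C \left( \sqrt{\frac{\operatorname{tr}(\Sigma)}{n}} + \sqrt{\frac{\|\Sigma\|_{\mathrm{op}} \log(1/\delta)}{n}} \right)$$

with probability at least $1 - \delta$.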

SGD on Neural Networks Learns Functions of Increasing Complexity

1 code implementation · NeurIPS 2019 · Preetum Nakkiran, Gal Kaplun, Dimitris Kalimeris, Tristan Yang, Benjamin L. Edelman, Fred Zhang, Boaz Barak

We perform an experimental study of the dynamics of Stochastic Gradient Descent (SGD) in learning deep neural networks for several real and synthetic classification tasks.
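
One way to make the "increasing complexity" dynamic concrete is to track how closely an SGD-trained network agrees with a fixed linear classifier as training proceeds; the toy data, model, and agreement measure below are assumptions for illustration, not the paper's experimental setup.

    # Toy sketch: train a small MLP with SGD and record how often it agrees
    # with a fixed linear classifier over the course of training.
    import torch
    import torch.nn as nn

    torch.manual_seed(0)
    X = torch.randn(512, 10)
    y = ((X[:, 0] + 0.5 * X[:, 1] ** 2) > 0).long()   # nonlinear labels

    mlp = nn.Sequential(nn.Linear(10, 64), nn.ReLU(), nn.Linear(64, 2))
    linear = nn.Linear(10, 2)

    # Fit the linear reference model first.
    opt = torch.optim.SGD(linear.parameters(), lr=0.1)
    for _ in range(200):
        opt.zero_grad()
        nn.functional.cross_entropy(linear(X), y).backward()
        opt.step()
    lin_pred = linear(X).argmax(dim=1)

    # Train the MLP with SGD, logging accuracy and linear agreement.
    opt = torch.optim.SGD(mlp.parameters(), lr=0.1)
    for epoch in range(200):
        opt.zero_grad()
        nn.functional.cross_entropy(mlp(X), y).backward()
        opt.step()
        if epoch % 40 == 0:
            pred = mlp(X).argmax(dim=1)
            agree = (pred == lin_pred).float().mean().item()
            acc = (pred == y).float().mean().item()
            print(f"epoch {epoch:3d}  accuracy {acc:.2f}  linear agreement {agree:.2f}")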
