Search Results for author: Qiuyi Zhang

Found 21 papers, 6 papers with code

Beyond Worst-Case Dimensionality Reduction for Sparse Vectors

no code implementations27 Feb 2025 Sandeep Silwal, David P. Woodruff, Qiuyi Zhang

A folklore upper bound based on the birthday paradox states: For any collection $X$ of $s$-sparse vectors in $\mathbb{R}^d$, there exists a linear map to $\mathbb{R}^{O(s^2)}$ which \emph{exactly} preserves the norm of $99\%$ of the vectors in $X$ in any $\ell_p$ norm (as opposed to the usual setting where guarantees hold for all vectors).

Compressed Sensing · Dimensionality Reduction
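
The folklore bound is constructive: hash the $d$ coordinates into $O(s^2)$ buckets and sum within each bucket. Below is a minimal sketch of that map, assuming a per-vector birthday-paradox argument; it illustrates the folklore baseline, not the paper's improved construction.

```python
import numpy as np

rng = np.random.default_rng(0)

def birthday_sketch(d, s, fail_prob=0.01):
    """Linear map R^d -> R^m with m = O(s^2): hash coordinates into
    buckets and sum. For a fixed s-sparse x, its s nonzeros land in
    distinct buckets with probability >= 1 - s^2/(2m); with no
    collision, each bucket holds at most one nonzero, so every l_p
    norm of x is preserved exactly."""
    m = int(np.ceil(s * s / (2 * fail_prob)))  # buckets for collision prob <= fail_prob
    h = rng.integers(0, m, size=d)             # random bucket per coordinate
    S = np.zeros((m, d))
    S[h, np.arange(d)] = 1.0                   # S sums coordinates within each bucket
    return S

# Usage: the sketch preserves the norm of an s-sparse vector exactly w.h.p.
d, s = 10_000, 5
S = birthday_sketch(d, s)
x = np.zeros(d)
x[rng.choice(d, s, replace=False)] = rng.standard_normal(s)
print(np.linalg.norm(S @ x), np.linalg.norm(x))  # equal unless a collision occurred
```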

Quantifying Knowledge Distillation Using Partial Information Decomposition

no code implementations12 Nov 2024 Pasan Dissanayake, Faisal Hamman, Barproda Halder, Ilia Sucholutsky, Qiuyi Zhang, Sanghamitra Dutta

We theoretically demonstrate that the task-relevant transferred knowledge is succinctly captured by the measure of redundant information about the task between the teacher and student.

Knowledge Distillation · Transfer Learning
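
For context, partial information decomposition (PID) in the Williams–Beer framework splits the information that the teacher $T$ and student $S$ jointly carry about the task $Y$ into four nonnegative terms; the redundancy term is the one the abstract refers to. Notation here is generic, not necessarily the paper's:

```latex
I(Y; (T, S)) = \mathrm{Red}(Y; T, S)
             + \mathrm{Unq}(Y; T \setminus S)
             + \mathrm{Unq}(Y; S \setminus T)
             + \mathrm{Syn}(Y; T, S)
```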

Predicting from Strings: Language Model Embeddings for Bayesian Optimization

1 code implementation14 Oct 2024 Tung Nguyen, Qiuyi Zhang, Bangding Yang, Chansoo Lee, Jorg Bornschein, Yingjie Miao, Sagi Perel, Yutian Chen, Xingyou Song

Bayesian Optimization is ubiquitous in experimental design and blackbox optimization for improving search efficiency, but it has traditionally been restricted to regression models that are applicable only to fixed search spaces and tabular input features.

Bayesian Optimization · Experimental Design +4

The Vizier Gaussian Process Bandit Algorithm

1 code implementation21 Aug 2024 Xingyou Song, Qiuyi Zhang, Chansoo Lee, Emily Fertig, Tzu-Kuo Huang, Lior Belenki, Greg Kochanski, Setareh Ariafar, Srinivas Vasudevan, Sagi Perel, Daniel Golovin

Google Vizier has performed millions of optimizations and accelerated numerous research and production systems at Google, demonstrating the success of Bayesian optimization as a large-scale service.

Bayesian Optimization

Quantifying Spuriousness of Biased Datasets Using Partial Information Decomposition

no code implementations29 Jun 2024 Barproda Halder, Faisal Hamman, Pasan Dissanayake, Qiuyi Zhang, Ilia Sucholutsky, Sanghamitra Dutta

Spurious patterns refer to a mathematical association between two or more variables in a dataset that are not causally related.

Preference Learning Algorithms Do Not Learn Preference Rankings

no code implementations29 May 2024 Angelica Chen, Sadhika Malladi, Lily H. Zhang, Xinyi Chen, Qiuyi Zhang, Rajesh Ranganath, Kyunghyun Cho

Preference learning algorithms (e.g., RLHF and DPO) are frequently used to steer LLMs to produce generations that are more preferred by humans, but our understanding of their inner workings is still limited.

Attribute
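
As background for the DPO family mentioned above, the standard DPO objective (Rafailov et al., 2023) trains the policy to prefer the chosen response over the rejected one via log-likelihood ratios against a reference model. A minimal sketch with illustrative variable names, not code from this paper:

```python
import torch.nn.functional as F

def dpo_loss(policy_chosen_logps, policy_rejected_logps,
             ref_chosen_logps, ref_rejected_logps, beta=0.1):
    """Standard DPO loss: -log sigmoid(beta * (r_w - r_l)), where
    r = log pi_theta(y|x) - log pi_ref(y|x) is the implicit reward."""
    reward_chosen = policy_chosen_logps - ref_chosen_logps
    reward_rejected = policy_rejected_logps - ref_rejected_logps
    return -F.logsigmoid(beta * (reward_chosen - reward_rejected)).mean()
```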

Adaptive Regret for Bandits Made Possible: Two Queries Suffice

no code implementations17 Jan 2024 Zhou Lu, Qiuyi Zhang, Xinyi Chen, Fred Zhang, David Woodruff, Elad Hazan

In this paper, we give query and regret optimal bandit algorithms under the strict notion of strongly adaptive regret, which measures the maximum regret over any contiguous interval $I$.

Hyperparameter Optimization · Multi-Armed Bandits
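
Concretely, strongly adaptive regret is the worst regret over any contiguous window (standard definition; notation assumed rather than copied from the paper):

```latex
\mathrm{SA\text{-}Regret}_T(\mathcal{A})
  = \max_{I = [s, t] \subseteq [T]}
    \left( \sum_{\tau \in I} \ell_\tau(x_\tau)
         - \min_{x} \sum_{\tau \in I} \ell_\tau(x) \right)
```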

Optimal Scalarizations for Sublinear Hypervolume Regret

no code implementations6 Jul 2023 Qiuyi Zhang

Scalarization is a general, parallelizable technique that can be deployed in any multiobjective setting to reduce multiple objectives into one, yet some have dismissed this versatile approach because linear scalarizations cannot explore concave regions of the Pareto frontier.

Bayesian Optimization
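
The hypervolume scalarization studied in this line of work (see also the ICML 2020 entry below) reduces a vector of objectives to a scalar via a random direction on the positive unit sphere; averaging its maxima over directions recovers the dominated hypervolume up to a constant. A hedged sketch with simplified constants and exponents:

```python
import numpy as np

rng = np.random.default_rng(0)

def hypervolume_scalarization(y, lam):
    """s_lam(y) = min_i max(0, y_i / lam_i)^k for k objectives;
    constants simplified from the papers."""
    y = np.asarray(y, dtype=float)
    return np.min(np.maximum(0.0, y / lam)) ** len(y)

def random_direction(k):
    """Uniform direction on the positive orthant of the unit sphere."""
    v = np.abs(rng.standard_normal(k))
    return v / np.linalg.norm(v)

lam = random_direction(2)
print(hypervolume_scalarization([0.8, 0.3], lam))
```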

Set Learning for Accurate and Calibrated Models

1 code implementation5 Jul 2023 Lukas Muttenthaler, Robert A. Vandermeulen, Qiuyi Zhang, Thomas Unterthiner, Klaus-Robert Müller

Model overconfidence and poor calibration are common in machine learning and difficult to account for when applying standard empirical risk minimization.

Optimal Query Complexities for Dynamic Trace Estimation

no code implementations30 Sep 2022 David P. Woodruff, Fred Zhang, Qiuyi Zhang

Specifically, for any $m$ matrices $A_1,..., A_m$ with consecutive differences bounded in Schatten-$1$ norm by $\alpha$, we provide a novel binary tree summation procedure that simultaneously estimates all $m$ traces up to $\epsilon$ error with $\delta$ failure probability with an optimal query complexity of $\widetilde{O}\left(m \alpha\sqrt{\log(1/\delta)}/\epsilon + m\log(1/\delta)\right)$, improving the dependence on both $\alpha$ and $\delta$ from Dharangutte and Musco (NeurIPS, 2021).
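
The query model here counts matrix-vector products. The classical single-matrix primitive is Hutchinson's estimator, sketched below for orientation; the paper's contribution is a binary-tree summation that shares queries across the sequence $A_1, \dots, A_m$, which this sketch does not attempt:

```python
import numpy as np

def hutchinson_trace(matvec, n, num_queries, rng=None):
    """Hutchinson's estimator: tr(A) = E[g^T A g] for Rademacher g,
    using only matrix-vector queries to A."""
    rng = rng or np.random.default_rng(0)
    total = 0.0
    for _ in range(num_queries):
        g = rng.choice([-1.0, 1.0], size=n)  # Rademacher probe
        total += g @ matvec(g)               # one query to A
    return total / num_queries

A = np.random.default_rng(1).standard_normal((100, 100))
A = A @ A.T  # PSD test matrix
print(hutchinson_trace(lambda v: A @ v, 100, 500), np.trace(A))
```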

Towards Learning Universal Hyperparameter Optimizers with Transformers

1 code implementation26 May 2022 Yutian Chen, Xingyou Song, Chansoo Lee, Zi Wang, Qiuyi Zhang, David Dohan, Kazuya Kawakami, Greg Kochanski, Arnaud Doucet, Marc'Aurelio Ranzato, Sagi Perel, Nando de Freitas

Meta-learning hyperparameter optimization (HPO) algorithms from prior experiments is a promising approach to improve optimization efficiency over objective functions from a similar distribution.

Hyperparameter Optimization · Meta-Learning

ES-ENAS: Efficient Evolutionary Optimization for Large Hybrid Search Spaces

2 code implementations19 Jan 2021 Xingyou Song, Krzysztof Choromanski, Jack Parker-Holder, Yunhao Tang, Qiuyi Zhang, Daiyi Peng, Deepali Jain, Wenbo Gao, Aldo Pacchiano, Tamas Sarlos, Yuxiang Yang

In this paper, we approach the problem of optimizing blackbox functions over large hybrid search spaces consisting of both combinatorial and continuous parameters.

Combinatorial Optimization · Continuous Control +4
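
A hedged sketch of the general hybrid pattern: antithetic Gaussian evolution strategies on the continuous parameters combined with pointwise mutation on the combinatorial choices. The function names, mutation rule, and greedy acceptance step are illustrative simplifications, not the exact ES-ENAS aggregation scheme:

```python
import numpy as np

rng = np.random.default_rng(0)

def es_hybrid_step(f, theta, choices, n_categories, sigma=0.1, lr=0.02, pop=8):
    """One step minimizing a blackbox f(theta, choices): ES gradient
    estimate on continuous theta, greedy point mutation on discrete choices."""
    grad = np.zeros_like(theta)
    for _ in range(pop):
        eps = rng.standard_normal(theta.shape)
        grad += (f(theta + sigma * eps, choices)
                 - f(theta - sigma * eps, choices)) * eps / (2 * sigma)
    theta = theta - lr * grad / pop               # antithetic ES step
    mutant = choices.copy()
    mutant[rng.integers(len(choices))] = rng.integers(n_categories)
    if f(theta, mutant) < f(theta, choices):      # keep mutation only if it helps
        choices = mutant
    return theta, choices
```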

Joint Descent: Training and Tuning Simultaneously

no code implementations1 Jan 2021 Qiuyi Zhang

Typically in machine learning, training and tuning are done in an alternating manner: for a fixed set of hyperparameters $y$, we apply gradient descent to our objective $f(x, y)$ over trainable variables $x$ until convergence; then, we apply a tuning step over $y$ to find another promising setting of hyperparameters.
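
The alternating baseline described in the abstract looks roughly like the sketch below (names illustrative); the paper's joint-descent approach instead interleaves the two updates rather than running the inner loop to convergence:

```python
import numpy as np

def alternating_train_tune(f, grad_x, x0, ys, steps=200, lr=0.1):
    """For each hyperparameter setting y, run gradient descent on the
    trainable variables x, then keep the best (x, y) pair found."""
    best_x, best_y, best_val = None, None, np.inf
    for y in ys:                       # outer tuning loop over y
        x = x0.copy()
        for _ in range(steps):         # inner training loop on x
            x -= lr * grad_x(x, y)
        if f(x, y) < best_val:
            best_x, best_y, best_val = x, y, f(x, y)
    return best_x, best_y
```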

Random Hypervolume Scalarizations for Provable Multi-Objective Black Box Optimization

no code implementations ICML 2020 Daniel Golovin, Qiuyi Zhang

Single-objective black box optimization (also known as zeroth-order optimization) is the process of minimizing a scalar objective $f(x)$, given evaluations at adaptively chosen inputs $x$.

Bayesian Optimization · Thompson Sampling

Learning the gravitational force law and other analytic functions

no code implementations15 May 2020 Atish Agarwala, Abhimanyu Das, Rina Panigrahy, Qiuyi Zhang

We present experimental evidence that the many-body gravitational force function is easier to learn with ReLU networks as compared to networks with exponential activations.

Regularized Weighted Low Rank Approximation

no code implementations NeurIPS 2019 Frank Ban, David Woodruff, Qiuyi Zhang

The classical low rank approximation problem is to find a rank $k$ matrix $UV$ (where $U$ has $k$ columns and $V$ has $k$ rows) that minimizes the Frobenius norm of $A - UV$.
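
For the classical (unweighted, unregularized) problem, the Eckart–Young theorem gives the optimum in closed form via the truncated SVD; the regularized weighted variant studied in the paper has no such closed form. A minimal baseline sketch:

```python
import numpy as np

def best_rank_k(A, k):
    """Truncated SVD: U_k diag(s_k) V_k minimizes ||A - UV||_F over
    all rank-k factorizations (Eckart-Young)."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return U[:, :k] * s[:k], Vt[:k, :]  # U has k columns, V has k rows

A = np.random.default_rng(0).standard_normal((50, 30))
U, V = best_rank_k(A, 5)
print(np.linalg.norm(A - U @ V))  # optimal rank-5 Frobenius error
```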

Gradientless Descent: High-Dimensional Zeroth-Order Optimization

no code implementations ICLR 2020 Daniel Golovin, John Karro, Greg Kochanski, Chansoo Lee, Xingyou Song, Qiuyi Zhang

Zeroth-order optimization is the process of minimizing an objective $f(x)$, given oracle access to evaluations at adaptively chosen inputs $x$.

MuJoCo · Vocal Bursts Intensity Prediction
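
A simplified sketch of the GLD idea, assuming the ball-sampling variant with a geometric grid of radii; step-size constants and the exact sampling rule are simplified from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def gradientless_descent(f, x0, R=1.0, r=1e-3, iters=100):
    """Minimize f using function evaluations only: at each iteration,
    try one candidate per radius on a geometric grid and move whenever
    the evaluation improves."""
    x, fx = x0, f(x0)
    radii = R * 0.5 ** np.arange(int(np.log2(R / r)) + 1)  # geometric radii
    for _ in range(iters):
        for rad in radii:
            u = rng.standard_normal(x.shape)
            cand = x + rad * u / np.linalg.norm(u)  # point at distance rad
            fc = f(cand)
            if fc < fx:
                x, fx = cand, fc
    return x, fx

# Usage: minimize a quadratic with zeroth-order access only.
x, fx = gradientless_descent(lambda z: float(np.sum(z ** 2)), np.ones(10))
print(fx)
```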

Solving Empirical Risk Minimization in the Current Matrix Multiplication Time

no code implementations11 May 2019 Yin Tat Lee, Zhao Song, Qiuyi Zhang

Our result generalizes the very recent result of solving linear programs in the current matrix multiplication time [Cohen, Lee, Song'19] to a broader class of problems.

Convergence Results for Neural Networks via Electrodynamics

no code implementations1 Feb 2017 Rina Panigrahy, Sushant Sachdeva, Qiuyi Zhang

Iterating, we show that gradient descent can be used to learn the entire network one node at a time.
