Search Results for author: Haoyu Zhao

Found 14 papers, 4 papers with code

Combinatorial Pure Exploration of Dueling Bandit

no code implementations ICML 2020 Wei Chen, Yihan Du, Longbo Huang, Haoyu Zhao

For the Borda winner, we establish a reduction of the problem to the original CPE-MAB setting and design PAC and exact algorithms that achieve both a sample complexity similar to that in the CPE-MAB setting (which is nearly optimal for a subclass of problems) and polynomial running time per round.
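
The Borda reduction mentioned in this abstract is easy to illustrate: dueling arm $i$ against a uniformly random opponent yields a Bernoulli outcome whose mean is arm $i$'s Borda score, so score estimation becomes an ordinary multi-armed bandit problem. A minimal sketch (the `duel` callback and toy preference matrix are hypothetical, not the paper's code):

```python
import numpy as np

def estimate_borda_scores(duel, n_arms, samples_per_arm, rng):
    """Estimate each arm's Borda score by dueling it against uniformly
    random opponents. Each duel outcome is a Bernoulli sample whose mean
    is the Borda score, so the problem reduces to a standard MAB over
    the scores."""
    scores = np.zeros(n_arms)
    others = [[a for a in range(n_arms) if a != i] for i in range(n_arms)]
    for i in range(n_arms):
        wins = sum(duel(i, rng.choice(others[i]))
                   for _ in range(samples_per_arm))
        scores[i] = wins / samples_per_arm
    return scores

# Toy preference matrix: P[i, j] = probability that arm i beats arm j.
P = np.array([[0.5, 0.7, 0.8],
              [0.3, 0.5, 0.6],
              [0.2, 0.4, 0.5]])
rng = np.random.default_rng(0)
duel = lambda i, j: rng.random() < P[i, j]
print(estimate_borda_scores(duel, 3, 2000, rng))  # approx [0.75, 0.45, 0.30]
```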

Ref-NeuS: Ambiguity-Reduced Neural Implicit Surface Learning for Multi-View Reconstruction with Reflection

no code implementations 20 Mar 2023 Wenhang Ge, Tao Hu, Haoyu Zhao, Shu Liu, Ying-Cong Chen

We show that together with a reflection-direction-dependent radiance, our model achieves high-quality surface reconstruction on reflective surfaces and outperforms the state of the art by a large margin.

3D Reconstruction, Surface Reconstruction
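
The "reflection-direction-dependent radiance" in this entry refers to conditioning the radiance field on the view direction reflected about the surface normal, as in Ref-NeRF-style models; the exact parameterization used in Ref-NeuS may differ. A minimal sketch of that reflection:

```python
import numpy as np

def reflection_direction(view_dir, normal):
    """Reflect the unit viewing direction about the unit surface normal:
    w_r = 2 (w_o . n) n - w_o. Conditioning radiance on w_r rather than
    on w_o makes specular highlights easier for the network to fit."""
    view_dir = view_dir / np.linalg.norm(view_dir)
    normal = normal / np.linalg.norm(normal)
    return 2.0 * np.dot(view_dir, normal) * normal - view_dir

# A 45-degree view reflected about the z-axis normal.
print(reflection_direction(np.array([1.0, 0.0, 1.0]),
                           np.array([0.0, 0.0, 1.0])))
# -> [-0.7071  0.      0.7071]
```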

Do Transformers Parse while Predicting the Masked Word?

no code implementations 14 Mar 2023 Haoyu Zhao, Abhishek Panigrahi, Rong Ge, Sanjeev Arora

We also show that the Inside-Outside algorithm is optimal for masked language modeling loss on the PCFG-generated data.

Constituency Parsing, Language Modelling, +1
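
For reference, the Inside-Outside algorithm named in this abstract is classical. A minimal inside pass for a PCFG in Chomsky normal form (toy grammar, illustrative only, not the paper's code) looks like this:

```python
from collections import defaultdict

def inside_probabilities(words, lexical, binary):
    """Inside pass for a PCFG in Chomsky normal form.
    lexical[(A, w)] = P(A -> w); binary[(A, B, C)] = P(A -> B C).
    Returns inside[(i, j, A)] = P(A derives words[i..j])."""
    n = len(words)
    inside = defaultdict(float)
    for i, w in enumerate(words):                      # width-1 spans
        for (A, word), p in lexical.items():
            if word == w:
                inside[(i, i, A)] = p
    for width in range(2, n + 1):                      # wider spans
        for i in range(n - width + 1):
            j = i + width - 1
            for (A, B, C), p in binary.items():
                for k in range(i, j):                  # split point
                    inside[(i, j, A)] += (p * inside[(i, k, B)]
                                            * inside[(k + 1, j, C)])
    return inside

# Toy grammar: S -> A A (prob 1), A -> 'a' (prob 1).
lex = {("A", "a"): 1.0}
bin_rules = {("S", "A", "A"): 1.0}
print(inside_probabilities(["a", "a"], lex, bin_rules)[(0, 1, "S")])  # 1.0
```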

Task-Specific Skill Localization in Fine-tuned Language Models

1 code implementation 13 Feb 2023 Abhishek Panigrahi, Nikunj Saunshi, Haoyu Zhao, Sanjeev Arora

Given the downstream task and a model fine-tuned on that task, a simple optimization is used to identify a very small subset of parameters ($\sim 0.01\%$ of model parameters) responsible for $>95\%$ of the model's performance, in the sense that grafting the fine-tuned values for just this tiny subset onto the pre-trained model performs almost as well as the fine-tuned model.

Continual Learning
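
The grafting step described in this entry is straightforward to sketch; the hard part, finding the parameter mask, is the paper's optimization and is not shown here. A minimal sketch with a hypothetical boolean `mask` over each parameter tensor:

```python
import torch

def graft(pretrained, finetuned, mask):
    """Build a grafted state_dict: start from the pre-trained weights
    and copy in the fine-tuned values only where the boolean mask is
    True, leaving every other parameter untouched."""
    return {name: torch.where(mask[name], finetuned[name], theta)
            for name, theta in pretrained.items()}

# pretrained / finetuned are state_dicts of the same architecture;
# mask[name] is a boolean tensor marking the tiny (~0.01%) subset of
# entries found by the paper's optimization (not reproduced here).
# model.load_state_dict(graft(pre_sd, ft_sd, mask))
# The grafted model should evaluate close to the fully fine-tuned one.
```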

Coresets for Vertical Federated Learning: Regularized Linear Regression and $K$-Means Clustering

1 code implementation 26 Oct 2022 Lingxiao Huang, Zhize Li, Jialin Sun, Haoyu Zhao

Vertical federated learning (VFL), where the features of the data are distributed across multiple parties, is an important area in machine learning.

Federated Learning, regression
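
To make the "vertical" data layout concrete: every party holds all samples but only its own disjoint slice of the features. A toy sketch (hypothetical helper, not the paper's coreset construction):

```python
import numpy as np

def vertical_split(X, n_parties, seed=0):
    """Vertically partition a feature matrix: each party keeps all rows
    (samples) but only a disjoint slice of the columns (features),
    the data layout assumed in vertical federated learning."""
    rng = np.random.default_rng(seed)
    cols = rng.permutation(X.shape[1])
    return [X[:, np.sort(part)] for part in np.array_split(cols, n_parties)]

X = np.arange(24, dtype=float).reshape(4, 6)    # 4 samples, 6 features
parts = vertical_split(X, n_parties=3)
print([p.shape for p in parts])                 # [(4, 2), (4, 2), (4, 2)]
```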

SoteriaFL: A Unified Framework for Private Federated Learning with Communication Compression

1 code implementation 20 Jun 2022 Zhize Li, Haoyu Zhao, Boyue Li, Yuejie Chi

We then propose a unified framework SoteriaFL for private federated learning, which accommodates a general family of local gradient estimators including popular stochastic variance-reduced gradient methods and the state-of-the-art shifted compression scheme.

Federated Learning, Privacy Preserving
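
SoteriaFL is stated over a general family of compressors and gradient estimators; a standard unbiased example is rand-$k$ sparsification, whose variance parameter $\omega = d/k - 1$ is the $\omega$ that typically appears in such bounds. A minimal sketch (illustrative only; the paper's shifted compression scheme is not shown):

```python
import numpy as np

def rand_k(x, k, rng):
    """Rand-k sparsification: keep k randomly chosen coordinates of x,
    scaled by d/k so the compressor is unbiased (E[C(x)] = x). Its
    variance parameter is omega = d/k - 1."""
    d = x.size
    out = np.zeros_like(x)
    idx = rng.choice(d, size=k, replace=False)
    out[idx] = (d / k) * x[idx]
    return out

rng = np.random.default_rng(0)
x = rng.standard_normal(8)
# Averaging many compressions recovers x, confirming unbiasedness.
avg = np.mean([rand_k(x, 2, rng) for _ in range(20000)], axis=0)
print(np.max(np.abs(avg - x)))   # small
```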

BEER: Fast $O(1/T)$ Rate for Decentralized Nonconvex Optimization with Communication Compression

1 code implementation 31 Jan 2022 Haoyu Zhao, Boyue Li, Zhize Li, Peter Richtárik, Yuejie Chi

Communication efficiency has been widely recognized as the bottleneck for large-scale decentralized machine learning applications in multi-agent or federated environments.

Faster Rates for Compressed Federated Learning with Client-Variance Reduction

no code implementations 24 Dec 2021 Haoyu Zhao, Konstantin Burlachenko, Zhize Li, Peter Richtárik

In the convex setting, COFIG converges within $O(\frac{(1+\omega)\sqrt{N}}{S\epsilon})$ communication rounds; this is also the first convergence result for compression schemes that do not communicate with all clients in each round.

Federated Learning
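
As a back-of-the-envelope check on how this convex bound scales, here is a hedged helper (constants and log factors dropped; variable names are mine, not the paper's):

```python
def cofig_convex_rounds(omega, N, S, eps):
    """Order of COFIG's convex-case communication rounds,
    O((1 + omega) * sqrt(N) / (S * eps)): omega is the compressor
    variance parameter, N the number of clients, S the clients
    sampled per round, eps the target accuracy."""
    return (1 + omega) * N ** 0.5 / (S * eps)

# E.g., rand-k compression with omega = 9, N = 10000 clients,
# S = 100 sampled per round, eps = 1e-3:
print(cofig_convex_rounds(9, 10_000, 100, 1e-3))   # 10000.0
```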

FedPAGE: A Fast Local Stochastic Gradient Method for Communication-Efficient Federated Learning

no code implementations 10 Aug 2021 Haoyu Zhao, Zhize Li, Peter Richtárik

We propose a new federated learning algorithm, FedPAGE, which further reduces the communication complexity by utilizing the recent optimal PAGE method (Li et al., 2021) in place of plain SGD in FedAvg.

Federated Learning
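
For context, the PAGE estimator that FedPAGE builds on takes a fresh gradient with probability $p$ and otherwise recycles the previous estimate plus a gradient difference. A minimal single-machine sketch on a toy quadratic (not the federated FedPAGE algorithm itself):

```python
import numpy as np

def page_step(x, g, grad, lr, p, rng):
    """One step with the PAGE gradient estimator (Li et al., 2021):
    with probability p take a fresh (mini-batch) gradient, otherwise
    reuse the previous estimate corrected by a gradient difference.
    In PAGE both gradients in the difference use the same mini-batch."""
    x_new = x - lr * g
    if rng.random() < p:
        g_new = grad(x_new)                    # occasional fresh gradient
    else:
        g_new = g + grad(x_new) - grad(x)      # cheap recursive correction
    return x_new, g_new

# Toy deterministic quadratic f(x) = ||x||^2 / 2, so grad(x) = x and the
# same-mini-batch requirement is trivially satisfied.
rng = np.random.default_rng(0)
grad = lambda x: x
x = np.ones(5)
g = grad(x)
for _ in range(200):
    x, g = page_step(x, g, grad, lr=0.1, p=0.2, rng=rng)
print(np.linalg.norm(x))   # close to 0: the iterates converge
```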

Combinatorial Pure Exploration of Dueling Bandit

no code implementations 23 Jun 2020 Wei Chen, Yihan Du, Longbo Huang, Haoyu Zhao

For the Borda winner, we establish a reduction of the problem to the original CPE-MAB setting and design PAC and exact algorithms that achieve both a sample complexity similar to that in the CPE-MAB setting (which is nearly optimal for a subclass of problems) and polynomial running time per round.

Combinatorial Semi-Bandit in the Non-Stationary Environment

no code implementations 10 Feb 2020 Wei Chen, Li-Wei Wang, Haoyu Zhao, Kai Zheng

In a special case where the reward function is linear and we have an exact oracle, we design a parameter-free algorithm that achieves nearly optimal regret both in the switching case and in the dynamic case without knowing the parameters in advance.

Online Second Price Auction with Semi-bandit Feedback Under the Non-Stationary Setting

no code implementations 14 Nov 2019 Haoyu Zhao, Wei Chen

The problem is more challenging than the standard online learning scenario since the private value distribution is non-stationary, meaning that the distribution of bidders' private values may change over time, and we need to use the non-stationary regret to measure the performance of our algorithm.

Mildly Overparametrized Neural Nets can Memorize Training Data Efficiently

no code implementations 26 Sep 2019 Rong Ge, Runzhe Wang, Haoyu Zhao

It has been observed (Zhang et al., 2016) that deep neural networks can memorize: they achieve 100% accuracy on training data.

Stochastic One-Sided Full-Information Bandit

no code implementations 20 Jun 2019 Haoyu Zhao, Wei Chen

In this paper, we study the stochastic version of the one-sided full-information bandit problem, where we have $K$ arms $[K] = \{1, 2, \ldots, K\}$, and playing arm $i$ gains a reward from an unknown distribution for arm $i$ while obtaining reward feedback for all arms $j \ge i$.
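
The feedback structure described in this abstract is easy to simulate: playing arm $i$ returns arm $i$'s reward and reveals samples for every arm $j \ge i$. A toy sketch with Gaussian rewards (hypothetical, for illustration only):

```python
import numpy as np

def pull(i, means, rng):
    """Play arm i in the one-sided full-information bandit: receive a
    reward drawn from arm i's distribution and observe feedback for
    every arm j >= i, but nothing for arms j < i."""
    samples = rng.normal(means, 1.0)      # one toy Gaussian draw per arm
    reward = samples[i]
    feedback = {j: samples[j] for j in range(i, len(means))}
    return reward, feedback

rng = np.random.default_rng(0)
reward, feedback = pull(1, np.array([0.2, 0.5, 0.9]), rng)
print(reward, sorted(feedback))           # feedback only for arms 1 and 2
```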
