no code implementations • 25 Sep 2023 • Zihao Hu, Guanghui Wang, Xi Wang, Andre Wibisono, Jacob Abernethy, Molei Tao
In the Euclidean setting, the last iterates of both the extragradient (EG) and past extragradient (PEG) methods are known to converge to a solution of monotone variational inequality problems at a rate of $O\left(\frac{1}{\sqrt{T}}\right)$ (Cai et al., 2022).
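As a minimal sketch of the extragradient method referenced above (not the paper's Riemannian analysis), the following toy example applies EG to the bilinear saddle point $\min_x \max_y xy$, a standard monotone problem where plain gradient descent-ascent cycles but the EG last iterate converges. The function names and step size are illustrative assumptions.

```python
import math

# Monotone operator of the bilinear saddle point min_x max_y x*y:
# F(x, y) = (y, -x).  The solution is (0, 0).
def F(z):
    x, y = z
    return (y, -x)

def extragradient(z0, eta=0.1, steps=2000):
    z = z0
    for _ in range(steps):
        # Extrapolation step: evaluate the operator at the current point.
        gx, gy = F(z)
        z_half = (z[0] - eta * gx, z[1] - eta * gy)
        # Update step: move from z using the operator at the midpoint.
        hx, hy = F(z_half)
        z = (z[0] - eta * hx, z[1] - eta * hy)
    return z

x, y = extragradient((1.0, 1.0))
print(math.hypot(x, y))  # distance to the solution shrinks toward 0
```

PEG (optimistic gradient) differs only in reusing the previous step's operator evaluation for the extrapolation, saving one call to F per iteration.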
no code implementations • 20 Jun 2023 • Yeojoon Youn, Zihao Hu, Juba Ziani, Jacob Abernethy
To the best of our knowledge, this is the first study that relies solely on randomized quantization, without injecting explicit discrete noise, to achieve Rényi DP guarantees in federated learning systems.
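To illustrate the kind of randomized quantization the sentence refers to (this is a generic stochastic-rounding sketch, not the paper's specific mechanism or its privacy accounting), one can round to a grid unbiasedly, so the quantization randomness itself acts as noise:

```python
import math
import random

# Stochastic rounding to a grid of width delta: round up with
# probability equal to the fractional part, so the rounded value
# is an unbiased estimate of x and the randomness is intrinsic
# to the quantizer (no separately added discrete noise).
def stochastic_round(x, delta=0.5):
    scaled = x / delta
    lower = math.floor(scaled)
    frac = scaled - lower
    if random.random() < frac:
        lower += 1
    return lower * delta
```

Averaging many stochastically rounded copies of a value recovers the value itself, which is what makes this style of quantizer compatible with federated averaging.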
no code implementations • 30 May 2023 • Zihao Hu, Guanghui Wang, Jacob Abernethy
The projection operation is a critical component in a wide range of optimization algorithms, such as online gradient descent (OGD), for enforcing constraints and achieving optimal regret bounds.
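The projection step the sentence describes can be sketched concretely (a toy example with assumed names, using the unit L2 ball because its Euclidean projection has a closed form; the paper's point is precisely that such projections can be expensive for general constraint sets):

```python
import math

# Euclidean projection onto the L2 ball of the given radius.
def project_l2_ball(x, radius=1.0):
    norm = math.sqrt(sum(v * v for v in x))
    if norm <= radius:
        return x
    return [radius * v / norm for v in x]

# One step of projected online gradient descent (OGD):
# descend along the gradient, then project back onto the feasible set.
def ogd_step(x, grad, eta):
    y = [xi - eta * gi for xi, gi in zip(x, grad)]
    return project_l2_ball(y)
```

For more complex sets (e.g. polytopes or nuclear-norm balls) this projection has no closed form and dominates the per-round cost, which motivates projection-free alternatives.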
no code implementations • NeurIPS 2023 • Guanghui Wang, Zihao Hu, Claudio Gentile, Vidya Muthukumar, Jacob Abernethy
To address this limitation, we present a series of state-of-the-art implicit bias rates for mirror descent and steepest descent algorithms.
no code implementations • 17 Feb 2023 • Zihao Hu, Guanghui Wang, Jacob Abernethy
In this paper, we consider the sequential decision problem where the goal is to minimize the general dynamic regret on a complete Riemannian manifold.
no code implementations • 17 Oct 2022 • Guanghui Wang, Zihao Hu, Vidya Muthukumar, Jacob Abernethy
Classical algorithms for online learning and decision-making achieve optimal performance guarantees, but suffer from computational limitations when implemented at scale.
no code implementations • 2 Dec 2017 • Zihao Hu, Xiyi Luo, Hongtao Lu, Yong Yu
Recently, supervised hashing methods have attracted much attention since they can optimize retrieval speed and storage cost while preserving semantic information.
no code implementations • CVPR 2017 • Zihao Hu, Junxuan Chen, Hongtao Lu, Tongzhen Zhang
To address this problem, we present a novel fully Bayesian treatment of the supervised hashing problem, named Bayesian Supervised Hashing (BSH), in which hyperparameters are automatically tuned during optimization.