no code implementations • 18 Nov 2016 • Quang Minh Hoang, Trong Nghia Hoang, Kian Hsiang Low
While much research effort has been dedicated to scaling up sparse Gaussian process (GP) models based on inducing variables for big data, little attention is afforded to the other less explored class of low-rank GP approximations that exploit the sparse spectral representation of a GP kernel.
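The sparse spectral representation mentioned above is commonly realized with random Fourier features: sampling frequencies from the kernel's spectral density yields a finite feature map whose inner products approximate the kernel, giving a low-rank surrogate for the full covariance matrix. The sketch below is a generic illustration of that idea for an RBF kernel, not the specific approximation proposed in the paper; all function names and parameter choices here are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def rff_features(X, num_features, lengthscale=1.0):
    """Random Fourier features whose inner products approximate
    an RBF kernel k(x, x') = exp(-||x - x'||^2 / (2 * lengthscale^2))."""
    d = X.shape[1]
    # Frequencies drawn from the RBF kernel's spectral density (Gaussian).
    W = rng.normal(scale=1.0 / lengthscale, size=(d, num_features))
    b = rng.uniform(0.0, 2.0 * np.pi, size=num_features)
    return np.sqrt(2.0 / num_features) * np.cos(X @ W + b)

# An m-dimensional feature map replaces the n x n kernel matrix,
# so GP regression cost drops from O(n^3) toward O(n m^2).
X = rng.normal(size=(200, 3))
Phi = rff_features(X, num_features=2000)
K_approx = Phi @ Phi.T
K_exact = np.exp(-0.5 * ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1))
print(np.max(np.abs(K_approx - K_exact)))  # approximation error shrinks as m grows
```

The approximation error decays with the number of sampled frequencies, which is the trade-off these low-rank methods tune against the cubic cost of exact GP inference.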
no code implementations • 19 Nov 2017 • Trong Nghia Hoang, Quang Minh Hoang, Ruofei Ouyang, Kian Hsiang Low
This paper presents a novel decentralized high-dimensional Bayesian optimization (DEC-HBO) algorithm that, in contrast to existing HBO algorithms, can exploit the interdependent effects of various input components on the output of the unknown objective function f for boosting the BO performance and still preserve scalability in the number of input dimensions without requiring prior knowledge or the existence of a low (effective) dimension of the input space.
no code implementations • 23 May 2018 • Trong Nghia Hoang, Quang Minh Hoang, Kian Hsiang Low, Jonathan How
Distributed machine learning (ML) is a modern computation paradigm that divides its workload into independent tasks that can be simultaneously achieved by multiple machines (i.e., agents) for better scalability.
1 code implementation • NeurIPS 2020 • Quang Minh Hoang, Trong Nghia Hoang, Hai Pham, David P. Woodruff
We introduce a new scalable approximation for Gaussian processes with provable guarantees which hold simultaneously over its entire parameter space.