1 code implementation • 18 Apr 2024 • Masaki Adachi, Satoshi Hayakawa, Martin Jørgensen, Saad Hamid, Harald Oberhauser, Michael A. Osborne
Parallelisation in Bayesian optimisation is a common strategy but faces several challenges: the need for flexibility in acquisition functions and kernel choices, the handling of discrete and continuous variables simultaneously, robustness to model misspecification, and fast massive parallelisation.
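For context, below is a minimal sketch of one generic batch-selection strategy, Gaussian-process Thompson sampling, to illustrate what parallelisation in Bayesian optimisation looks like in practice. This is not the method proposed in the paper; the kernel, lengthscale, noise level, and candidate grid are placeholder assumptions.

```python
import numpy as np

def rbf_kernel(A, B, lengthscale=0.2):
    """Squared-exponential kernel matrix between row-stacked points A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / lengthscale**2)

def thompson_batch(X_obs, y_obs, X_cand, batch_size=4, noise=1e-6):
    """Pick a batch of query points by drawing posterior GP samples and
    taking the argmax of each draw (one standard batch-BO heuristic)."""
    K = rbf_kernel(X_obs, X_obs) + noise * np.eye(len(X_obs))
    Ks = rbf_kernel(X_cand, X_obs)
    Kss = rbf_kernel(X_cand, X_cand)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_obs))
    mean = Ks @ alpha
    V = np.linalg.solve(L, Ks.T)
    cov = Kss - V.T @ V + noise * np.eye(len(X_cand))  # jitter for stability
    # one posterior draw per batch slot; each argmax becomes a query point
    draws = np.random.multivariate_normal(mean, cov, size=batch_size)
    return X_cand[np.argmax(draws, axis=1)]

# toy usage on a 1-D objective (placeholder data)
X_obs = np.random.rand(5, 1)
y_obs = np.sin(6 * X_obs).ravel()
X_cand = np.linspace(0, 1, 200)[:, None]
batch = thompson_batch(X_obs, y_obs, X_cand)  # 4 points to evaluate in parallel
```

Because each posterior draw is independent, the resulting batch naturally balances exploration and exploitation, which is why Thompson sampling is a common baseline for parallel Bayesian optimisation.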
no code implementations • 23 Oct 2023 • Satoshi Hayakawa, Tetsuro Morimura
Reward evaluation of episodes becomes a bottleneck in a broad range of reinforcement learning tasks.
1 code implementation • 9 Jun 2023 • Masaki Adachi, Satoshi Hayakawa, Martin Jørgensen, Xingchen Wan, Vu Nguyen, Harald Oberhauser, Michael A. Osborne
Active learning parallelization is widely used, but typically relies on fixing the batch size throughout experimentation.
no code implementations • 27 Jan 2023 • Hayata Yamasaki, Sathyawageeswar Subramanian, Satoshi Hayakawa, Sho Sonoda
To address this problem, we develop a quantum ridgelet transform (QRT), which implements the ridgelet transform of a quantum state within a linear runtime $O(D)$ of quantum computation.
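For reference, under one common convention the classical ridgelet transform of a function $f$ on $\mathbb{R}^D$ with respect to a ridgelet function $\psi$ is

$$\mathcal{R}f(a, b) = \int_{\mathbb{R}^D} f(x)\, \overline{\psi(a \cdot x - b)}\, \mathrm{d}x, \qquad (a, b) \in \mathbb{R}^D \times \mathbb{R},$$

i.e. an integral of $f$ against ridge functions; normalisation and conjugation conventions vary across the literature. Computing this naively scales with the ambient dimension-dependent integration cost, which is the quantity the QRT's $O(D)$ runtime is measured against.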
1 code implementation • 27 Jan 2023 • Masaki Adachi, Satoshi Hayakawa, Saad Hamid, Martin Jørgensen, Harald Oberhauser, Michael A. Osborne
Batch Bayesian optimisation and Bayesian quadrature have been shown to be sample-efficient methods for optimisation and quadrature in settings where expensive-to-evaluate objective functions can be queried in parallel.
1 code implementation • 23 Jan 2023 • Satoshi Hayakawa, Harald Oberhauser, Terry Lyons
We analyze the Nyström approximation of a positive definite kernel associated with a probability measure.
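A minimal sketch of the Nyström approximation itself is given below. Landmark selection here is plain uniform subsampling of the data, standing in for i.i.d. sampling from the underlying probability measure; the Gaussian kernel and all parameters are assumptions for illustration, and the paper analyses the approximation rather than this particular recipe.

```python
import numpy as np

def gaussian_kernel(A, B, sigma=1.0):
    """Gaussian kernel matrix k(x, y) = exp(-||x - y||^2 / (2 sigma^2))."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / sigma**2)

def nystrom(X, m, jitter=1e-8):
    """Rank-m Nystrom factor F with K ~= F @ F.T, built from m landmarks."""
    idx = np.random.choice(len(X), size=m, replace=False)
    K_nm = gaussian_kernel(X, X[idx])
    K_mm = gaussian_kernel(X[idx], X[idx]) + jitter * np.eye(m)
    # K ~= K_nm K_mm^{-1} K_nm^T ; factor it via the Cholesky of K_mm
    L = np.linalg.cholesky(K_mm)
    F = np.linalg.solve(L, K_nm.T).T   # F @ F.T = K_nm K_mm^{-1} K_nm^T
    return F

# toy usage: low-rank approximation of a 500 x 500 kernel matrix
X = np.random.randn(500, 3)
F = nystrom(X, m=50)
err = np.linalg.norm(gaussian_kernel(X, X) - F @ F.T)  # approximation error
```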
2 code implementations • 9 Jun 2022 • Masaki Adachi, Satoshi Hayakawa, Martin Jørgensen, Harald Oberhauser, Michael A. Osborne
Empirically, we find that our approach significantly outperforms the sampling efficiency of both state-of-the-art BQ techniques and Nested Sampling on various real-world datasets, including lithium-ion battery analytics.
1 code implementation • 20 Jul 2021 • Satoshi Hayakawa, Harald Oberhauser, Terry Lyons
We study kernel quadrature rules with convex weights.
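A naive illustration of the setting (not the construction analysed in the paper): choose candidate nodes, find nonnegative weights that match the empirical kernel mean embedding, then renormalise so the weights are convex, i.e. nonnegative and summing to one. The kernel, target measure, and node choice below are placeholder assumptions.

```python
import numpy as np
from scipy.optimize import nnls

def rbf(A, B, ls=0.5):
    """Squared-exponential kernel matrix between row-stacked points."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ls**2)

rng = np.random.default_rng(0)
X_mc = rng.standard_normal((2000, 2))   # Monte Carlo proxy for the measure
X_nodes = X_mc[:20]                     # candidate quadrature nodes

# empirical kernel mean embedding at the nodes:
# mu(x_i) = E_P[k(x_i, X)] ~= (1/N) sum_j k(x_i, x_j)
mu = rbf(X_nodes, X_mc).mean(axis=1)

# nonnegative weights matching the embedding, renormalised to sum to 1
w, _ = nnls(rbf(X_nodes, X_nodes), mu)
w /= w.sum()

def f(x):
    """Test integrand for checking the quadrature estimate."""
    return np.sin(x[:, 0]) + x[:, 1] ** 2

print(w @ f(X_nodes), f(X_mc).mean())   # quadrature vs. Monte Carlo reference
```

Convexity of the weights is what makes such rules robust: the quadrature estimate is always a weighted average of function values, so it cannot blow up the way rules with signed weights can.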
no code implementations • 22 May 2019 • Satoshi Hayakawa, Taiji Suzuki
Whereas existing theoretical studies of deep learning have been based mainly on mathematical theories of well-known function classes such as Hölder and Besov classes, we focus on function classes with discontinuity and sparsity, which arise naturally in practice.
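For reference, the Hölder class mentioned above is, in its simplest form (exponent $\beta \in (0, 1]$ and constant $L > 0$),

$$\mathcal{H}^{\beta}(L) = \bigl\{ f : |f(x) - f(y)| \le L \|x - y\|^{\beta} \ \text{for all } x, y \bigr\},$$

with higher-order versions imposing the same condition on derivatives; such classes enforce smoothness everywhere, which is precisely what the discontinuous, sparse function classes studied here relax.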