no code implementations • 20 Oct 2023 • Rasul Tutunov, Antoine Grosnit, Juliusz Ziomek, Jun Wang, Haitham Bou-Ammar
This paper delves into the capabilities of large language models (LLMs), specifically focusing on advancing the theoretical comprehension of chain-of-thought prompting.
no code implementations • 27 May 2022 • Alexandre Maraval, Matthieu Zimmer, Antoine Grosnit, Rasul Tutunov, Jun Wang, Haitham Bou Ammar
First, we notice that these models are trained on uniformly distributed inputs, which impairs predictive accuracy on non-uniform data, a setting that arises in any typical BO loop due to the exploration-exploitation trade-off.
no code implementations • 3 Feb 2022 • Xihan Li, Xiang Chen, Rasul Tutunov, Haitham Bou-Ammar, Lei Wang, Jun Wang
The Schrödinger equation is at the heart of modern quantum mechanics.
1 code implementation • 29 Jan 2022 • Asif Khan, Alexander I. Cowen-Rivers, Antoine Grosnit, Derrick-Goh-Xin Deik, Philippe A. Robert, Victor Greiff, Eva Smorodina, Puneet Rawat, Kamil Dreczkowski, Rahmad Akbar, Rasul Tutunov, Dany Bou-Ammar, Jun Wang, Amos Storkey, Haitham Bou-Ammar
software suite as a black-box oracle to score the target specificity and affinity of designed antibodies in silico in an unconstrained fashion (Robert et al., 2021).
no code implementations • 11 Nov 2021 • Antoine Grosnit, Cedric Malherbe, Rasul Tutunov, Xingchen Wan, Jun Wang, Haitham Bou Ammar
Optimising the quality-of-results (QoR) of circuits during logic synthesis is a formidable challenge necessitating the exploration of exponentially sized search spaces.
2 code implementations • 7 Jun 2021 • Antoine Grosnit, Rasul Tutunov, Alexandre Max Maraval, Ryan-Rhys Griffiths, Alexander I. Cowen-Rivers, Lin Yang, Lin Zhu, Wenlong Lyu, Zhitang Chen, Jun Wang, Jan Peters, Haitham Bou-Ammar
We introduce a method combining variational autoencoders (VAEs) and deep metric learning to perform Bayesian optimisation (BO) over high-dimensional and structured input spaces.
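A minimal sketch of the latent-space BO idea described above, assuming hypothetical stand-ins (a fixed linear "encoder" in place of the trained VAE with a metric-learned latent space, a toy objective, and a basic RBF-GP surrogate with a UCB acquisition); this is not the paper's actual method, only an illustration of optimising in a learned latent space and decoding candidates back to the input space:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins: in the paper a VAE trained with a deep metric
# loss provides encode/decode; here a fixed random projection suffices.
W = rng.normal(size=(8, 2))              # "encoder": 8-D input -> 2-D latent
encode = lambda x: x @ W
decode = lambda z: z @ np.linalg.pinv(W)

def objective(x):                        # toy black-box objective (max at 0.5)
    return -np.sum((x - 0.5) ** 2)

def gp_posterior(Z, y, Zq, ls=1.0, noise=1e-6):
    """Exact zero-mean GP regression with an RBF kernel."""
    def k(a, b):
        d2 = np.sum(a**2, 1)[:, None] + np.sum(b**2, 1)[None] - 2 * a @ b.T
        return np.exp(-0.5 * d2 / ls**2)
    K = k(Z, Z) + noise * np.eye(len(Z))
    Ks = k(Zq, Z)
    mu = Ks @ np.linalg.solve(K, y)
    var = 1.0 - np.sum(Ks * np.linalg.solve(K, Ks.T).T, 1)
    return mu, np.maximum(var, 1e-12)

# BO loop in latent space: fit a GP on encoded evaluations, maximise UCB
# over random latent candidates, decode the winner, evaluate, repeat.
X = rng.uniform(0, 1, size=(5, 8))
y = np.array([objective(x) for x in X])
for _ in range(10):
    Z = encode(X)
    cand = rng.normal(scale=2.0, size=(256, 2))
    mu, var = gp_posterior(Z, y, cand)
    z_next = cand[np.argmax(mu + 2.0 * np.sqrt(var))]
    x_next = decode(z_next)
    X = np.vstack([X, x_next])
    y = np.append(y, objective(x_next))
best = y.max()
```

The key design point the sketch mirrors is that the surrogate model and acquisition optimisation live entirely in the low-dimensional latent space, while the expensive objective is only ever queried on decoded inputs.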
Ranked #1 on Molecular Graph Generation on ZINC
no code implementations • 15 Jan 2021 • Vincent Moens, Hang Ren, Alexandre Maraval, Rasul Tutunov, Jun Wang, Haitham Ammar
In this paper, we propose CI-VI, an efficient and scalable solver for semi-implicit variational inference (SIVI).
1 code implementation • 15 Dec 2020 • Antoine Grosnit, Alexander I. Cowen-Rivers, Rasul Tutunov, Ryan-Rhys Griffiths, Jun Wang, Haitham Bou-Ammar
Bayesian optimisation presents a sample-efficient methodology for global optimisation.
3 code implementations • 7 Dec 2020 • Alexander I. Cowen-Rivers, Wenlong Lyu, Rasul Tutunov, Zhi Wang, Antoine Grosnit, Ryan Rhys Griffiths, Alexandre Max Maraval, Hao Jianye, Jun Wang, Jan Peters, Haitham Bou Ammar
Our results on the Bayesmark benchmark indicate that heteroscedasticity and non-stationarity pose significant challenges for black-box optimisers.
Ranked #1 on Hyperparameter Optimization on Bayesmark
no code implementations • 10 Feb 2020 • Rasul Tutunov, Minne Li, Alexander I. Cowen-Rivers, Jun Wang, Haitham Bou-Ammar
In this paper, we present C-ADAM, the first adaptive solver for compositional problems involving a non-linear functional nesting of expected values.
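The objective class referred to above can be written as the standard two-level compositional stochastic optimisation template; the symbols below are generic and not necessarily the paper's own notation:

```latex
\min_{x \in \mathbb{R}^d} \; J(x)
  \;=\; \mathbb{E}_{\nu}\!\left[ f_{\nu}\!\left( \mathbb{E}_{\omega}\!\left[ g_{\omega}(x) \right] \right) \right]
```

The difficulty is that the inner expectation sits inside the (non-linear) outer function $f_{\nu}$, so an unbiased stochastic gradient of $J$ is not available from single samples, which is what a dedicated compositional solver must work around.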
no code implementations • 9 Oct 2019 • Victor Gabillon, Rasul Tutunov, Michal Valko, Haitham Bou Ammar
In this paper, we formalise order-robust optimisation as an instance of online learning minimising simple regret, and propose Vroom, a zeroth-order optimisation algorithm capable of achieving vanishing regret in non-stationary environments while recovering favourable rates under stochastic reward-generating processes.
no code implementations • 25 Sep 2019 • Yaodong Yang, Rasul Tutunov, Phu Sakulwongtana, Haitham Bou Ammar
Furthermore, we also show successful results on large joint strategy profiles with a maximum size in the order of $\mathcal{O}(2^{25})$ ($\approx 33$ million joint strategies) -- a setting that cannot be evaluated using $\alpha$-Rank within a reasonable computational budget.
no code implementations • 11 May 2019 • Dong Li, Qichao Zhang, Dongbin Zhao, Yuzheng Zhuang, Bin Wang, Wulong Liu, Rasul Tutunov, Jun Wang
To address the long-term memory issue, this paper proposes a graph attention memory (GAM) architecture consisting of a memory construction module, a graph attention module, and a control module.
no code implementations • NeurIPS 2018 • Rasul Tutunov, Dongho Kim, Haitham Bou Ammar
Multitask reinforcement learning (MTRL) suffers from scalability issues when the number of tasks or trajectories grows large.
no code implementations • 21 May 2015 • Haitham Bou Ammar, Rasul Tutunov, Eric Eaton
Lifelong reinforcement learning provides a promising framework for developing versatile agents that can accumulate knowledge over a lifetime of experience and rapidly learn new tasks by building upon prior knowledge.