no code implementations • 2 Oct 2024 • Alihan Hüyük, Xinnuo Xu, Jacqueline Maasch, Aditya V. Nori, Javier González
Second, we propose several fine-tuning approaches that aim to elicit better reasoning mechanisms, in the sense of the proposed metrics.
no code implementations • 9 Sep 2024 • Melanie F. Pradier, Javier González
First, we learn a low-dimensional, latent Riemannian manifold that accounts for uncertainty and geometry of the original input data.
no code implementations • 15 Aug 2024 • Javier González, Aditya V. Nori
Recent advances in AI have been significantly driven by the capabilities of large language models (LLMs) to solve complex problems in ways that resemble human thinking.
no code implementations • 6 Nov 2023 • Javier González, Aditya V. Nori
Large language models (LLMs) are powerful AI tools that can generate and comprehend natural language text and other complex information.
no code implementations • 2 Nov 2023 • Javier González, Cliff Wong, Zelalem Gero, Jass Bagga, Risa Ueno, Isabel Chien, Eduard Oravkin, Emre Kiciman, Aditya Nori, Roshanthi Weerasinghe, Rom S. Leidner, Brian Piening, Tristan Naumann, Carlo Bifulco, Hoifung Poon
The rapid digitization of real-world data offers an unprecedented opportunity for optimizing healthcare delivery and accelerating biomedical discovery.
1 code implementation • NeurIPS 2021 • Virginia Aglietti, Neil Dhir, Javier González, Theodoros Damoulas
This paper studies the problem of performing a sequence of optimal interventions in a causal dynamical system where both the target variable of interest and the inputs evolve over time.
no code implementations • NeurIPS 2021 • Siu Lun Chau, Jean-François Ton, Javier González, Yee Whye Teh, Dino Sejdinovic
While causal models are becoming one of the mainstays of machine learning, the problem of uncertainty quantification in causal inference remains challenging.
1 code implementation • NeurIPS 2020 • Virginia Aglietti, Theodoros Damoulas, Mauricio Álvarez, Javier González
This paper studies the problem of learning the correlation structure of a set of intervention functions defined on the directed acyclic graph (DAG) of a causal model.
1 code implementation • 18 Jul 2020 • Purva Pruthi, Javier González, Xiaoyu Lu, Madalina Fiterau
Human beings learn causal models and constantly use them to transfer knowledge between similar environments.
no code implementations • 6 Jun 2020 • Siu Lun Chau, Javier González, Dino Sejdinovic
We revisit the widely used preferential Gaussian processes of Chu et al. (2005) and challenge their modelling assumption that imposes rankability of data items via latent utility function values.
no code implementations • 24 May 2020 • Virginia Aglietti, Xiaoyu Lu, Andrei Paleyes, Javier González
This paper studies the problem of globally optimizing a variable of interest that is part of a causal model in which a sequence of interventions can be performed.
no code implementations • 4 Feb 2020 • Henry B. Moss, Vatsal Aggarwal, Nishant Prateek, Javier González, Roberto Barra-Chicote
We present BOFFIN TTS (Bayesian Optimization For FIne-tuning Neural Text To Speech), a novel approach for few-shot speaker adaptation.
no code implementations • 28 Jan 2020 • David Janz, David R. Burt, Javier González
We consider the problem of optimising functions in the reproducing kernel Hilbert space (RKHS) of a Matérn kernel with smoothness parameter $\nu$ over the domain $[0, 1]^d$ under noisy bandit feedback.
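For reference, the Matérn kernel mentioned in this abstract has the standard closed form below; the lengthscale $\ell$ and variance $\sigma^2$ are conventional notation choices, not taken from the paper itself.

```latex
k_\nu(x, x') = \sigma^2 \, \frac{2^{1-\nu}}{\Gamma(\nu)}
  \left( \frac{\sqrt{2\nu}\,\lVert x - x' \rVert}{\ell} \right)^{\!\nu}
  K_\nu\!\left( \frac{\sqrt{2\nu}\,\lVert x - x' \rVert}{\ell} \right)
```

Here $K_\nu$ is the modified Bessel function of the second kind, and $\nu$ controls the smoothness of sample paths: $\nu = 1/2$ recovers the exponential kernel, and $\nu \to \infty$ recovers the RBF kernel.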
1 code implementation • 18 Mar 2019 • Kurt Cutajar, Mark Pullin, Andreas Damianou, Neil Lawrence, Javier González
Multi-fidelity methods are prominently used when cheaply obtained, but possibly biased and noisy, observations must be effectively combined with limited or expensive true data in order to construct reliable models.
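A minimal sketch of one classic way to combine fidelities, in the spirit of the autoregressive (Kennedy and O'Hagan style) formulation: model the expensive function as a scaled cheap function plus a GP discrepancy. The functions `cheap` and `expensive` and all parameter choices below are illustrative, not taken from the paper.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def cheap(x):       # low-fidelity: abundant but biased
    return np.sin(8 * x)

def expensive(x):   # high-fidelity: accurate but scarce
    return 1.2 * np.sin(8 * x) + 0.3 * x

# A handful of expensive evaluations.
X_hi = np.array([[0.1], [0.4], [0.7], [0.9]])
y_hi = expensive(X_hi).ravel()

# Fit the scale rho by least squares, then a GP on the residual discrepancy.
c = cheap(X_hi).ravel()
rho = (y_hi @ c) / (c @ c)
resid = y_hi - rho * c
gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.2)).fit(X_hi, resid)

def predict_high(x):
    """Multi-fidelity prediction: scaled cheap model + learned discrepancy."""
    x = np.atleast_2d(x)
    return rho * cheap(x).ravel() + gp.predict(x)
```

Because the discrepancy GP interpolates its (noise-free) training residuals, `predict_high` reproduces the expensive observations at the training inputs while borrowing the cheap function's shape elsewhere.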
no code implementations • ICML 2017 • Rodolphe Jenatton, Cedric Archambeau, Javier González, Matthias Seeger
The benefit of leveraging this structure is twofold: we explore the search space more efficiently and posterior inference scales more favorably with the number of observations than Gaussian Process-based approaches published in the literature.
no code implementations • ICML 2017 • Javier González, Zhenwen Dai, Andreas Damianou, Neil D. Lawrence
We present a new framework for this scenario, called Preferential Bayesian Optimization (PBO), which finds the optimum of a latent function that can only be queried through pairwise comparisons, so-called duels.
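To make the "duels" interface concrete, here is a toy sketch of the kind of data such an optimiser consumes: pairs of candidate points plus a preference outcome, with the latent function itself never observed directly. The function `latent_f` and all names are illustrative, not the paper's setup.

```python
import numpy as np

def latent_f(x):
    # Hidden objective (lower is better); the optimiser never sees its values.
    return (x - 0.3) ** 2

rng = np.random.default_rng(1)
duels = []
for _ in range(20):
    x1, x2 = rng.uniform(0, 1, size=2)
    # Only the outcome of the comparison is revealed, not f(x1) or f(x2).
    winner = x1 if latent_f(x1) < latent_f(x2) else x2
    duels.append((x1, x2, winner))

wins = [w for _, _, w in duels]
```

A preferential optimiser then fits a model of the win probability over pairs and selects the next duel to run, rather than modelling function values directly.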
1 code implementation • 4 Apr 2017 • Eero Siivola, Aki Vehtari, Jarno Vanhatalo, Javier González, Michael Riis Andersen
Bayesian optimization (BO) is a global optimization strategy designed to find the minimum of an expensive black-box function, typically defined on a compact subset of $\mathbb{R}^d$, by using a Gaussian process (GP) as a surrogate model for the objective.
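The GP-surrogate loop described here can be sketched in a few lines: fit a GP to the evaluations so far, score candidates with an acquisition function (expected improvement below), and evaluate the best candidate. The objective and all settings are illustrative assumptions, not from any of the papers listed.

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def objective(x):
    # Hypothetical expensive black-box function on [0, 1].
    return np.sin(6 * x) + 0.1 * x

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(4, 1))       # initial design
y = objective(X).ravel()

gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), alpha=1e-6,
                              normalize_y=True)

for _ in range(10):
    gp.fit(X, y)
    grid = np.linspace(0, 1, 200).reshape(-1, 1)
    mu, sigma = gp.predict(grid, return_std=True)
    best = y.min()
    # Expected improvement (minimisation form).
    z = (best - mu) / np.maximum(sigma, 1e-9)
    ei = (best - mu) * norm.cdf(z) + sigma * norm.pdf(z)
    x_next = grid[np.argmax(ei)].reshape(1, -1)
    X = np.vstack([X, x_next])
    y = np.append(y, objective(x_next).ravel())

x_best = X[np.argmin(y)]
```

Each iteration trades off exploiting regions with a low posterior mean against exploring regions with high posterior uncertainty, which is what lets BO be frugal with expensive evaluations.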
no code implementations • 19 Nov 2015 • Zhenwen Dai, Andreas Damianou, Javier González, Neil Lawrence
We develop a scalable deep non-parametric generative model by augmenting deep Gaussian processes with a recognition model.
no code implementations • 21 Oct 2015 • Javier González, Michael Osborne, Neil D. Lawrence
We present GLASSES: Global optimisation with Look-Ahead through Stochastic Simulation and Expected-loss Search.
1 code implementation • 29 May 2015 • Javier González, Zhenwen Dai, Philipp Hennig, Neil D. Lawrence
The approach assumes that the function of interest, $f$, is a Lipschitz continuous function.
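The Lipschitz assumption means $|f(x) - f(x')| \le L \lVert x - x' \rVert$ for some constant $L$, and a crude lower bound on $L$ can be read off from sampled secant slopes. The sketch below is an illustrative estimate on a toy function, not the paper's procedure.

```python
import numpy as np

def estimate_lipschitz(X, y):
    """Max secant slope over all sample pairs: a lower bound on the true L."""
    diffs = np.abs(y[:, None] - y[None, :])
    dists = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    mask = dists > 0
    return (diffs[mask] / dists[mask]).max()

# Toy function sin(6x), whose true Lipschitz constant on [0, 1] is 6.
X = np.linspace(0, 1, 50).reshape(-1, 1)
y = np.sin(6 * X).ravel()
L = estimate_lipschitz(X, y)
```

By the mean value theorem, every secant slope of a smooth function is bounded by the supremum of its derivative, so the estimate approaches the true constant from below as sampling densifies.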
no code implementations • 7 May 2015 • Javier González, Joseph Longworth, David C. James, Neil D. Lawrence
We address the problem of synthetic gene design using Bayesian optimization.
no code implementations • 14 Nov 2013 • Javier González, Ivan Vujačić, Ernst Wit
Non-linear systems of differential equations have attracted interest in fields such as systems biology, ecology, and biochemistry, due to their flexibility and their ability to describe dynamical systems.