no code implementations • 14 Oct 2024 • Sean Lamont, Christian Walder, Amir Dezfouli, Paul Montague, Michael Norrish
We first demonstrate that it is possible to generate semantically aware tactic representations which capture the effect on the proving environment, the likelihood of success, and the execution time.
1 code implementation • 30 Apr 2024 • Ben Harwood, Amir Dezfouli, Iadine Chades, Conrad Sanderson
For online feature learning, the Scalable Nearest Neighbours method is faster than the baseline for recall rates below 75%.
no code implementations • 6 Mar 2024 • Sean Lamont, Michael Norrish, Amir Dezfouli, Christian Walder, Paul Montague
We also provide a qualitative analysis, illustrating that improved performance is associated with more semantically aware embeddings.
no code implementations • 29 May 2023 • Tom Blau, Iadine Chades, Amir Dezfouli, Daniel Steinberg, Edwin V. Bonilla
We propose the use of an alternative estimator based on the cross-entropy of the joint model distribution and a flexible proposal distribution.
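The broad idea behind such estimators — replacing an intractable information-gain term with a Monte Carlo average that involves a tractable proposal distribution — can be illustrated on a toy conjugate model. This is a sketch of the general variational-bound approach only, not the paper's estimator; the linear-Gaussian model, the choice of proposal, and all names are illustrative assumptions:

```python
import numpy as np

def eig_lower_bound(design, n_samples=20_000, noise_sd=1.0, seed=0):
    """Monte Carlo lower bound on expected information gain for a toy
    linear-Gaussian experiment y = design * theta + noise, using a
    proposal q(theta | y).  Illustrates the variational idea in spirit
    only -- not the paper's estimator.  Here the proposal is chosen as
    the exact Gaussian posterior, so the bound is tight."""
    rng = np.random.default_rng(seed)
    theta = rng.normal(size=n_samples)               # prior N(0, 1)
    y = design * theta + noise_sd * rng.normal(size=n_samples)
    # exact posterior under the conjugate model: N(m, v)
    v = noise_sd**2 / (design**2 + noise_sd**2)
    m = design * y / (design**2 + noise_sd**2)
    log_q = -0.5 * np.log(2 * np.pi * v) - 0.5 * (theta - m) ** 2 / v
    prior_entropy = 0.5 * np.log(2 * np.pi * np.e)   # entropy of N(0, 1)
    return log_q.mean() + prior_entropy              # E[log q] + H[p(theta)]
```

With the exact posterior as proposal, the bound recovers the analytic information gain 0.5 * log(1 + design^2 / noise_sd^2); a learned proposal would instead be trained by minimising the cross-entropy term above.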
1 code implementation • 20 Feb 2023 • He Zhao, Ke Sun, Amir Dezfouli, Edwin Bonilla
The key to missing value imputation is to capture the data distribution with incomplete samples and impute the missing values accordingly.
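As a loose illustration of distribution-based imputation (not the paper's deep generative method), one can fit a single multivariate Gaussian to the incomplete data and fill each row's missing entries with their conditional mean given that row's observed entries; all names here are illustrative:

```python
import numpy as np

def gaussian_conditional_impute(X):
    """Impute missing values (NaN) by modelling the data as one
    multivariate Gaussian and replacing each row's missing entries
    with their conditional mean given the row's observed entries."""
    X = np.asarray(X, dtype=float)
    mu = np.nanmean(X, axis=0)                    # available-case means
    # rough covariance estimate: mean-fill first, then np.cov + jitter
    filled = np.where(np.isnan(X), mu, X)
    cov = np.cov(filled, rowvar=False) + 1e-6 * np.eye(X.shape[1])

    out = X.copy()
    for i, row in enumerate(X):
        m = np.isnan(row)                         # missing mask
        if not m.any():
            continue
        o = ~m
        # conditional mean: mu_m + C_mo C_oo^{-1} (x_o - mu_o)
        C_oo = cov[np.ix_(o, o)]
        C_mo = cov[np.ix_(m, o)]
        out[i, m] = mu[m] + C_mo @ np.linalg.solve(C_oo, row[o] - mu[o])
    return out
```

A deep generative model plays the same role as the Gaussian here — capturing the data distribution from incomplete samples — but with a far more flexible density.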
no code implementations • NeurIPS 2023 • Ryan Thompson, Amir Dezfouli, Robert Kohn
With this capability gap in mind, we study a not-uncommon situation where the input features dichotomize into two groups: explanatory features, which are candidates for inclusion as variables in an interpretable model, and contextual features, which select from the candidate variables and determine their effects.
no code implementations • 10 Feb 2022 • Yan Zuo, Amir Dezfouli, Iadine Chades, David Alexander, Benjamin Ward Muir
Many real-world optimisation problems are defined over both categorical and continuous variables, yet efficient optimisation methods such as Bayesian Optimisation (BO) are not designed to handle such mixed-variable search spaces.
1 code implementation • 2 Feb 2022 • Tom Blau, Edwin V. Bonilla, Iadine Chades, Amir Dezfouli
Bayesian approaches developed to solve the optimal design of sequential experiments are mathematically elegant but computationally challenging.
no code implementations • NeurIPS 2021 • Minchao Wu, Michael Norrish, Christian Walder, Amir Dezfouli
We propose a novel approach to interactive theorem proving (ITP) using deep reinforcement learning.
no code implementations • 28 Jun 2020 • Jaykumar Sheth, Cyrus Miremadi, Amir Dezfouli, Behnam Dezfouli
Unfortunately, the main energy efficiency mechanisms of 802.11, namely PSM and APSD, fall short when used in IoT applications.
1 code implementation • NeurIPS 2019 • Amir Dezfouli, Hassan Ashtiani, Omar Ghattas, Richard Nock, Peter Dayan, Cheng Soon Ong
Individual characteristics in human decision-making are often quantified by fitting a parametric cognitive model to subjects' behavior and then studying differences between them in the associated parameter space.
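The standard pipeline this abstract refers to — fit a parametric cognitive model to each subject's choices, then compare subjects in the fitted parameter space — can be sketched with a textbook Q-learning model for a two-armed bandit. The model, the grid-search fitting, and all names are illustrative assumptions, not the paper's method:

```python
import numpy as np

def q_learning_loglik(choices, rewards, alpha, beta):
    """Log-likelihood of a two-armed-bandit choice sequence under a
    Q-learning model with learning rate alpha and softmax inverse
    temperature beta (a simple parametric cognitive model)."""
    q = np.zeros(2)
    ll = 0.0
    for c, r in zip(choices, rewards):
        p = np.exp(beta * q) / np.exp(beta * q).sum()  # softmax policy
        ll += np.log(p[c])
        q[c] += alpha * (r - q[c])                     # prediction-error update
    return ll

def fit_subject(choices, rewards):
    """Per-subject maximum likelihood via a coarse grid search."""
    alphas = np.linspace(0.05, 0.95, 19)
    betas = np.linspace(0.5, 10.0, 20)
    best = max(((q_learning_loglik(choices, rewards, a, b), a, b)
                for a in alphas for b in betas))
    return best[1], best[2]        # fitted (alpha, beta) for this subject
```

Each subject then becomes a point (alpha, beta) in parameter space; the paper's point is that such fixed parametric forms can miss structure that a learned model would capture.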
no code implementations • NeurIPS 2018 • Amir Dezfouli, Richard Morris, Fabio T. Ramos, Peter Dayan, Bernard Balleine
One standard approach to this is model-based fMRI data analysis, in which a model is fitted to the behavioral data, i.e., a subject's choices, and then the neural data are parsed to find brain regions whose BOLD signals are related to the model's internal signals.
no code implementations • ICML 2018 • Amir Dezfouli, Edwin Bonilla, Richard Nock
Traditional methods for the discovery of latent network structures are limited in two ways: they either assume that all the signal comes from the network (i.e., there is no source of signal outside the network) or they place constraints on the network parameters to ensure model or algorithmic stability.
no code implementations • 27 Feb 2017 • Amir Dezfouli, Edwin V. Bonilla, Richard Nock
We propose a network structure discovery model for continuous observations that generalizes linear causal models by incorporating a Gaussian process (GP) prior on a network-independent component, and random sparsity and weight matrices as the network-dependent parameters.
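A loose generative sketch of this class of model — each node's observation as a GP-distributed network-independent component plus a network-dependent mixture of latent signals through random sparsity and weight matrices — might look as follows. The kernel choice, dimensions, and names are illustrative assumptions, not the paper's exact specification:

```python
import numpy as np

def sample_network_model(n_nodes=5, n_times=50, sparsity=0.3, seed=0):
    """Loose generative sketch (not the paper's exact model): draw a GP
    network-independent component per node, plus a network-dependent
    term mixing latent signals through a random sparse weight matrix."""
    rng = np.random.default_rng(seed)
    t = np.linspace(0, 1, n_times)
    # squared-exponential GP covariance over time, with jitter
    K = np.exp(-0.5 * ((t[:, None] - t[None, :]) / 0.1) ** 2) \
        + 1e-6 * np.eye(n_times)
    L = np.linalg.cholesky(K)
    g = L @ rng.normal(size=(n_times, n_nodes))       # network-independent GPs
    latent = L @ rng.normal(size=(n_times, n_nodes))  # latent node signals
    S = rng.random((n_nodes, n_nodes)) < sparsity     # random sparsity pattern
    np.fill_diagonal(S, False)                        # no self-connections
    W = rng.normal(size=(n_nodes, n_nodes)) * S       # sparse random weights
    y = latent @ W.T + g + 0.05 * rng.normal(size=(n_times, n_nodes))
    return y, W
```

Inference in the paper runs this direction in reverse: given observations y, recover the sparsity pattern and weights while the GP term absorbs signal that does not come from the network.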
no code implementations • 14 Sep 2016 • Pietro Galliani, Amir Dezfouli, Edwin V. Bonilla, Novi Quadrianto
We develop an automated variational inference method for Bayesian structured prediction problems with Gaussian process (GP) priors and linear-chain likelihoods.
1 code implementation • 2 Sep 2016 • Edwin V. Bonilla, Karl Krauth, Amir Dezfouli
We evaluate our approach quantitatively and qualitatively with experiments on small datasets, medium-scale datasets and large datasets, showing its competitiveness under different likelihood models and sparsity levels.
no code implementations • NeurIPS 2015 • Amir Dezfouli, Edwin V. Bonilla
We propose a sparse method for scalable automated variational inference (AVI) in a large class of models with Gaussian process (GP) priors, multiple latent functions, multiple outputs and non-linear likelihoods.