Search Results for author: Amir Dezfouli

Found 17 papers, 5 papers with code

3D-Prover: Diversity Driven Theorem Proving With Determinantal Point Processes

no code implementations 14 Oct 2024 Sean Lamont, Christian Walder, Amir Dezfouli, Paul Montague, Michael Norrish

We first demonstrate that it is possible to generate semantically aware tactic representations which capture the effect on the proving environment, likelihood of success and execution time.

Automated Theorem Proving Diversity +2
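
To make the determinantal-point-process angle in the title concrete, here is a minimal, hedged sketch of diversity-driven selection: a DPP kernel combines per-tactic quality with embedding similarity, and a greedy log-determinant rule picks tactics that are both promising and mutually diverse. The kernel, the greedy rule, and all names below are illustrative assumptions, not the paper's implementation.

```python
# Illustrative sketch only: greedy MAP selection under a DPP whose kernel
# rewards high-quality tactics and penalises pairs with similar embeddings.
import numpy as np

def dpp_greedy_select(embeddings, quality, k):
    """embeddings: (n, d) unit-normalised tactic representations;
    quality: (n,) positive scores (e.g. predicted likelihood of success);
    k: number of tactics to keep."""
    L = np.outer(quality, quality) * (embeddings @ embeddings.T)  # DPP kernel
    selected = []
    for _ in range(k):
        best, best_gain = None, -np.inf
        for i in range(len(quality)):
            if i in selected:
                continue
            idx = selected + [i]
            # "volume" (log-determinant) of the candidate subset under L
            gain = np.linalg.slogdet(L[np.ix_(idx, idx)])[1]
            if gain > best_gain:
                best, best_gain = i, gain
        selected.append(best)
    return selected
```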

Approximate Nearest Neighbour Search on Dynamic Datasets: An Investigation

1 code implementation 30 Apr 2024 Ben Harwood, Amir Dezfouli, Iadine Chades, Conrad Sanderson

For online feature learning, the Scalable Nearest Neighbours method is faster than baseline for recall rates below 75%.

BAIT: Benchmarking (Embedding) Architectures for Interactive Theorem-Proving

no code implementations 6 Mar 2024 Sean Lamont, Michael Norrish, Amir Dezfouli, Christian Walder, Paul Montague

We also provide a qualitative analysis, illustrating that improved performance is associated with more semantically-aware embeddings.

Automated Theorem Proving Benchmarking

Statistically Efficient Bayesian Sequential Experiment Design via Reinforcement Learning with Cross-Entropy Estimators

no code implementations 29 May 2023 Tom Blau, Iadine Chades, Amir Dezfouli, Daniel Steinberg, Edwin V. Bonilla

We propose the use of an alternative estimator based on the cross-entropy of the joint model distribution and a flexible proposal distribution.

reinforcement-learning
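
For orientation only, one well-known estimator in this general family is the Barber–Agakov-style lower bound on the expected information gain of a design d, whose second term is (minus) a cross-entropy between the joint model distribution and a flexible proposal q(θ | y, d); treating this as the paper's exact objective would be an assumption.

```latex
\mathrm{EIG}(d)
= \mathbb{E}_{p(\theta)\,p(y \mid \theta, d)}\!\left[\log \frac{p(\theta \mid y, d)}{p(\theta)}\right]
\;\ge\; H\big[p(\theta)\big]
+ \mathbb{E}_{p(\theta)\,p(y \mid \theta, d)}\big[\log q(\theta \mid y, d)\big].
```

The bound is tight when the proposal matches the true posterior, which is what motivates making q flexible.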

Transformed Distribution Matching for Missing Value Imputation

1 code implementation 20 Feb 2023 He Zhao, Ke Sun, Amir Dezfouli, Edwin Bonilla

The key to missing value imputation is to capture the data distribution with incomplete samples and impute the missing values accordingly.

Imputation Missing Values
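
To make the distribution-matching idea concrete, here is a minimal sketch under stated assumptions: missing entries are treated as free parameters and optimised so that two random batches of the partially imputed data look alike under an MMD criterion. This illustrates the generic idea only, not the transformed-distribution matching procedure of the paper; all names are hypothetical.

```python
# Illustrative sketch only: learn missing entries by matching the
# distributions of two random batches of the imputed data (MMD objective).
import torch

def rbf_mmd(x, y, sigma=1.0):
    """Maximum mean discrepancy between two batches under an RBF kernel."""
    def k(a, b):
        return torch.exp(-torch.cdist(a, b).pow(2) / (2 * sigma ** 2))
    return k(x, x).mean() + k(y, y).mean() - 2 * k(x, y).mean()

def impute(data, mask, steps=500, lr=0.1, batch=64):
    """data: (n, d) float tensor (arbitrary values at missing positions);
    mask: (n, d) bool tensor, True where the value is observed."""
    fill = torch.randn(int((~mask).sum()), requires_grad=True)  # free parameters
    opt = torch.optim.Adam([fill], lr=lr)
    for _ in range(steps):
        x = data.clone()
        x[~mask] = fill                              # plug in current imputations
        i = torch.randperm(len(x))[:batch]
        j = torch.randperm(len(x))[:batch]
        loss = rbf_mmd(x[i], x[j])                   # random batches should match
        opt.zero_grad(); loss.backward(); opt.step()
    out = data.clone()
    out[~mask] = fill.detach()
    return out
```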

The Contextual Lasso: Sparse Linear Models via Deep Neural Networks

no code implementations NeurIPS 2023 Ryan Thompson, Amir Dezfouli, Robert Kohn

With this capability gap in mind, we study a not-uncommon situation where the input features dichotomize into two groups: explanatory features, which are candidates for inclusion as variables in an interpretable model, and contextual features, which select from the candidate variables and determine their effects.

Decision Making Interpretable Machine Learning
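
A minimal sketch of that two-group structure, with hypothetical names and a simple soft-thresholding stand-in for the lasso penalty (the authors' actual architecture and training objective may differ): a small network maps the contextual features to sparse coefficients, and the prediction is an ordinary linear model in the explanatory features.

```python
# Illustrative sketch only: context -> sparse coefficients -> linear prediction.
import torch
import torch.nn as nn

class ContextualSparseLinear(nn.Module):
    def __init__(self, n_explanatory, n_contextual, hidden=32, threshold=0.1):
        super().__init__()
        self.threshold = threshold
        self.net = nn.Sequential(                      # context -> dense coefficients
            nn.Linear(n_contextual, hidden), nn.ReLU(),
            nn.Linear(hidden, n_explanatory + 1),      # +1 for an intercept
        )

    def forward(self, x_explanatory, x_contextual):
        raw = self.net(x_contextual)
        coef, intercept = raw[:, :-1], raw[:, -1]
        # soft-thresholding zeroes out small coefficients (lasso-like sparsity)
        coef = torch.sign(coef) * torch.clamp(coef.abs() - self.threshold, min=0.0)
        return (coef * x_explanatory).sum(dim=-1) + intercept
```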

Bayesian Optimisation for Mixed-Variable Inputs using Value Proposals

no code implementations 10 Feb 2022 Yan Zuo, Amir Dezfouli, Iadine Chades, David Alexander, Benjamin Ward Muir

Many real-world optimisation problems are defined over both categorical and continuous variables, yet efficient optimisation methods such as Bayesian Optimisation (BO) are not designed to handle such mixed-variable search spaces.

Bayesian Optimisation

Optimizing Sequential Experimental Design with Deep Reinforcement Learning

1 code implementation 2 Feb 2022 Tom Blau, Edwin V. Bonilla, Iadine Chades, Amir Dezfouli

Bayesian approaches developed to solve the optimal design of sequential experiments are mathematically elegant but computationally challenging.

Deep Reinforcement Learning Experimental Design +2

EAPS: Edge-Assisted Predictive Sleep Scheduling for 802.11 IoT Stations

no code implementations 28 Jun 2020 Jaykumar Sheth, Cyrus Miremadi, Amir Dezfouli, Behnam Dezfouli

Unfortunately, the main energy efficiency mechanisms of 802.11, namely PSM and APSD, fall short when used in IoT applications.

Scheduling

Disentangled behavioural representations

1 code implementation NeurIPS 2019 Amir Dezfouli, Hassan Ashtiani, Omar Ghattas, Richard Nock, Peter Dayan, Cheng Soon Ong

Individual characteristics in human decision-making are often quantified by fitting a parametric cognitive model to subjects' behavior and then studying differences between them in the associated parameter space.

Decision Making
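
For context, the standard approach described above often looks like the following hedged sketch: fit a two-parameter Q-learning model (learning rate and inverse temperature) to a subject's choices by maximum likelihood, then compare subjects in that parameter space. The function names, starting values, and bounds are illustrative.

```python
# Illustrative sketch only: maximum-likelihood fit of a simple Q-learning model.
import numpy as np
from scipy.optimize import minimize

def neg_log_lik(params, choices, rewards, n_actions=2):
    alpha, beta = params                  # learning rate, inverse temperature
    q = np.zeros(n_actions)
    nll = 0.0
    for a, r in zip(choices, rewards):
        logits = beta * q
        log_p = logits - np.log(np.exp(logits).sum())   # softmax choice rule
        nll -= log_p[a]
        q[a] += alpha * (r - q[a])        # prediction-error update
    return nll

def fit_subject(choices, rewards):
    res = minimize(neg_log_lik, x0=[0.5, 1.0], args=(choices, rewards),
                   bounds=[(1e-3, 1.0), (1e-3, 20.0)])
    return res.x                          # point estimates of (alpha, beta)
```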

Integrated accounts of behavioral and neuroimaging data using flexible recurrent neural network models

no code implementations NeurIPS 2018 Amir Dezfouli, Richard Morris, Fabio T. Ramos, Peter Dayan, Bernard Balleine

One standard approach to this is model-based fMRI data analysis, in which a model is fitted to the behavioral data, i.e., a subject's choices, and then the neural data are parsed to find brain regions whose BOLD signals are related to the model's internal signals.

Decision Making
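
A hedged sketch of the model-based analysis described above, with HRF convolution and confound regressors omitted for brevity and all names illustrative: regress each voxel's BOLD time series on a signal derived from the fitted behavioural model, such as trial-wise prediction errors.

```python
# Illustrative sketch only: per-voxel GLM with one model-derived regressor.
import numpy as np

def model_based_glm(bold, model_signal):
    """bold: (time, voxels) array; model_signal: (time,) array of the model's
    internal signal aligned to scan times. Returns one beta per voxel."""
    X = np.column_stack([np.ones_like(model_signal), model_signal])  # intercept + signal
    betas, *_ = np.linalg.lstsq(X, bold, rcond=None)
    return betas[1]                      # coefficient of the model signal per voxel
```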

Variational Network Inference: Strong and Stable with Concrete Support

no code implementations ICML 2018 Amir Dezfouli, Edwin Bonilla, Richard Nock

Traditional methods for the discovery of latent network structures are limited in two ways: they either assume that all the signal comes from the network (i.e., there is no source of signal outside the network) or they place constraints on the network parameters to ensure model or algorithmic stability.

Semi-parametric Network Structure Discovery Models

no code implementations 27 Feb 2017 Amir Dezfouli, Edwin V. Bonilla, Richard Nock

We propose a network structure discovery model for continuous observations that generalizes linear causal models by incorporating a Gaussian process (GP) prior on a network-independent component, and random sparsity and weight matrices as the network-dependent parameters.

Uncertainty Quantification Variational Inference
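
One plausible reading of that structure, written as a hedged sketch rather than the paper's exact likelihood: the network-dependent part is a randomly masked weight matrix acting on the observations, and the network-independent component is a GP-distributed function, e.g.

```latex
\mathbf{y}_n = (\mathbf{A} \circ \mathbf{W})\,\mathbf{y}_n + \mathbf{f}(\mathbf{x}_n) + \boldsymbol{\epsilon}_n,
\qquad
A_{ij} \sim \mathrm{Bernoulli}(\rho), \quad
W_{ij} \sim \mathcal{N}(0, \sigma_w^2), \quad
f_d \sim \mathcal{GP}(0, k), \quad
\boldsymbol{\epsilon}_n \sim \mathcal{N}(\mathbf{0}, \sigma^2 \mathbf{I}),
```

where the masked weights A ∘ W play the role of the random sparsity and weight matrices, and f is the GP-distributed, network-independent component.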

Gray-box inference for structured Gaussian process models

no code implementations 14 Sep 2016 Pietro Galliani, Amir Dezfouli, Edwin V. Bonilla, Novi Quadrianto

We develop an automated variational inference method for Bayesian structured prediction problems with Gaussian process (GP) priors and linear-chain likelihoods.

Stochastic Optimization Structured Prediction +1
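
For reference, a linear-chain likelihood with GP-distributed unary potentials typically takes a CRF-like form such as the sketch below; the exact parameterisation used in the paper is not assumed here.

```latex
p(\mathbf{y} \mid \mathbf{f}, \mathbf{V}) =
\frac{\exp\!\Big(\sum_{t=1}^{T} f_{y_t}(\mathbf{x}_t) + \sum_{t=2}^{T} V_{y_{t-1},\, y_t}\Big)}
{\sum_{\mathbf{y}'} \exp\!\Big(\sum_{t=1}^{T} f_{y'_t}(\mathbf{x}_t) + \sum_{t=2}^{T} V_{y'_{t-1},\, y'_t}\Big)},
\qquad f_c \sim \mathcal{GP}(0, k).
```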

Generic Inference in Latent Gaussian Process Models

1 code implementation 2 Sep 2016 Edwin V. Bonilla, Karl Krauth, Amir Dezfouli

We evaluate our approach quantitatively and qualitatively with experiments on small datasets, medium-scale datasets and large datasets, showing its competitiveness under different likelihood models and sparsity levels.

General Classification Stochastic Optimization

Scalable Inference for Gaussian Process Models with Black-Box Likelihoods

no code implementations NeurIPS 2015 Amir Dezfouli, Edwin V. Bonilla

We propose a sparse method for scalable automated variational inference (AVI) in a large class of models with Gaussian process (GP) priors, multiple latent functions, multiple outputs and non-linear likelihoods.

General Classification regression +1
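
As background, sparse variational inference for such models usually optimises a bound of the following general shape (a sketch, not necessarily the paper's exact objective), with the expected log-likelihood term estimated by sampling because the likelihood is treated as a black box:

```latex
\log p(\mathbf{y}) \;\ge\;
\sum_{n=1}^{N} \mathbb{E}_{q(\mathbf{f}_n)}\!\big[\log p(\mathbf{y}_n \mid \mathbf{f}_n)\big]
- \mathrm{KL}\!\big(q(\mathbf{u}) \,\|\, p(\mathbf{u})\big),
```

where u are inducing variables shared across the latent functions and q(f_n) is the induced variational marginal at data point n.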
