no code implementations • 30 Oct 2024 • Matthew Willetts, Christian Harrington
We find that, under this modeling approach, AMM pools (even with no retail/noise traders) often offer superior execution and rebalancing efficiency compared to centralised rebalancing, for all but the lowest CEX fee levels.
no code implementations • 23 Apr 2024 • Matthew Willetts, Christian Harrington
Maximal Extractable Value (MEV) in Constant Function Market Making is fairly well understood.
no code implementations • 27 Mar 2024 • Matthew Willetts, Christian Harrington
We then demonstrate this method on a range of market backtests, including simulating pool performance when trading fees are present, finding that the new approximately-optimal method of changing weights gives robust increases in pool performance.
no code implementations • 9 Feb 2024 • Matthew Willetts, Christian Harrington
Convex optimisation has provided a mechanism to determine arbitrage trades on automated market makers (AMMs) almost since their inception.
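For the fee-less constant-product pool, the convex arbitrage problem has a well-known closed-form solution; the following is a minimal sketch of that textbook case, not the optimisation method studied in the paper (function name and the example numbers are illustrative):

```python
import math

def optimal_arb(x, y, p_ext):
    """Closed-form optimal arbitrage against a fee-less constant-product pool.

    x, y  : pool reserves; the pool's marginal price of X in units of Y is y/x.
    p_ext : external reference price of X in units of Y.
    Returns the trader's profit, measured in Y.
    """
    k = x * y
    # Post-trade reserves that equalise the pool price with the external price
    # while preserving the invariant x * y = k.
    x_new = math.sqrt(k / p_ext)
    y_new = math.sqrt(k * p_ext)
    # The trader withdraws (x - x_new) of X from the pool, deposits
    # (y_new - y) of Y, and settles the X leg externally at p_ext.
    return p_ext * (x - x_new) + (y - y_new)

# Pool priced at 1.0, external price 1.21: the arbitrage yields 1.0 Y.
profit = optimal_arb(100.0, 100.0, 1.21)
```

With fees or more exotic invariants no closed form exists in general, which is where numerical convex solvers come in.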
no code implementations • 19 Jan 2023 • Fabian Falck, Christopher Williams, Dominic Danks, George Deligiannidis, Christopher Yau, Chris Holmes, Arnaud Doucet, Matthew Willetts
U-Net architectures are ubiquitous in state-of-the-art deep learning; however, their regularisation properties and relationship to wavelets are understudied.
1 code implementation • NeurIPS 2021 • Fabian Falck, Haoting Zhang, Matthew Willetts, George Nicholson, Christopher Yau, Chris Holmes
Work in deep clustering focuses on finding a single partition of data.
1 code implementation • 9 Jun 2021 • Matthew Willetts, Brooks Paige
Surprisingly, we discover side information is not necessary for algorithmic stability: using standard quantitative measures of identifiability, we find deep generative models with latent clusterings are empirically identifiable to the same degree as models which rely on auxiliary labels.
no code implementations • 31 May 2021 • Alexander Camuto, Matthew Willetts
We further demonstrate that adding Gaussian noise to the input of a VAE allows us to more finely control the frequency content and the Lipschitz constant of the VAE encoder networks.
no code implementations • 15 Feb 2021 • Ben Barrett, Alexander Camuto, Matthew Willetts, Tom Rainforth
We introduce an approach for training Variational Autoencoders (VAEs) that are certifiably robust to adversarial attack.
no code implementations • 14 Jul 2020 • Matthew Willetts, Xenia Miscouridou, Stephen Roberts, Chris Holmes
Successfully training Variational Autoencoders (VAEs) with a hierarchy of discrete latent variables remains an area of active research.
no code implementations • 14 Jul 2020 • Alexander Camuto, Matthew Willetts, Stephen Roberts, Chris Holmes, Tom Rainforth
We make inroads into understanding the robustness of Variational Autoencoders (VAEs) to adversarial attacks and other input perturbations.
no code implementations • NeurIPS 2020 • Alexander Camuto, Matthew Willetts, Umut Şimşekli, Stephen Roberts, Chris Holmes
We study the regularisation induced in neural networks by Gaussian noise injections (GNIs).
no code implementations • 18 Feb 2020 • Alexander Camuto, Matthew Willetts, Brooks Paige, Chris Holmes, Stephen Roberts
Separating high-dimensional data like images into independent latent factors, i.e. independent component analysis (ICA), remains an open research problem.
no code implementations • 30 Jan 2020 • Miguel Morin, Matthew Willetts
We show that the stochasticity in training ResNets for image classification on GPUs in TensorFlow is dominated by the non-determinism from GPUs, rather than by the initialisation of the weights and biases of the network or by the sequence of minibatches given.
no code implementations • 25 Sep 2019 • Matthew Willetts, Alexander Camuto, Stephen Roberts, Chris Holmes
We develop a new method for regularising neural networks.
no code implementations • 25 Sep 2019 • Matthew Willetts, Stephen Roberts, Chris Holmes
In clustering we normally output one cluster variable for each datapoint.
no code implementations • 25 Sep 2019 • Matthew Willetts, Alexander Camuto, Stephen Roberts, Chris Holmes
This paper is concerned with the robustness of VAEs to adversarial attacks.
no code implementations • ICLR 2021 • Matthew Willetts, Alexander Camuto, Tom Rainforth, Stephen Roberts, Chris Holmes
We make significant advances in addressing this issue by introducing methods for producing adversarially robust VAEs.
1 code implementation • 24 Jan 2019 • Matthew Willetts, Stephen J. Roberts, Christopher C. Holmes
It could easily be the case that some classes of data are found only in the unlabelled dataset -- perhaps the labelling process was biased -- so for some classes we have no labelled examples to train on.
1 code implementation • 29 Oct 2018 • Matthew Willetts, Aiden Doherty, Stephen Roberts, Chris Holmes
We introduce 'semi-unsupervised learning', a problem regime related to transfer learning and zero-shot learning where, in the training data, some classes are sparsely labelled and others entirely unlabelled.