1 code implementation • 9 Dec 2022 • Jake Fawkes, Robert Hu, Robin J. Evans, Dino Sejdinovic
These improved estimators are inspired by doubly robust estimators of the causal mean, using a similar form within the kernel space.
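For context, the classical doubly robust (AIPW) estimator of the causal mean E[Y(1)] that inspires these kernel-space estimators can be written in its standard textbook form (this is the generic estimator, not the paper's own kernelised notation):

```latex
\hat{\psi}_{\mathrm{DR}}
  = \frac{1}{n} \sum_{i=1}^{n}
    \left[
      \hat{\mu}_1(X_i)
      + \frac{A_i \,\bigl(Y_i - \hat{\mu}_1(X_i)\bigr)}{\hat{e}(X_i)}
    \right],
```

where \(\hat{\mu}_1\) is the outcome regression, \(\hat{e}\) the propensity score, and \(A_i\) the treatment indicator; the estimator remains consistent if either nuisance model is correctly specified.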
1 code implementation • 26 May 2022 • Robert Hu, Siu Lun Chau, Jaime Ferrando Huertas, Dino Sejdinovic
While preference modelling is becoming one of the pillars of machine learning, the problem of preference explanation remains challenging and underexplored.
1 code implementation • 12 May 2022 • Veit D. Wild, Robert Hu, Dino Sejdinovic
We develop a framework for generalized variational inference in infinite-dimensional function spaces and use it to construct a method termed Gaussian Wasserstein inference (GWI).
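GWI builds on the Wasserstein-2 distance, which for two Gaussian measures has a well-known closed form (stated here in the finite-dimensional case; the paper works with its analogue for Gaussian measures on function spaces):

```latex
W_2^2\bigl(\mathcal{N}(m_1,\Sigma_1),\,\mathcal{N}(m_2,\Sigma_2)\bigr)
  = \lVert m_1 - m_2 \rVert^2
  + \operatorname{tr}\!\Bigl(\Sigma_1 + \Sigma_2
      - 2\bigl(\Sigma_1^{1/2}\,\Sigma_2\,\Sigma_1^{1/2}\bigr)^{1/2}\Bigr).
```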
no code implementations • 2 Feb 2022 • Robert Hu, Siu Lun Chau, Dino Sejdinovic, Joan Alexis Glaunès
Kernel matrix-vector multiplication (KMVM) is a foundational operation in machine learning and scientific computing.
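As a minimal illustration of the operation itself, the following sketch computes a KMVM naively with a Gaussian (RBF) kernel; all names are illustrative and this is not the paper's implementation, which targets the O(n²) cost that the naive version below incurs:

```python
# Naive kernel matrix-vector multiplication (KMVM):
# (K v)_i = sum_j k(x_i, x_j) v_j, with k the Gaussian (RBF) kernel.
import numpy as np

def kmvm_rbf(x, v, lengthscale=1.0):
    """Compute K @ v for K_ij = exp(-||x_i - x_j||^2 / (2 * lengthscale^2))."""
    sq_dists = np.sum((x[:, None, :] - x[None, :, :]) ** 2, axis=-1)
    K = np.exp(-sq_dists / (2.0 * lengthscale ** 2))
    # O(n^2) time and memory; scalable KMVM methods avoid materialising K.
    return K @ v

rng = np.random.default_rng(0)
x = rng.normal(size=(100, 3))
v = rng.normal(size=100)
result = kmvm_rbf(x, v)
```

Fast KMVM approaches keep the same interface but replace the explicit kernel matrix with on-the-fly or structured computation.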
1 code implementation • 25 Nov 2021 • Robert Hu, Dino Sejdinovic, Robin J. Evans
Causal inference grows increasingly complex as the number of confounders increases.
no code implementations • 18 Oct 2021 • Siu Lun Chau, Robert Hu, Javier Gonzalez, Dino Sejdinovic
Feature attribution for kernel methods is often heuristic and not individualised for each prediction.
2 code implementations • 26 Mar 2021 • David Rindt, Robert Hu, David Steinsaltz, Dino Sejdinovic
We consider frequently used scoring rules for right-censored survival regression models such as time-dependent concordance, survival-CRPS, integrated Brier score and integrated binomial log-likelihood, and prove that none of them is a proper scoring rule.
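To fix ideas, the sketch below computes the Brier score at a single horizon t in the simple uncensored case; the integrated Brier score averages this quantity over t, and the IPCW weighting used to handle censoring is deliberately omitted here (all names are illustrative, not the paper's code):

```python
# Brier score at a fixed horizon t for survival predictions (uncensored case):
# mean squared error between predicted survival S(t) and the indicator
# of actually surviving past t.
import numpy as np

def brier_score_at_t(event_times, surv_prob_at_t, t):
    """Mean of (S_hat(t) - 1{T > t})^2 over subjects."""
    alive = (event_times > t).astype(float)
    return np.mean((surv_prob_at_t - alive) ** 2)

times = np.array([1.0, 2.0, 5.0, 7.0])     # observed event times
preds = np.array([0.1, 0.3, 0.8, 0.9])     # predicted P(T > 3) per subject
score = brier_score_at_t(times, preds, t=3.0)  # -> 0.0375
```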
no code implementations • 11 Feb 2020 • Robert Hu, Geoff K. Nicholls, Dino Sejdinovic
We outline an inherent weakness of tensor factorization models when latent factors are expressed as a function of side information and propose a novel method to mitigate this weakness.