Search Results for author: Christopher Yau

Found 13 papers, 7 papers with code

Neural Decomposition: Functional ANOVA with Variational Autoencoders

1 code implementation • 25 Jun 2020 • Kaspar Märtens, Christopher Yau

Our goal is to provide a feature-level variance decomposition, i.e. to decompose variation in the data by separating out the marginal additive effects of latent variables z and fixed inputs c from their non-linear interactions.

Dimensionality Reduction
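The decomposition this abstract describes can be illustrated with a toy example. The sketch below is my own illustration, not the paper's neural method: it performs a classical two-way functional ANOVA on a grid, separating a known function of z and c into a grand mean, marginal additive effects, and an interaction term.

```python
import numpy as np

# Toy functional ANOVA (illustration only, not the paper's VAE-based method):
# decompose f(z, c) into grand mean + marginal effect of z + marginal effect
# of c + interaction, evaluated on a regular grid.
z = np.linspace(-2, 2, 50)
c = np.linspace(-2, 2, 50)
Z, C = np.meshgrid(z, c, indexing="ij")
F = np.sin(Z) + 0.5 * C + 0.3 * Z * C   # additive parts plus an interaction

grand = F.mean()
f_z = F.mean(axis=1) - grand            # marginal additive effect of z
f_c = F.mean(axis=0) - grand            # marginal additive effect of c
interaction = F - grand - f_z[:, None] - f_c[None, :]

# The four pieces reconstruct F exactly, and the effects are centred.
recon = grand + f_z[:, None] + f_c[None, :] + interaction
print(np.allclose(recon, F))            # True
```

The centring (each effect averages to zero over the grid) is what makes the decomposition identifiable; the paper's contribution is enforcing analogous constraints inside a VAE decoder rather than on a fixed grid.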

BasisVAE: Translation-invariant feature-level clustering with Variational Autoencoders

1 code implementation • 6 Mar 2020 • Kaspar Märtens, Christopher Yau

Variational Autoencoders (VAEs) provide a flexible and scalable framework for non-linear dimensionality reduction.

Dimensionality Reduction • Translation +1

Bayesian Nonparametric Boolean Factor Models

no code implementations • 28 Jun 2019 • Tammo Rukat, Christopher Yau

We build upon probabilistic models for Boolean matrix and Boolean tensor factorisation that have recently been shown to solve these problems with unprecedented accuracy and to enable posterior inference to scale to billions of observations.

Decomposing feature-level variation with Covariate Gaussian Process Latent Variable Models

2 code implementations • 16 Oct 2018 • Kaspar Märtens, Kieran R. Campbell, Christopher Yau

The interpretation of complex high-dimensional data typically requires the use of dimensionality reduction techniques to extract explanatory low-dimensional representations.

Dimensionality Reduction

Probabilistic Boolean Tensor Decomposition

1 code implementation • ICML 2018 • Tammo Rukat, Chris Holmes, Christopher Yau

Boolean tensor decomposition approximates data of multi-way binary relationships as a product of interpretable low-rank binary factors, following the rules of Boolean algebra.

Model Selection • Tensor Decomposition

TensOrMachine: Probabilistic Boolean Tensor Decomposition

1 code implementation • 11 May 2018 • Tammo Rukat, Chris C. Holmes, Christopher Yau

Boolean tensor decomposition approximates data of multi-way binary relationships as a product of interpretable low-rank binary factors, following the rules of Boolean algebra.

Model Selection • Tensor Decomposition
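The Boolean tensor product underlying both of these abstracts is simple to state: an observed entry is 1 whenever any latent component is active in all of its modes. A minimal numpy sketch (my own illustration, not the papers' inference code; factor shapes are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
R = 3                            # latent rank (assumed for illustration)
A = rng.random((10, R)) < 0.4    # binary factor matrix, mode 1
B = rng.random((8, R)) < 0.4     # binary factor matrix, mode 2
C = rng.random((6, R)) < 0.4     # binary factor matrix, mode 3

# Boolean tensor product: X[i,j,k] = OR over r of (A[i,r] AND B[j,r] AND C[k,r])
X = (A[:, None, None, :] & B[None, :, None, :] & C[None, None, :, :]).any(axis=-1)
print(X.shape)   # (10, 8, 6)
```

The OR in place of a sum is what distinguishes Boolean algebra from ordinary low-rank decomposition: overlapping rank-1 components cannot "add up" past 1, which is what makes the factors interpretable as overlapping binary patterns.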

Augmented Ensemble MCMC sampling in Factorial Hidden Markov Models

no code implementations • 24 Mar 2017 • Kaspar Märtens, Michalis K. Titsias, Christopher Yau

Bayesian inference for factorial hidden Markov models is challenging due to the exponentially sized latent variable space.

Bayesian Inference
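The "exponentially sized latent variable space" is easy to make concrete: K binary chains induce a joint chain over 2^K states, so naive exact inference blows up quickly. A small sketch (illustration only; the constants are arbitrary):

```python
from itertools import product

# K binary chains -> 2**K joint latent states at each time step.
K = 3
joint = list(product([0, 1], repeat=K))
print(len(joint))                 # 8 = 2**3

# A naive forward pass over the joint chain scales as O(T * (2**K)**2),
# which is why specialised samplers are needed for even moderate K.
K_big, T = 10, 100
print(T * (2 ** K_big) ** 2)      # 104857600
```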

Testing and Learning on Distributions with Symmetric Noise Invariance

no code implementations • NeurIPS 2017 • Ho Chung Leon Law, Christopher Yau, Dino Sejdinovic

Kernel embeddings of distributions and the Maximum Mean Discrepancy (MMD), the resulting distance between distributions, are useful tools for fully nonparametric two-sample testing and learning on distributions.

Two-sample testing
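The MMD statistic mentioned in the abstract has a standard unbiased estimator built from kernel evaluations. A minimal sketch, assuming a Gaussian (RBF) kernel with a fixed bandwidth (this is the generic estimator, not the paper's noise-invariant variant):

```python
import numpy as np

def rbf(X, Y, gamma=1.0):
    # Gaussian kernel matrix: k(x, y) = exp(-gamma * ||x - y||^2)
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def mmd2_unbiased(X, Y, gamma=1.0):
    # Unbiased estimate of squared MMD between samples X ~ P and Y ~ Q:
    # off-diagonal means of Kxx and Kyy, minus twice the mean of Kxy.
    m, n = len(X), len(Y)
    Kxx, Kyy, Kxy = rbf(X, X, gamma), rbf(Y, Y, gamma), rbf(X, Y, gamma)
    return ((Kxx.sum() - np.trace(Kxx)) / (m * (m - 1))
            + (Kyy.sum() - np.trace(Kyy)) / (n * (n - 1))
            - 2 * Kxy.mean())

rng = np.random.default_rng(1)
same = mmd2_unbiased(rng.normal(size=(200, 2)), rng.normal(size=(200, 2)))
diff = mmd2_unbiased(rng.normal(size=(200, 2)), rng.normal(2.0, 1.0, size=(200, 2)))
print(same < diff)   # True: the shifted sample yields a much larger MMD
```

Near zero (possibly slightly negative, since the estimator is unbiased) for samples from the same distribution, clearly positive for distinct distributions — which is what makes MMD usable as a two-sample test statistic.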

Bayesian Boolean Matrix Factorisation

no code implementations • ICML 2017 • Tammo Rukat, Chris C. Holmes, Michalis K. Titsias, Christopher Yau

Boolean matrix factorisation aims to decompose a binary data matrix into an approximate Boolean product of two low-rank binary matrices: one containing meaningful patterns, the other quantifying how the observations can be expressed as a combination of these patterns.

Collaborative Filtering
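The Boolean matrix product described above can be sketched in a few lines (my own illustration, not the paper's Bayesian inference; sizes and sparsity are arbitrary). A useful trick: the Boolean product equals the ordinary integer matrix product thresholded at one.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, R = 20, 15, 4
U = rng.random((n, R)) < 0.3     # which patterns each observation uses (binary)
V = rng.random((R, m)) < 0.3     # the patterns themselves (binary)

# Boolean product: X[i,j] = OR over r of (U[i,r] AND V[r,j]).
# Equivalent to the ordinary matrix product clipped at 1.
X = (U.astype(int) @ V.astype(int)) > 0
print(X.shape)   # (20, 15)
```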

Stratification of patient trajectories using covariate latent variable models

1 code implementation • 27 Oct 2016 • Kieran R. Campbell, Christopher Yau

To learn such a continuous disease score one could infer a latent variable from dynamic "omics" data such as RNA-seq that correlates with an outcome of interest such as survival time.

Variational Inference

Hamming Ball Auxiliary Sampling for Factorial Hidden Markov Models

no code implementations • NeurIPS 2014 • Michalis K. Titsias, Christopher Yau

We introduce a novel sampling algorithm for Markov chain Monte Carlo-based Bayesian inference for factorial hidden Markov models.

Bayesian Inference
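The core object in this sampler is the Hamming ball: the set of binary state vectors within a given Hamming distance of the current state, over which the auxiliary update is performed. A small enumeration sketch (illustration of the combinatorial object only, not the paper's sampler):

```python
from itertools import combinations

def hamming_ball(x, radius):
    # All binary vectors within Hamming distance `radius` of x,
    # enumerated by flipping every subset of up to `radius` bits.
    x = tuple(x)
    ball = []
    for r in range(radius + 1):
        for idx in combinations(range(len(x)), r):
            y = list(x)
            for i in idx:
                y[i] = 1 - y[i]
            ball.append(tuple(y))
    return ball

ball = hamming_ball([0, 1, 0, 1], 1)
print(len(ball))   # 5 = C(4,0) + C(4,1)
```

Restricting each update to such a ball keeps the per-step cost polynomial in the number of chains while still allowing joint moves across chains, which is the appeal over naive single-site Gibbs updates.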

Statistical Inference in Hidden Markov Models using $k$-segment Constraints

no code implementations • 5 Nov 2013 • Michalis K. Titsias, Christopher Yau, Christopher C. Holmes

Hidden Markov models (HMMs) are one of the most widely used statistical methods for analyzing sequence data.
