no code implementations • 4 Feb 2022 • Faïcel Chamroukhi, Nhat Thien Pham, Van Hà Hoang, Geoffrey J. McLachlan
We extend Mixtures-of-Experts (ME) modeling, a framework of choice for modeling heterogeneity in data for prediction with vectorial observations, to this functional data analysis context.
no code implementations • 8 Apr 2021 • Daniel Ahfock, Geoffrey J. McLachlan
There has been increasing attention in machine learning to semi-supervised learning (SSL) approaches, in which a classifier is formed from training data consisting of a limited number of classified observations but a much larger number of unclassified observations.
no code implementations • 7 Apr 2021 • Daniel Ahfock, Geoffrey J. McLachlan
In the framework of model-based classification, a simple but key observation is that when the manual labels are sampled using the posterior probabilities of class membership, the noisy labels are as valuable as the ground-truth labels in terms of statistical information.
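The sampling scheme in the observation above can be sketched as follows. This is a minimal illustration with a hypothetical two-class model with unit-variance Gaussian classes and made-up parameters, not the paper's actual setup:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical two-class model: unit-variance Gaussian class-conditional
# densities with known means and equal prior probabilities.
means = np.array([-1.0, 1.0])
prior = np.array([0.5, 0.5])

def posterior(x):
    """Posterior probabilities of class membership for observation x."""
    dens = prior * np.exp(-0.5 * (x - means) ** 2)
    return dens / dens.sum()

x = 0.3
p = posterior(x)
# A "noisy" manual label drawn using the posterior probabilities of
# class membership, as described in the abstract:
noisy_label = rng.choice(2, p=p)
```

Labels generated this way are random, but because they are drawn from the model's own posterior, they carry the same statistical information per observation as deterministic ground-truth labels.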
no code implementations • 22 Sep 2020 • TrungTin Nguyen, Hien D. Nguyen, Faicel Chamroukhi, Geoffrey J. McLachlan
The mixture of experts (MoE) model provides a well-principled finite mixture construction for prediction, allowing the gating network (mixture weights) to learn from the predictors (explanatory variables) jointly with the experts' network (mixture component densities).
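The MoE construction described above can be sketched in a few lines. This is a hedged illustration with hypothetical parameters: a softmax gating network whose weights depend on the predictor x, and Gaussian experts whose means are linear in x; the actual models in the paper are more general:

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax."""
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def moe_density(y, x, gate_w, expert_means, expert_sds):
    """Mixture-of-experts density p(y | x) for scalar y and x.

    gate_w       : (K, 2) gating-network coefficients (intercept, slope)
    expert_means : (K, 2) expert mean coefficients (intercept, slope)
    expert_sds   : (K,)   expert standard deviations
    """
    features = np.array([1.0, x])
    # Gating network: mixture weights depend on the predictor x.
    pi = softmax(gate_w @ features)
    # Experts' network: Gaussian component densities with x-dependent means.
    mu = expert_means @ features
    dens = np.exp(-0.5 * ((y - mu) / expert_sds) ** 2) / (
        expert_sds * np.sqrt(2 * np.pi)
    )
    return float(pi @ dens)

# Hypothetical two-expert parameterization, for illustration only.
gate_w = np.array([[0.0, 1.0], [0.0, -1.0]])
expert_means = np.array([[1.0, 2.0], [-1.0, 0.5]])
expert_sds = np.array([0.5, 1.0])
p = moe_density(0.8, 0.2, gate_w, expert_means, expert_sds)
```

The key design point is that both pi and mu are functions of x, so the partition of the predictor space and the component predictions are learned together.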
no code implementations • 13 Apr 2020 • Geoffrey J. McLachlan, Daniel Ahfock
For class-conditional distributions taken to be known up to a vector of unknown parameters, the aim is to estimate the Bayes rule for the allocation of subsequent unclassified observations.
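A plug-in version of this idea can be sketched as follows: estimate the unknown parameters from classified data, then allocate a new observation to the class with the highest estimated posterior probability. The one-dimensional equal-variance Gaussian setting and the toy data are hypothetical simplifications:

```python
import numpy as np

def fit_plugin_rule(X, y):
    """Estimate class means, a common variance, and class priors from
    classified univariate data (plug-in estimates for the Bayes rule)."""
    classes = np.unique(y)
    means = np.array([X[y == c].mean() for c in classes])
    var = np.mean([X[y == c].var() for c in classes])
    priors = np.array([np.mean(y == c) for c in classes])
    return classes, means, var, priors

def allocate(x, classes, means, var, priors):
    """Allocate x to the class maximizing the estimated log-posterior."""
    log_post = np.log(priors) - 0.5 * (x - means) ** 2 / var
    return classes[int(np.argmax(log_post))]

# Illustrative classified training data (hypothetical).
X = np.array([-2.1, -1.9, -2.0, 1.9, 2.1, 2.0])
y = np.array([0, 0, 0, 1, 1, 1])
classes, means, var, priors = fit_plugin_rule(X, y)
```

With the parameter estimates plugged in, subsequent unclassified observations are allocated by a single argmax over the estimated posteriors.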
no code implementations • 18 Nov 2017 • Cinzia Viroli, Geoffrey J. McLachlan
Deep learning is a hierarchical inference method in which successive layers of learning describe complex relationships more efficiently.
no code implementations • 12 May 2017 • Hien D. Nguyen, Geoffrey J. McLachlan
Support vector machines (SVMs) are an important tool in modern data analysis.
no code implementations • 11 Feb 2016 • Hien D. Nguyen, Luke R Lloyd-Jones, Geoffrey J. McLachlan
The mixture of experts (MoE) model is a popular neural network architecture for nonlinear regression and classification.
no code implementations • 11 Nov 2014 • Sharon X. Lee, Geoffrey J. McLachlan, Saumyadipta Pyne
We consider the use of the Joint Clustering and Matching (JCM) procedure for the supervised classification of a flow cytometric sample with respect to a number of predefined classes of such samples.
no code implementations • 31 May 2013 • Saumyadipta Pyne, Kui Wang, Jonathan Irish, Pablo Tamayo, Marc-Danie Nazaire, Tarn Duong, Sharon Lee, Shu-Kay Ng, David Hafler, Ronald Levy, Garry Nolan, Jill Mesirov, Geoffrey J. McLachlan
Simultaneously, JCM fits a random-effects model to construct an overall batch template, which is used to register populations across samples and to classify new samples.