Search Results for author: Geoffrey J. McLachlan

Found 10 papers, 0 papers with code

Functional Mixtures-of-Experts

no code implementations • 4 Feb 2022 • Faïcel Chamroukhi, Nhat Thien Pham, Van Hà Hoang, Geoffrey J. McLachlan

We extend modeling with mixtures-of-experts (ME), a framework of choice for modeling heterogeneity in data for prediction with vectorial observations, to the functional data analysis context.

Tasks: Clustering, Time Series, +1
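
As a rough illustration of the functional-data step described above, the sketch below reduces each observed curve to a vector of basis coefficients and then applies a standard vector-valued mixture model to those coefficients. The polynomial basis and the plain Gaussian mixture (standing in for a full mixtures-of-experts fit) are simplifying assumptions, not the authors' implementation.

```python
# Minimal sketch: functional data -> basis coefficients -> vector mixture model.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 50)                       # common evaluation grid

# Simulate two groups of noisy curves with different mean functions.
curves = np.vstack([
    np.sin(2 * np.pi * t) + 0.1 * rng.standard_normal((20, t.size)),
    np.cos(2 * np.pi * t) + 0.1 * rng.standard_normal((20, t.size)),
])

# Functional-to-vector step: degree-5 polynomial coefficients per curve.
coefs = np.array([np.polyfit(t, y, deg=5) for y in curves])

# A mixture model on the coefficient vectors recovers the two groups.
labels = GaussianMixture(n_components=2, random_state=0).fit_predict(coefs)
print(labels)
```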

Semi-Supervised Learning of Classifiers from a Statistical Perspective: A Brief Review

no code implementations • 8 Apr 2021 • Daniel Ahfock, Geoffrey J. McLachlan

There has been increasing attention in machine learning to semi-supervised learning (SSL) approaches for forming a classifier in situations where the training data consist of a limited number of classified observations but a much larger number of unclassified observations.
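
To make the setting concrete, here is a minimal EM sketch for the partially classified case: a handful of labelled points contribute hard class indicators, while the many unlabelled points contribute posterior responsibilities. The univariate two-class Gaussian model with unit variance is an illustrative assumption, not an example taken from the review.

```python
# Minimal EM for a two-class Gaussian model with labelled + unlabelled data.
import numpy as np

rng = np.random.default_rng(1)
x_lab = np.array([-2.1, -1.8, 2.0, 2.3]); y_lab = np.array([0, 0, 1, 1])
x_unl = np.concatenate([rng.normal(-2, 1, 200), rng.normal(2, 1, 200)])

mu, pi = np.array([-1.0, 1.0]), 0.5              # initial means, prior of class 0
for _ in range(50):
    # E-step: posterior class probabilities for the unlabelled data only.
    d0 = pi * np.exp(-0.5 * (x_unl - mu[0]) ** 2)
    d1 = (1 - pi) * np.exp(-0.5 * (x_unl - mu[1]) ** 2)
    r1 = d1 / (d0 + d1)                          # P(class 1 | x) under current fit
    # M-step: pool hard labels with soft responsibilities.
    w0 = np.concatenate([(y_lab == 0).astype(float), 1 - r1])
    w1 = np.concatenate([(y_lab == 1).astype(float), r1])
    x_all = np.concatenate([x_lab, x_unl])
    mu = np.array([np.sum(w0 * x_all) / w0.sum(), np.sum(w1 * x_all) / w1.sum()])
    pi = w0.sum() / (w0.sum() + w1.sum())
print(mu, pi)
```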

Harmless label noise and informative soft-labels in supervised classification

no code implementations • 7 Apr 2021 • Daniel Ahfock, Geoffrey J. McLachlan

In the framework of model-based classification, a simple but key observation is that when the manual labels are sampled using the posterior probabilities of class membership, the noisy labels are as valuable as the ground-truth labels in terms of statistical information.

Tasks: Classification, General Classification, +1
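
A small numerical check of that observation (our own illustration, not the paper's experiment): sample noisy labels from the true posterior of an equal-prior two-class Gaussian model and compare the classifier they produce with one trained on the ground-truth labels.

```python
# Labels sampled from the true posterior train a classifier about as well
# as the ground-truth labels do.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
n = 5000
y_true = rng.integers(0, 2, n)
x = rng.normal(2.0 * y_true - 1.0, 1.0, n).reshape(-1, 1)  # N(-1,1) vs N(+1,1)

# True posterior P(Y=1 | x) for this equal-prior Gaussian model is 1/(1+e^{-2x}).
post = 1.0 / (1.0 + np.exp(-2.0 * x.ravel()))
y_noisy = rng.binomial(1, post)          # "manual" labels sampled from the posterior

clf_true = LogisticRegression().fit(x, y_true)
clf_noisy = LogisticRegression().fit(x, y_noisy)
x_test = rng.normal(0, 2, 2000).reshape(-1, 1)
agree = np.mean(clf_true.predict(x_test) == clf_noisy.predict(x_test))
print(f"decision agreement on test points: {agree:.3f}")
```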

Non-asymptotic oracle inequalities for the Lasso in high-dimensional mixture of experts

no code implementations • 22 Sep 2020 • TrungTin Nguyen, Hien D. Nguyen, Faicel Chamroukhi, Geoffrey J. McLachlan

The mixture of experts (MoE) model provides a well-principled finite mixture construction for prediction, allowing the gating network (mixture weights) to learn from the predictors (explanatory variables) together with the experts' network (mixture component densities).

Tasks: Feature Selection, Model Selection, +1
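
The structure being penalised can be sketched as follows: an l1-penalised gating network over the predictors combined with l1-penalised linear experts. The crude k-means split standing in for proper EM responsibilities, and all parameter values, are assumptions made purely for illustration; this is not the estimator studied in the paper.

```python
# Rough two-expert sketch of the Lasso-penalised MoE structure.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import Lasso, LogisticRegression

rng = np.random.default_rng(3)
n, p = 400, 20                                   # many predictors, sparse truth
X = rng.standard_normal((n, p))
beta1 = np.zeros(p); beta1[:2] = 3.0             # expert 1 uses features 0, 1
beta2 = np.zeros(p); beta2[2:4] = -3.0           # expert 2 uses features 2, 3
z = (X[:, 0] > 0).astype(int)                    # latent expert assignment
y = np.where(z == 0, X @ beta1, X @ beta2) + 0.1 * rng.standard_normal(n)

# Stand-in for E-step responsibilities: a hard k-means split.
assign = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(
    np.column_stack([X[:, 0], y]))
# l1-penalised gating network and l1-penalised experts.
gate = LogisticRegression(penalty="l1", solver="liblinear", C=0.5).fit(X, assign)
experts = [Lasso(alpha=0.05).fit(X[assign == k], y[assign == k]) for k in (0, 1)]
print("gating nonzeros:", np.flatnonzero(gate.coef_))
print("expert nonzeros:", [np.flatnonzero(e.coef_) for e in experts])
```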

Estimation of Classification Rules from Partially Classified Data

no code implementations • 13 Apr 2020 • Geoffrey J. McLachlan, Daniel Ahfock

For class-conditional distributions taken to be known up to a vector of unknown parameters, the aim is to estimate the Bayes rule of allocation for assigning subsequent unclassified observations.

Tasks: Classification, General Classification, +1
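
For reference, the Bayes rule of allocation takes the following standard form (our notation; the paper's own symbols may differ): with class priors \(\pi_i\) and class-conditional densities \(f_i(x;\theta)\), an observation is assigned to the class of highest posterior probability.

```latex
\[
  R(x;\theta) \;=\; \arg\max_{i}\; \tau_i(x;\theta),
  \qquad
  \tau_i(x;\theta) \;=\; \frac{\pi_i\, f_i(x;\theta)}{\sum_{j} \pi_j\, f_j(x;\theta)} .
\]
```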

Deep Gaussian Mixture Models

no code implementations • 18 Nov 2017 • Cinzia Viroli, Geoffrey J. McLachlan

Deep learning is a hierarchical inference method formed by successive layers of learning that can describe complex relationships more efficiently.

Tasks: Dimensionality Reduction
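
A minimal generative sketch of the layered structure: at each layer a component is selected and applies an affine Gaussian map, so the observed variable is effectively a mixture of mixtures. Dimensions, weights, and parameter values below are arbitrary illustrative choices, not the paper's architecture or experiments.

```python
# Sample from a two-layer "deep" Gaussian mixture: mixtures composed layer-wise.
import numpy as np

rng = np.random.default_rng(4)

def deep_gmm_sample(n, layers):
    """layers: list (top of model last) of lists of (weight, A, b, noise_sd)."""
    z = rng.standard_normal((n, layers[-1][0][1].shape[1]))  # top-level latent
    for comps in reversed(layers):                           # deepest layer first
        w = np.array([c[0] for c in comps])
        pick = rng.choice(len(comps), size=n, p=w / w.sum())
        out = np.empty((n, comps[0][1].shape[0]))
        for k, (_, A, b, sd) in enumerate(comps):
            m = pick == k                                    # rows using component k
            out[m] = z[m] @ A.T + b + sd * rng.standard_normal((m.sum(), A.shape[0]))
        z = out
    return z

layer2 = [(0.5, np.eye(2), np.array([0.0, 0.0]), 0.1),
          (0.5, np.eye(2), np.array([3.0, 3.0]), 0.1)]
layer1 = [(0.5, np.eye(2), np.array([0.0, 5.0]), 0.1),
          (0.5, np.eye(2), np.array([5.0, 0.0]), 0.1)]
x = deep_gmm_sample(1000, [layer1, layer2])
print(x.shape)   # 2 layers x 2 components -> up to 4 effective modes
```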

A Universal Approximation Theorem for Mixture of Experts Models

no code implementations • 11 Feb 2016 • Hien D. Nguyen, Luke R. Lloyd-Jones, Geoffrey J. McLachlan

The mixture of experts (MoE) model is a popular neural network architecture for nonlinear regression and classification.

Tasks: General Classification, Regression
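
A tiny numerical illustration (ours, not the paper's proof) of the MoE mean function as an approximator: two linear experts blended by a softmax gate reproduce the nonlinear target |x| increasingly well as the gate sharpens.

```python
# MoE mean function with two linear experts approximating |x| on [-1, 1].
import numpy as np

x = np.linspace(-1, 1, 1001)
target = np.abs(x)

def moe_mean(x, sharpness):
    g = 1.0 / (1.0 + np.exp(-sharpness * x))   # softmax gate over two experts
    return g * x + (1 - g) * (-x)              # expert 1: x, expert 2: -x

for s in (5, 20, 100):
    print(s, np.max(np.abs(moe_mean(x, s) - target)))  # error shrinks with s
```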

Supervised Classification of Flow Cytometric Samples via the Joint Clustering and Matching (JCM) Procedure

no code implementations • 11 Nov 2014 • Sharon X. Lee, Geoffrey J. McLachlan, Saumyadipta Pyne

We consider the use of the Joint Clustering and Matching (JCM) procedure for the supervised classification of a flow cytometric sample with respect to a number of predefined classes of such samples.

Tasks: Clustering, General Classification

Joint Modeling and Registration of Cell Populations in Cohorts of High-Dimensional Flow Cytometric Data

no code implementations • 31 May 2013 • Saumyadipta Pyne, Kui Wang, Jonathan Irish, Pablo Tamayo, Marc-Danie Nazaire, Tarn Duong, Sharon Lee, Shu-Kay Ng, David Hafler, Ronald Levy, Garry Nolan, Jill Mesirov, Geoffrey J. McLachlan

Simultaneously, JCM fits a random-effects model to construct an overall batch template, which is used for registering populations across samples and for classifying new samples.

Tasks: Clustering
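
A heavily simplified sketch of the registration idea only: each sample's fitted mixture components are matched to the components of an overall template by minimum-cost assignment on their means. The actual JCM procedure uses skew mixture distributions and a random-effects formulation not reproduced here; the template means and the matching rule below are illustrative assumptions.

```python
# Register a sample's mixture components against a fixed template by
# minimum-cost matching of component means.
import numpy as np
from scipy.optimize import linear_sum_assignment
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(5)
template_means = np.array([[0.0, 0.0], [4.0, 4.0], [0.0, 6.0]])

def register(sample):
    gmm = GaussianMixture(n_components=3, random_state=0).fit(sample)
    # Cost matrix: distance between each fitted mean and each template mean.
    cost = np.linalg.norm(gmm.means_[:, None, :] - template_means[None, :, :], axis=2)
    rows, cols = linear_sum_assignment(cost)
    return dict(zip(rows, cols))   # fitted component -> template population

# A synthetic sample whose populations are slightly shifted from the template.
sample = np.vstack([rng.normal(m, 0.5, (100, 2)) for m in template_means + 0.3])
print(register(sample))
```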
