no code implementations • 1 Feb 2023 • Hanyu Zhang, Marina Meila
We quantify the parameter stability of a spherical Gaussian Mixture Model (sGMM) under small perturbations in distribution space.
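A minimal sketch of the kind of stability being quantified, using scikit-learn's spherical Gaussian mixture as a stand-in (the perturbation, data, and all parameter values below are illustrative, not the paper's procedure):

```python
# Hypothetical sketch: how much do fitted sGMM parameters move when the
# sample is slightly perturbed? (Illustrative only; not the paper's method.)
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Two well-separated spherical clusters in R^2.
X = np.vstack([rng.normal(0.0, 1.0, size=(500, 2)),
               rng.normal(8.0, 1.0, size=(500, 2))])

def fit_means(data):
    gmm = GaussianMixture(n_components=2, covariance_type="spherical",
                          random_state=0).fit(data)
    # Sort components by first coordinate so the two fits are comparable.
    return gmm.means_[np.argsort(gmm.means_[:, 0])]

means = fit_means(X)
# Perturb every point slightly and refit.
means_pert = fit_means(X + rng.normal(0.0, 0.05, size=X.shape))
shift = np.linalg.norm(means - means_pert, axis=1).max()  # parameter movement
```

For well-separated clusters, the fitted means move by far less than the per-point noise scale, which is the qualitative behavior a stability bound formalizes.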
1 code implementation • 1 Feb 2023 • Hanyu Zhang, Samson Koelle, Marina Meila
We propose a paradigm for interpretable Manifold Learning for scientific data analysis, whereby we parametrize a manifold with $d$ smooth functions from a scientist-provided dictionary of meaningful, domain-related functions.
no code implementations • 26 Apr 2022 • Nikolaos Evangelou, Felix Dietrich, Eliodoro Chiavazzo, Daniel Lehmberg, Marina Meila, Ioannis G. Kevrekidis
A second round of Diffusion Maps on those latent coordinates allows the approximation of the reduced dynamical models.
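The standard Diffusion Maps construction referenced here can be sketched in a few lines of NumPy (textbook version with a fixed bandwidth `eps`; the paper's pipeline applies it twice and with more care):

```python
# Minimal Diffusion Maps sketch: Gaussian kernel -> row-normalized Markov
# matrix -> top nontrivial eigenvectors as coordinates.
import numpy as np

def diffusion_maps(X, eps, n_coords=2):
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)  # pairwise sq. distances
    K = np.exp(-d2 / eps)                                # Gaussian kernel
    P = K / K.sum(axis=1, keepdims=True)                 # Markov transition matrix
    vals, vecs = np.linalg.eig(P)
    order = np.argsort(-vals.real)
    idx = order[1:n_coords + 1]   # skip the trivial constant eigenvector
    return vals.real[idx] * vecs.real[:, idx]

rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0, 2 * np.pi, 200))
circle = np.c_[np.cos(t), np.sin(t)]                     # data on a circle
coords = diffusion_maps(circle, eps=0.1)
```

A "second round" in the paper's sense would feed `coords` back through `diffusion_maps` to obtain the reduced latent coordinates.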
no code implementations • ICML Workshop URL 2021 • James Buenfil, Samson J Koelle, Marina Meila
The biasing of dynamical simulations along collective variables uncovered by unsupervised learning has become a standard approach in the analysis of molecular systems.
no code implementations • NeurIPS 2015 • Yali Wan, Marina Meila
Finding communities in networks is a problem that remains difficult, despite the attention it has recently received.
no code implementations • 18 Jun 2020 • Marina Meila
Meila (2018) introduces an optimization-based method, the Sublevel Set method, to guarantee that a clustering is nearly optimal and "approximately correct" without relying on any assumptions about the distribution that generated the data.
no code implementations • 3 Dec 2018 • Yali Wan, Marina Meila
In this paper, we propose a perturbation framework to measure the robustness of graph properties.
no code implementations • NeurIPS 2018 • Marina Meila
We introduce the Sublevel Set (SS) method, a generic method to obtain sufficient guarantees of near-optimality and uniqueness (up to small perturbations) for a clustering.
2 code implementations • 29 Nov 2018 • Samson Koelle, Hanyu Zhang, Marina Meila, Yu-Chia Chen
Manifold embedding algorithms map high-dimensional data down to coordinates in a much lower-dimensional space.
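A concrete instance of such a mapping, using scikit-learn's Isomap on the classic swiss roll (any embedding algorithm could stand in here):

```python
# A manifold embedding algorithm mapping 3-D swiss-roll data down to 2
# coordinates; Isomap is just one representative choice.
from sklearn.datasets import make_swiss_roll
from sklearn.manifold import Isomap

X, _ = make_swiss_roll(n_samples=400, random_state=0)   # points in R^3
Y = Isomap(n_neighbors=10, n_components=2).fit_transform(X)
```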
no code implementations • NeurIPS 2016 • Yali Wan, Marina Meila
In this paper, we propose a framework in which we obtain "correctness" guarantees without assuming the data comes from a model.
no code implementations • NeurIPS 2016 • James McQueen, Marina Meila, Dominique Joncas
Many manifold learning algorithms aim to create embeddings with low or no distortion (i.e., isometric).
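One simple way to probe how close an embedding comes to isometry is to compare graph-geodesic distances in the input against Euclidean distances in the embedding (an exactly isometric embedding would give a ratio of 1 everywhere); a sketch with scikit-learn's Isomap:

```python
# Quantify embedding distortion: geodesic distances in the input vs.
# Euclidean distances in the embedding (isometric => ratio ~ 1).
import numpy as np
from sklearn.datasets import make_s_curve
from sklearn.manifold import Isomap

X, _ = make_s_curve(n_samples=300, random_state=0)
iso = Isomap(n_neighbors=10, n_components=2).fit(X)
geo = iso.dist_matrix_                       # graph-geodesic distances (n x n)
emb = np.sqrt(((iso.embedding_[:, None] - iso.embedding_[None, :]) ** 2).sum(-1))
mask = ~np.eye(300, dtype=bool)              # ignore the zero diagonal
distortion = np.abs(emb[mask] / geo[mask] - 1.0).mean()
```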
1 code implementation • 9 Mar 2016 • James McQueen, Marina Meila, Jacob VanderPlas, Zhongyue Zhang
Manifold Learning is a class of algorithms seeking a low-dimensional non-linear representation of high-dimensional data.
no code implementations • NeurIPS 2014 • Christopher Meek, Marina Meila
We develop a new exponential family probabilistic model for permutations that can capture hierarchical structure, and that has the well-known Mallows and generalized Mallows models as subclasses.
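The Mallows model mentioned here assigns each permutation $\pi$ probability proportional to $\exp(-\theta\, d(\pi, \pi_0))$ for a central permutation $\pi_0$ and Kendall-tau distance $d$; a brute-force sketch for small $n$ (all values illustrative):

```python
# Exact Mallows pmf by enumeration: P(pi) ∝ exp(-theta * d_kendall(pi, pi0)).
import itertools
import math

def kendall_tau(p, q):
    # Number of pairwise discordances between permutations p and q.
    n = len(p)
    pos = {v: i for i, v in enumerate(q)}
    r = [pos[v] for v in p]
    return sum(1 for i in range(n) for j in range(i + 1, n) if r[i] > r[j])

def mallows_pmf(n, theta, center):
    perms = list(itertools.permutations(range(n)))
    w = [math.exp(-theta * kendall_tau(p, center)) for p in perms]
    Z = sum(w)                                 # normalizing constant
    return {p: wi / Z for p, wi in zip(perms, w)}

pmf = mallows_pmf(4, theta=1.0, center=(0, 1, 2, 3))
mode = max(pmf, key=pmf.get)                   # most probable permutation
```

The generalized Mallows model replaces the single $\theta$ with one dispersion parameter per position; the paper's model adds hierarchical structure on top of both.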
no code implementations • 27 Nov 2014 • Zaeem Hussain, Marina Meila
Our motivation for seeking new ways to compare clusterings is that existing clustering indices are based on set cardinalities alone and ignore the positions of the data points.
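The limitation is easy to see in code: a label-based index such as the adjusted Rand index is computed entirely from the two label vectors, so the geometry of the points never enters (toy labels below are illustrative):

```python
# A cardinality-based index: the points' coordinates never appear in the
# computation, only the label vectors do.
from sklearn.metrics import adjusted_rand_score

labels_a = [0, 0, 1, 1, 2, 2]   # clustering 1 of six points
labels_b = [0, 0, 1, 2, 2, 2]   # clustering 2 of the same points
ari = adjusted_rand_score(labels_a, labels_b)
```

Moving the six points anywhere in space leaves `ari` unchanged, which is exactly the insensitivity the paper's position-aware comparison addresses.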
no code implementations • NeurIPS 2017 • Dominique Perrault-Joncas, Marina Meila
We address the problem of setting the kernel bandwidth used by Manifold Learning algorithms to construct the graph Laplacian.
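The object being tuned is the Gaussian-kernel graph Laplacian, whose behavior changes sharply with the bandwidth; a minimal construction (bandwidth values below are illustrative extremes, not the paper's selection rule):

```python
# Unnormalized graph Laplacian L = D - K built from a Gaussian kernel with
# bandwidth eps; too-small or too-large eps degrades the graph.
import numpy as np

def graph_laplacian(X, eps):
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)  # pairwise sq. distances
    K = np.exp(-d2 / eps)                                # similarity matrix
    D = K.sum(axis=1)                                    # node degrees
    return np.diag(D) - K

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 2))
L = graph_laplacian(X, eps=1.0)
L_small = graph_laplacian(X, eps=1e-4)   # too small: graph nearly disconnected
L_large = graph_laplacian(X, eps=1e4)    # too large: graph nearly complete
```

By construction the Laplacian is symmetric with zero row sums; bandwidth selection methods choose `eps` so that `L` best approximates the manifold's Laplace-Beltrami operator.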
no code implementations • 30 May 2014 • Dominique Perrault-Joncas, Marina Meila
This paper considers the problem of embedding directed graphs in Euclidean space while retaining directional information.
no code implementations • 6 Dec 2013 • Hoyt Koepke, Marina Meila
In Part 2, we describe the full regularization path of a class of penalized regression problems with dependent variables that includes the graph-guided LASSO and total variation constrained models.
no code implementations • 30 May 2013 • Dominique Perrault-Joncas, Marina Meila
In recent years, manifold learning has become increasingly popular as a tool for performing non-linear dimensionality reduction.
no code implementations • 30 Jan 2013 • Marina Meila, David Heckerman
In the first part of the paper, we perform an experimental comparison between three batch clustering algorithms: the Expectation-Maximization (EM) algorithm, a winner take all version of the EM algorithm reminiscent of the K-means algorithm, and model-based hierarchical agglomerative clustering.
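A hedged sketch of such a comparison using scikit-learn stand-ins: EM via `GaussianMixture`, a winner-take-all analogue via `KMeans`, and agglomerative clustering (the paper's own implementations, data, and evaluation differ):

```python
# Compare three batch clustering algorithms on synthetic data, scoring each
# against the ground-truth labels with the adjusted Rand index.
from sklearn.datasets import make_blobs
from sklearn.mixture import GaussianMixture
from sklearn.cluster import KMeans, AgglomerativeClustering
from sklearn.metrics import adjusted_rand_score

# Three well-separated clusters (centers chosen for illustration).
X, y = make_blobs(n_samples=300, centers=[[0, 0], [8, 8], [-8, 8]],
                  cluster_std=1.0, random_state=0)

em = GaussianMixture(n_components=3, random_state=0).fit_predict(X)   # EM
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)   # winner-take-all analogue
ac = AgglomerativeClustering(n_clusters=3).fit_predict(X)             # hierarchical

scores = {name: adjusted_rand_score(y, lab)
          for name, lab in [("EM", em), ("KMeans", km), ("Agglomerative", ac)]}
```

On cleanly separated data all three recover the planted clusters; the interesting comparisons in the paper arise when the clusters overlap.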
no code implementations • NeurIPS 2011 • Dominique C. Perrault-Joncas, Marina Meila
This paper considers the problem of embedding directed graphs in Euclidean space while retaining directional information.