Search Results for author: Soledad Villar

Found 32 papers, 16 papers with code

Approximately Equivariant Graph Networks

1 code implementation • NeurIPS 2023 • Ningyuan Huang, Ron Levie, Soledad Villar

However, these two symmetries are fundamentally different: The translation equivariance of CNNs corresponds to symmetries of the fixed domain acting on the image signals (sometimes known as active symmetries), whereas in GNNs any permutation acts on both the graph signals and the graph domain (sometimes described as passive symmetries).

Image Inpainting, Pose Estimation, +1
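The passive-symmetry property described in this entry is easy to verify numerically for any message-passing layer: a permutation acts on both the adjacency matrix (the domain) and the node features (the signal), and the output permutes accordingly. A minimal NumPy sketch with a hypothetical one-layer linear aggregator (an illustration, not the paper's model):

```python
import numpy as np

def gnn_layer(A, X, W):
    # One toy message-passing layer: aggregate neighbor features,
    # then apply a shared linear map and a pointwise nonlinearity.
    return np.tanh(A @ X @ W)

rng = np.random.default_rng(0)
n, d = 6, 3
A = rng.integers(0, 2, size=(n, n))
A = np.triu(A, 1); A = A + A.T            # random undirected graph
X = rng.standard_normal((n, d))           # node features (graph signal)
W = rng.standard_normal((d, d))

P = np.eye(n)[rng.permutation(n)]         # random permutation matrix
# Passive symmetry: the permutation acts on the domain (A) and the signal (X).
lhs = gnn_layer(P @ A @ P.T, P @ X, W)
rhs = P @ gnn_layer(A, X, W)
assert np.allclose(lhs, rhs)              # the output permutes consistently
```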

Structuring Representation Geometry with Rotationally Equivariant Contrastive Learning

1 code implementation • 24 Jun 2023 • Sharut Gupta, Joshua Robinson, Derek Lim, Soledad Villar, Stefanie Jegelka

Specifically, in the contrastive learning setting, we introduce an equivariance objective and theoretically prove that its minima force augmentations on the input space to correspond to rotations on the spherical embedding space.

Contrastive Learning, Self-Supervised Learning
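One natural way to write such an equivariance objective (a hedged sketch of the idea, not necessarily the paper's exact loss): given unit-norm embeddings of an input and of its augmentation, and a rotation R associated with the augmentation, penalize the gap between the augmented embedding and the rotated one:

```python
import numpy as np

def equivariance_loss(z, z_aug, R):
    # z, z_aug: (batch, d) embeddings of inputs and their augmentations.
    # R: (d, d) rotation associated with the augmentation (assumed given here).
    z = z / np.linalg.norm(z, axis=1, keepdims=True)          # project to sphere
    z_aug = z_aug / np.linalg.norm(z_aug, axis=1, keepdims=True)
    # Equivariance: the embedding of the augmented input should equal R z.
    return np.mean(np.sum((z_aug - z @ R.T) ** 2, axis=1))
```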

Fine-grained Expressivity of Graph Neural Networks

1 code implementation • NeurIPS 2023 • Jan Böker, Ron Levie, Ningyuan Huang, Soledad Villar, Christopher Morris

In particular, we characterize the expressive power of MPNNs in terms of the tree distance, which is a graph distance based on the concept of fractional isomorphisms, and substructure counts via tree homomorphisms, showing that these concepts have the same expressive power as the $1$-WL and MPNNs on graphons.

GeometricImageNet: Extending convolutional neural networks to vector and tensor images

1 code implementation • 21 May 2023 • Wilson Gregory, David W. Hogg, Ben Blum-Smith, Maria Teresa Arias, Kaze W. K. Wong, Soledad Villar

We use representation theory to quantify the dimension of the space of equivariant polynomial functions on 2-dimensional vector images.

Towards fully covariant machine learning

no code implementations • 31 Jan 2023 • Soledad Villar, David W. Hogg, Weichi Yao, George A. Kevrekidis, Bernhard Schölkopf

We discuss links to causal modeling, and argue that the implementation of passive symmetries is particularly valuable when the goal of the learning problem is to generalize out of sample.

Sketch-and-solve approaches to k-means clustering by semidefinite programming

1 code implementation • 28 Nov 2022 • Charles Clum, Dustin G. Mixon, Soledad Villar, Kaiying Xie

This lower bound is data-driven; it makes no assumptions about the data or how it is generated.

Clustering
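The sketch-and-solve pattern can be illustrated as follows: solve a semidefinite relaxation of k-means on a small random subsample, which is cheap and certifies a lower bound on the subsample's k-means value. A hedged sketch using CVXPY and a Peng–Wei-style relaxation (the paper's precise estimator and correction factors are more careful than this):

```python
import numpy as np
import cvxpy as cp

def kmeans_sdp_value(X, k):
    # Peng-Wei-style SDP: its optimal value lower-bounds the k-means objective.
    n = X.shape[0]
    D = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=2)  # squared distances
    Z = cp.Variable((n, n), PSD=True)
    constraints = [Z >= 0, cp.sum(Z, axis=1) == 1, cp.trace(Z) == k]
    prob = cp.Problem(cp.Minimize(0.5 * cp.trace(D @ Z)), constraints)
    prob.solve()
    return prob.value

rng = np.random.default_rng(0)
X = rng.standard_normal((2000, 2))
sketch = X[rng.choice(len(X), size=60, replace=False)]   # the "sketch" step
print(kmeans_sdp_value(sketch, k=3))   # cheap certified bound on the subsample
```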

A Spectral Analysis of Graph Neural Networks on Dense and Sparse Graphs

1 code implementation • 6 Nov 2022 • Luana Ruiz, Ningyuan Huang, Soledad Villar

In this work we propose a random graph model that can produce graphs at different levels of sparsity.

Community Detection, Node Classification
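A standard way to build such a model (a sketch of the general construction; the paper's model may differ in its details) is to sample from a graphon W and scale the edge probabilities by a sparsity factor alpha:

```python
import numpy as np

def sample_graph(n, W, alpha, seed=0):
    # Sample latent positions, then connect i~j with probability alpha * W(u_i, u_j).
    rng = np.random.default_rng(seed)
    u = rng.random(n)
    P = np.clip(alpha * W(u[:, None], u[None, :]), 0.0, 1.0)
    A = (rng.random((n, n)) < P).astype(int)
    A = np.triu(A, 1)
    return A + A.T                                  # symmetric, no self-loops

W = lambda x, y: 0.8 * np.exp(-np.abs(x - y))       # a toy graphon
A_dense = sample_graph(500, W, alpha=1.0)           # dense regime
A_sparse = sample_graph(500, W, alpha=np.log(500) / 500)   # sparse regime
```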

Deep Learning is Provably Robust to Symmetric Label Noise

no code implementations • 26 Oct 2022 • Carey E. Priebe, Ningyuan Huang, Soledad Villar, Cong Mu, Li Chen

We conjecture that for general label noise, mitigation strategies that make use of the noisy data will outperform those that ignore the noisy data.

Memorization
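Symmetric label noise means each label is replaced, with some fixed probability, by a class drawn uniformly from the remaining classes. A minimal sketch of the corruption process (an illustration of the noise model, not the paper's experimental code):

```python
import numpy as np

def add_symmetric_noise(y, num_classes, rate, seed=0):
    # With probability `rate`, replace a label by a uniformly random *other* class.
    rng = np.random.default_rng(seed)
    y = y.copy()
    flip = rng.random(len(y)) < rate
    offsets = rng.integers(1, num_classes, size=flip.sum())
    y[flip] = (y[flip] + offsets) % num_classes     # cyclic shift avoids the true class
    return y
```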

Shuffled linear regression through graduated convex relaxation

no code implementations • 30 Sep 2022 • Efe Onaran, Soledad Villar

The shuffled linear regression problem aims to recover linear relationships in datasets where the correspondence between input and output is unknown.

Regression
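Concretely, one observes X and a permuted response y = Pi X w + noise with the permutation Pi unknown. A naive alternating baseline (not the paper's graduated convex relaxation) re-estimates the permutation by optimal assignment given the current w, then refits w by least squares:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def alternating_baseline(X, y, iters=20, seed=0):
    rng = np.random.default_rng(seed)
    w = rng.standard_normal(X.shape[1])
    for _ in range(iters):
        pred = X @ w
        # Match each response y_i to a row of X by minimizing (y_i - pred_j)^2.
        cost = (y[:, None] - pred[None, :]) ** 2
        _, cols = linear_sum_assignment(cost)
        Xp = X[cols]                                # X re-ordered to align with y
        w = np.linalg.lstsq(Xp, y, rcond=None)[0]   # refit under that matching
    return w
```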

Machine learning and invariant theory

no code implementations • 29 Sep 2022 • Ben Blum-Smith, Soledad Villar

Inspired by constraints from physical law, equivariant machine learning restricts the learning to a hypothesis class where all the functions are equivariant with respect to some group action.

MarkerMap: nonlinear marker selection for single-cell studies

no code implementations • 28 Jul 2022 • Nabeel Sarwar, Wilson Gregory, George A. Kevrekidis, Soledad Villar, Bianca Dumitrascu

Single-cell RNA-seq data allow the quantification of cell type differences across a growing set of biological contexts.

Imputation, Vocal Bursts Type Prediction

Dimensionless machine learning: Imposing exact units equivariance

1 code implementation • 2 Apr 2022 • Soledad Villar, Weichi Yao, David W. Hogg, Ben Blum-Smith, Bianca Dumitrascu

Units equivariance (or units covariance) is the exact symmetry that follows from the requirement that relationships among measured quantities of physics relevance must obey self-consistent dimensional scalings.

BIG-bench Machine Learning, Symbolic Regression
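Units equivariance can be enforced by feeding a model only dimensionless products of its inputs: stack each quantity's unit exponents into a matrix and take its nullspace (the Buckingham pi construction). A simplified sketch with hypothetical quantities, not the paper's exact pipeline:

```python
import numpy as np

# Unit-exponent matrix: rows are (mass, length, time), columns are the
# quantities m [kg], v [m/s], r [m], E [kg m^2/s^2] (hypothetical example).
U = np.array([
    [1,  0, 0,  1],    # mass exponents
    [0,  1, 1,  2],    # length exponents
    [0, -1, 0, -2],    # time exponents
])

# Dimensionless products q1^a1 * ... * q4^a4 correspond to nullspace vectors a.
_, s, Vt = np.linalg.svd(U)
rank = int(np.sum(s > 1e-10))
null_basis = Vt[rank:]                 # basis of dimensionless exponent vectors

def dimensionless_features(q, basis):
    # q: positive measured quantities; one feature per exponent vector.
    return np.array([np.prod(q ** a) for a in basis])

q = np.array([2.0, 3.0, 1.5, 9.0])     # m, v, r, E in SI units
print(dimensionless_features(q, null_basis))   # here proportional to E/(m v^2)
```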

A Short Tutorial on The Weisfeiler-Lehman Test And Its Variants

no code implementations • 18 Jan 2022 • Ningyuan Huang, Soledad Villar

Graph neural networks are designed to learn functions on graphs.
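For reference, the 1-WL test the tutorial revolves around fits in a few lines: iteratively recolor each node by its current color together with the multiset of its neighbors' colors, then compare the color histograms of the two graphs. A minimal sketch:

```python
from collections import Counter

def wl_refine(adj, rounds):
    # adj: dict node -> iterable of neighbors; start from a uniform coloring.
    colors = {v: 0 for v in adj}
    for _ in range(rounds):
        sig = {v: (colors[v], tuple(sorted(colors[u] for u in adj[v])))
               for v in adj}
        palette = {s: i for i, s in enumerate(sorted(set(sig.values())))}
        colors = {v: palette[sig[v]] for v in adj}   # relabel ("hash") signatures
    return colors

def wl_indistinguishable(adj1, adj2, rounds=3):
    # Refine the disjoint union so colors are comparable across the two graphs.
    union = {('a', v): [('a', u) for u in nbrs] for v, nbrs in adj1.items()}
    union.update({('b', v): [('b', u) for u in nbrs] for v, nbrs in adj2.items()})
    colors = wl_refine(union, rounds)
    c1 = Counter(c for (g, _), c in colors.items() if g == 'a')
    c2 = Counter(c for (g, _), c in colors.items() if g == 'b')
    return c1 == c2

# Classic failure case: a 6-cycle and two disjoint triangles are 1-WL equivalent.
cycle6 = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}
two_triangles = {0: [1, 2], 1: [0, 2], 2: [0, 1],
                 3: [4, 5], 4: [3, 5], 5: [3, 4]}
print(wl_indistinguishable(cycle6, two_triangles))   # True
```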

Scalars are universal: Equivariant machine learning, structured like classical physics

2 code implementations • NeurIPS 2021 • Soledad Villar, David W. Hogg, Kate Storey-Fisher, Weichi Yao, Ben Blum-Smith

There has been enormous progress in the last few years in designing neural networks that respect the fundamental symmetries and coordinate freedoms of physical law.

BIG-bench Machine Learning, Translation
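The paper's central construction is concrete enough to sketch: an O(d)-equivariant vector-valued function of vector inputs can be written as a sum of the inputs weighted by functions of their pairwise inner products (the invariant scalars). A toy NumPy check with a hypothetical weight function:

```python
import numpy as np

def equivariant_map(V, weight_fn):
    # V: (n, d) array of input vectors. Invariant scalars = Gram matrix entries.
    gram = V @ V.T                          # <v_a, v_b> is unchanged by rotations
    coeffs = weight_fn(gram)                # one scalar weight per input vector
    return coeffs @ V                       # equivariant: h(V Q) = h(V) Q

# Hypothetical weight function of the scalars (any function of the Gram works).
weight_fn = lambda G: np.tanh(G).sum(axis=1)

rng = np.random.default_rng(0)
V = rng.standard_normal((4, 3))
Q, _ = np.linalg.qr(rng.standard_normal((3, 3)))    # a random orthogonal matrix

assert np.allclose(equivariant_map(V @ Q, weight_fn),
                   equivariant_map(V, weight_fn) @ Q)
```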

Fitting very flexible models: Linear regression with large numbers of parameters

no code implementations • 15 Jan 2021 • David W. Hogg, Soledad Villar

We emphasize that it is often valuable to choose far more parameters than data points, despite folk rules to the contrary: Suitably regularized models with enormous numbers of parameters generalize well and make good predictions for held-out data; over-fitting is not (mainly) a problem of having too many parameters.

Denoising, Model Selection, +2
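The claim is easy to probe with the minimum-norm least-squares fit (ridge regression in the limit of vanishing regularization): with p far larger than n it interpolates the noisy training data exactly, yet its held-out risk stays bounded rather than exploding the way it does near p = n (the next entry's sketch sweeps p to expose the full curve). A minimal sketch with hypothetical sizes:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 50, 2000                            # far more parameters than data points
X = rng.standard_normal((n, p))
w_true = rng.standard_normal(p) / np.sqrt(p)
y = X @ w_true + 0.1 * rng.standard_normal(n)

# Among all interpolating solutions of X w = y, take the minimum-norm one.
w_hat = np.linalg.pinv(X) @ y

X_new = rng.standard_normal((1000, p))
train_mse = np.mean((X @ w_hat - y) ** 2)                  # ~0: interpolation
test_mse = np.mean((X_new @ w_hat - X_new @ w_true) ** 2)  # bounded, no blow-up
print(train_mse, test_mse)
```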

Dimensionality reduction, regularization, and generalization in overparameterized regressions

1 code implementation • 23 Nov 2020 • Ningyuan Huang, David W. Hogg, Soledad Villar

This realization brought back the study of linear models for regression, including ordinary least squares (OLS), which, like deep learning, shows a "double-descent" behavior: (1) The risk (expected out-of-sample prediction error) can grow arbitrarily when the number of parameters $p$ approaches the number of samples $n$, and (2) the risk decreases with $p$ for $p>n$, sometimes achieving a lower value than the lowest risk for $p<n$.

Data Poisoning, Dimensionality Reduction, +1
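The double-descent shape described above can be reproduced in a few lines with the minimum-norm OLS estimator: the held-out risk blows up as p approaches n and comes back down for p > n. A minimal sketch with isotropic Gaussian features and hypothetical sizes:

```python
import numpy as np

rng = np.random.default_rng(1)
n, p_max, sigma = 40, 400, 0.5
w = rng.standard_normal(p_max) / np.sqrt(p_max)

def risk(p, trials=30):
    errs = []
    for _ in range(trials):
        X = rng.standard_normal((n, p_max))
        y = X @ w + sigma * rng.standard_normal(n)
        w_hat = np.linalg.pinv(X[:, :p]) @ y     # min-norm OLS on first p features
        X_te = rng.standard_normal((500, p_max))
        errs.append(np.mean((X_te[:, :p] @ w_hat - X_te @ w) ** 2))
    return np.mean(errs)

for p in [10, 20, 35, 40, 45, 80, 200, 400]:
    print(p, round(risk(p), 3))                  # peaks near p = n, then descends
```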

Can Graph Neural Networks Count Substructures?

1 code implementation • NeurIPS 2020 • Zhengdao Chen, Lei Chen, Soledad Villar, Joan Bruna

We also prove positive results for k-WL and k-IGNs as well as negative results for k-WL with a finite number of iterations.

Isomorphism Testing

Experimental performance of graph neural networks on random instances of max-cut

no code implementations • 15 Aug 2019 • Weichi Yao, Afonso S. Bandeira, Soledad Villar

In particular, we consider Graph Neural Networks (GNNs) -- a class of neural networks designed to learn functions on graphs -- and we apply them to the max-cut problem on random regular graphs.
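For context, the objects involved are easy to set up: a random d-regular graph and the cut value of a ±1 node assignment, which is the quantity a GNN-based heuristic is trained to maximize. A small sketch with networkx (an illustration of the problem instance, not the paper's GNN):

```python
import numpy as np
import networkx as nx

def cut_value(G, s):
    # s: dict node -> +1/-1. An edge is cut when its endpoints disagree.
    return sum(1 for u, v in G.edges if s[u] != s[v])

G = nx.random_regular_graph(d=3, n=100, seed=0)
rng = np.random.default_rng(0)
s = {v: rng.choice([-1, 1]) for v in G.nodes}
print(cut_value(G, s), "of", G.number_of_edges())   # a random cut ~ half the edges
```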

On the equivalence between graph isomorphism testing and function approximation with GNNs

1 code implementation • NeurIPS 2019 • Zhengdao Chen, Soledad Villar, Lei Chen, Joan Bruna

We further develop a framework of the expressive power of GNNs that incorporates both of these viewpoints using the language of sigma-algebra, through which we compare the expressive power of different types of GNNs together with other graph isomorphism tests.

Graph Regression, Isomorphism Testing

SqueezeFit: Label-aware dimensionality reduction by semidefinite programming

1 code implementation • 6 Dec 2018 • Culver McWhirter, Dustin G. Mixon, Soledad Villar

Given labeled points in a high-dimensional vector space, we seek a low-dimensional subspace such that projecting onto this subspace maintains some prescribed distance between points of differing labels.

Classification, Dimensionality Reduction, +1
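One way to relax such a problem to an SDP (a hedged sketch in the spirit of the paper; the paper's exact program differs in its details): search over a matrix 0 ⪯ M ⪯ I whose trace proxies the subspace dimension, while requiring differing-label pairs to stay separated under the metric induced by M:

```python
import numpy as np
import cvxpy as cp

def squeezefit_style_sdp(X, y, delta=1.0):
    n, d = X.shape
    M = cp.Variable((d, d), PSD=True)
    constraints = [M << np.eye(d)]         # 0 <= M <= I (relaxed projection)
    for i in range(n):
        for j in range(i + 1, n):
            if y[i] != y[j]:
                diff = X[i] - X[j]
                # Differing labels must stay at least delta apart under M.
                constraints.append(diff @ M @ diff >= delta)
    prob = cp.Problem(cp.Minimize(cp.trace(M)), constraints)
    prob.solve()
    return M.value              # top eigenvectors span the candidate subspace
```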

SUNLayer: Stable denoising with generative networks

no code implementations • 25 Mar 2018 • Dustin G. Mixon, Soledad Villar

It has been experimentally established that deep neural networks can be used to produce good generative models for real world data.

Image Denoising, Super-Resolution

Monte Carlo approximation certificates for k-means clustering

no code implementations • 3 Oct 2017 • Dustin G. Mixon, Soledad Villar

Efficient algorithms for $k$-means clustering frequently converge to suboptimal partitions, and given a partition, it is difficult to detect $k$-means optimality.

Clustering

Revised Note on Learning Algorithms for Quadratic Assignment with Graph Neural Networks

3 code implementations • 22 Jun 2017 • Alex Nowak, Soledad Villar, Afonso S. Bandeira, Joan Bruna

Inverse problems correspond to a certain type of optimization problem formulated over appropriate input distributions.

A polynomial-time relaxation of the Gromov-Hausdorff distance

no code implementations • 17 Oct 2016 • Soledad Villar, Afonso S. Bandeira, Andrew J. Blumberg, Rachel Ward

The Gromov-Hausdorff distance provides a metric on the set of isometry classes of compact metric spaces.
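For reference, the distance being relaxed can be written via correspondences (a standard equivalent definition):

```latex
d_{\mathrm{GH}}(X, Y) \;=\; \frac{1}{2}\,
  \inf_{R}\;
  \sup_{(x,y),\,(x',y') \in R}
  \bigl| d_X(x, x') - d_Y(y, y') \bigr|,
```

where the infimum runs over all correspondences R, i.e. relations R ⊆ X × Y whose projections onto X and onto Y are both surjective.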

Clustering subgaussian mixtures by semidefinite programming

no code implementations • 22 Feb 2016 • Dustin G. Mixon, Soledad Villar, Rachel Ward

We introduce a model-free relax-and-round algorithm for k-means clustering based on a semidefinite relaxation due to Peng and Wei.

Clustering
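The Peng–Wei relaxation being rounded here is, up to scaling conventions, the following SDP over the matrix of squared pairwise distances D:

```latex
\begin{aligned}
\min_{Z \in \mathbb{R}^{n \times n}} \quad & \tfrac{1}{2}\,\operatorname{Tr}(DZ) \\
\text{s.t.} \quad & Z \succeq 0, \qquad Z \geq 0 \ \text{entrywise}, \\
& Z \mathbf{1} = \mathbf{1}, \qquad \operatorname{Tr}(Z) = k .
\end{aligned}
```

Its optimal value lower-bounds the k-means objective, and the "round" step recovers a feasible clustering from the optimal Z.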

Probably certifiably correct k-means clustering

no code implementations • 26 Sep 2015 • Takayuki Iguchi, Dustin G. Mixon, Jesse Peterson, Soledad Villar

First, we prove that Peng and Wei's semidefinite relaxation of k-means is tight with high probability under a distribution of planted clusters called the stochastic ball model.

Clustering

On the tightness of an SDP relaxation of k-means

no code implementations • 18 May 2015 • Takayuki Iguchi, Dustin G. Mixon, Jesse Peterson, Soledad Villar

Recently, Awasthi et al. introduced an SDP relaxation of the $k$-means problem in $\mathbb R^m$.

Relax, no need to round: integrality of clustering formulations

no code implementations • 18 Aug 2014 • Pranjal Awasthi, Afonso S. Bandeira, Moses Charikar, Ravishankar Krishnaswamy, Soledad Villar, Rachel Ward

Under the same distributional model, the $k$-means LP relaxation fails to recover such clusters at separation as large as $\Delta = 4$.

Clustering
