1 code implementation • NeurIPS 2023 • Ningyuan Huang, Ron Levie, Soledad Villar
However, these two symmetries are fundamentally different: The translation equivariance of CNNs corresponds to symmetries of the fixed domain acting on the image signals (sometimes known as active symmetries), whereas in GNNs any permutation acts on both the graph signals and the graph domain (sometimes described as passive symmetries).
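A minimal sketch of this passive symmetry for a single message-passing layer (the layer, activation, and weights here are illustrative placeholders, not the paper's architecture): permuting both the graph domain (the adjacency matrix) and the graph signal (the node features) permutes the output in the same way.

```python
import numpy as np

def gnn_layer(A, X, W):
    """One toy message-passing layer: aggregate neighbor features, then a linear map.
    A: (n, n) adjacency matrix, X: (n, d) node features, W: (d, d) weights."""
    return np.tanh(A @ X @ W)

rng = np.random.default_rng(0)
n, d = 5, 3
A = rng.integers(0, 2, size=(n, n))
A = np.triu(A, 1); A = A + A.T            # symmetric adjacency, no self-loops
X = rng.normal(size=(n, d))
W = rng.normal(size=(d, d))

P = np.eye(n)[rng.permutation(n)]         # random permutation matrix

# Passive symmetry: the permutation acts on both the domain (A) and the signal (X),
# and the output is permuted correspondingly.
out = gnn_layer(A, X, W)
out_perm = gnn_layer(P @ A @ P.T, P @ X, W)
assert np.allclose(P @ out, out_perm)
```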
1 code implementation • NeurIPS 2023 • Jan Böker, Ron Levie, Ningyuan Huang, Soledad Villar, Christopher Morris
In particular, we characterize the expressive power of MPNNs in terms of the tree distance, which is a graph distance based on the concept of fractional isomorphisms, and substructure counts via tree homomorphisms, showing that these concepts have the same expressive power as the $1$-WL and MPNNs on graphons.
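For reference, a small sketch of the $1$-WL test (color refinement) that MPNNs are compared against; the graph pair at the end is a standard example that $1$-WL, and hence MPNNs, cannot distinguish.

```python
import networkx as nx

def wl_colors(G, num_iters=3):
    """1-WL (color refinement): iteratively refine each node's color using the
    multiset of its neighbors' colors, then return the graph-level color histogram."""
    colors = {v: 0 for v in G.nodes}                 # start from a uniform coloring
    for _ in range(num_iters):
        signatures = {
            v: (colors[v], tuple(sorted(colors[u] for u in G.neighbors(v))))
            for v in G.nodes
        }
        # relabel signatures to small integers for the next round
        palette = {sig: i for i, sig in enumerate(sorted(set(signatures.values())))}
        colors = {v: palette[signatures[v]] for v in G.nodes}
    return sorted(colors.values())

# A 6-cycle and two disjoint triangles receive identical 1-WL color histograms,
# so no MPNN can tell them apart either.
G1 = nx.cycle_graph(6)
G2 = nx.disjoint_union(nx.cycle_graph(3), nx.cycle_graph(3))
print(wl_colors(G1) == wl_colors(G2))  # True
```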
1 code implementation • 6 Nov 2022 • Luana Ruiz, Ningyuan Huang, Soledad Villar
In this work we propose a random graph model that can produce graphs at different levels of sparsity.
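As an illustration only (the paper's actual construction may differ), a generic way to produce graphs at varying sparsity is to scale a graphon's edge probabilities by a sparsity factor $\rho$:

```python
import numpy as np

def sample_sparse_graphon_graph(n, graphon, rho, rng=None):
    """Sample an n-node graph where edge (i, j) appears independently with
    probability rho * graphon(u_i, u_j), u_i ~ Uniform[0, 1].
    The factor rho controls the expected edge density."""
    rng = rng or np.random.default_rng()
    u = rng.uniform(size=n)
    probs = rho * graphon(u[:, None], u[None, :])
    upper = rng.uniform(size=(n, n)) < probs
    A = np.triu(upper, 1)
    return (A | A.T).astype(int)

# Example: the graphon W(x, y) = x * y sampled at two sparsity levels.
W = lambda x, y: x * y
dense = sample_sparse_graphon_graph(500, W, rho=1.0)
sparse = sample_sparse_graphon_graph(500, W, rho=0.1)
print(dense.sum() // 2, sparse.sum() // 2)  # roughly a 10x difference in edge count
```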
no code implementations • 26 Oct 2022 • Carey E. Priebe, Ningyuan Huang, Soledad Villar, Cong Mu, Li Chen
We conjecture that for general label noise, mitigation strategies that make use of the noisy data will outperform those that ignore the noisy data.
1 code implementation • 24 Sep 2022 • Ningyuan Huang, Soledad Villar, Carey E. Priebe, Da Zheng, Chengyue Huang, Lin Yang, Vladimir Braverman
Graph Neural Networks (GNNs) are powerful deep learning methods for non-Euclidean data.
1 code implementation • 27 Jun 2022 • Ningyuan Huang, Yash R. Deshpande, Yibo Liu, Houda Alberts, Kyunghyun Cho, Clara Vania, Iacer Calixto
We use the recently released VisualSem KG as our external knowledge repository, which covers a subset of Wikipedia and WordNet entities, and compare a mix of tuple-based and graph-based algorithms to learn entity and relation representations that are grounded in the KG's multimodal information.
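For concreteness, a minimal sketch of the tuple-based family mentioned above, using a TransE-style translational score; the vectors here are random placeholders, not the learned multimodal representations from the paper.

```python
import numpy as np

def transe_score(head, relation, tail):
    """Tuple-based (TransE-style) scoring: a triple (h, r, t) is plausible
    when the translated head embedding h + r lies close to the tail embedding t."""
    return -np.linalg.norm(head + relation - tail)

rng = np.random.default_rng(0)
dim = 64
# Placeholder entity/relation vectors; in practice these are trained on KG triples
# and can be grounded in multimodal features such as images or glosses.
h, r, t = rng.normal(size=(3, dim))
print(transe_score(h, r, t))
```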
Tasks: Multilingual Named Entity Recognition, Named Entity Recognition, +2 more
no code implementations • 28 May 2022 • Li Chen, Ningyuan Huang, Cong Mu, Hayden S. Helm, Kate Lytvynets, Weiwei Yang, Carey E. Priebe
Our hierarchical approach improves upon regular deep neural networks in learning with label noise.
no code implementations • 18 Jan 2022 • Ningyuan Huang, Soledad Villar
Graph neural networks are designed to learn functions on graphs.
1 code implementation • 23 Nov 2020 • Ningyuan Huang, David W. Hogg, Soledad Villar
This realization brought back the study of linear models for regression, including ordinary least squares (OLS), which, like deep learning, shows a "double-descent" behavior: (1) The risk (expected out-of-sample prediction error) can grow arbitrarily when the number of parameters $p$ approaches the number of samples $n$, and (2) the risk decreases with $p$ for $p>n$, sometimes achieving a lower value than the lowest risk for $p<n$.
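A small simulation along these lines (the sample sizes and noise level below are arbitrary choices, not the paper's setup) reproduces the double-descent shape, using the pseudoinverse so that the estimator is ordinary least squares for $p \le n$ and the minimum-norm interpolator for $p > n$.

```python
import numpy as np

def min_norm_ols_risk(p, n=40, n_test=2000, d_total=200, sigma=0.5, trials=50, rng=None):
    """Estimate out-of-sample risk of minimum-norm least squares fit on the first
    p of d_total features; np.linalg.pinv gives OLS for p <= n and the
    minimum-norm interpolating solution for p > n."""
    rng = rng or np.random.default_rng(0)
    beta = rng.normal(size=d_total) / np.sqrt(d_total)   # true coefficients
    risks = []
    for _ in range(trials):
        X = rng.normal(size=(n, d_total))
        y = X @ beta + sigma * rng.normal(size=n)
        X_test = rng.normal(size=(n_test, d_total))
        y_test = X_test @ beta + sigma * rng.normal(size=n_test)
        beta_hat = np.linalg.pinv(X[:, :p]) @ y          # fit using only p features
        risks.append(np.mean((X_test[:, :p] @ beta_hat - y_test) ** 2))
    return np.mean(risks)

# The estimated risk typically peaks near p = n (the interpolation threshold)
# and decreases again for p > n.
for p in [10, 30, 40, 60, 120, 200]:
    print(p, round(min_norm_ols_risk(p), 3))
```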
no code implementations • 25 Oct 2020 • Carey E. Priebe, Cencheng Shen, Ningyuan Huang, Tianyi Chen
Neural networks have achieved remarkable successes in machine learning tasks.