no code implementations • 13 Mar 2024 • Gautham Govind Anil, Pascal Esser, Debarghya Ghoshdastidar
We provide the first convergence results of the NTK for contrastive losses and present a nuanced picture: the NTK of wide networks remains almost constant for cosine-similarity-based contrastive losses, but not for losses based on dot-product similarity.
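To make the distinction concrete, here is a minimal NumPy sketch of an InfoNCE-style contrastive loss parameterized by the similarity function; the function names, the temperature `tau`, and the toy data are illustrative assumptions, not the paper's setup. The only difference between the two variants is whether the inner product is normalized.

```python
import numpy as np

def cosine_similarity(u, v, eps=1e-8):
    # Normalized inner product: invariant to the scale of u and v.
    return (u @ v) / (np.linalg.norm(u) * np.linalg.norm(v) + eps)

def dot_similarity(u, v):
    # Unnormalized inner product: grows with the representation norm.
    return u @ v

def infonce_loss(anchor, positive, negatives, sim, tau=0.5):
    # InfoNCE-style loss: pull the positive pair together, push negatives away.
    logits = np.array([sim(anchor, positive)]
                      + [sim(anchor, n) for n in negatives]) / tau
    logits -= logits.max()  # for numerical stability
    return -np.log(np.exp(logits[0]) / np.exp(logits).sum())

rng = np.random.default_rng(0)
a, p = rng.normal(size=8), rng.normal(size=8)
negs = rng.normal(size=(4, 8))
print(infonce_loss(a, p, negs, cosine_similarity))  # bounded logits
print(infonce_loss(a, p, negs, dot_similarity))     # scale-sensitive logits
```

Note the surface-level difference the two loss families inherit: the cosine variant's logits are bounded in [-1/tau, 1/tau], whereas the dot-product variant's logits scale with the representation norm.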
1 code implementation • 18 Oct 2022 • Mahalakshmi Sabanayagam, Pascal Esser, Debarghya Ghoshdastidar
The fundamental principle of Graph Neural Networks (GNNs) is to exploit the structural information of the data by aggregating neighboring nodes using a 'graph convolution', in conjunction with a suitable choice of network architecture, such as depth and activation function.
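A minimal sketch of one such graph-convolution layer, using GCN-style symmetric normalization with ReLU as one common instantiation; the helper name `graph_convolution` and the toy path graph are illustrative assumptions:

```python
import numpy as np

def graph_convolution(A, H, W):
    # One GNN layer: aggregate each node's neighborhood via the
    # symmetrically normalized adjacency, then transform and activate.
    A_hat = A + np.eye(A.shape[0])             # add self-loops
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))
    S = d_inv_sqrt[:, None] * A_hat * d_inv_sqrt[None, :]
    return np.maximum(S @ H @ W, 0.0)          # ReLU(S H W)

# Toy example: 4 nodes on a path, 3-dim features, 2-dim output.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
rng = np.random.default_rng(0)
H = rng.normal(size=(4, 3))   # node features
W = rng.normal(size=(3, 2))   # learnable weights
print(graph_convolution(A, H, W))
```

Stacking such layers (depth) and varying the activation are exactly the architectural choices the abstract refers to.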
no code implementations • 8 Oct 2021 • Mahalakshmi Sabanayagam, Pascal Esser, Debarghya Ghoshdastidar
This paper focuses on semi-supervised learning on graphs and explains the above observations through the lens of Neural Tangent Kernels (NTKs).
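To sketch the NTK viewpoint on semi-supervised node labeling, the snippet below computes the empirical NTK of a one-hidden-layer ReLU network at initialization (a simple stand-in for the graph-aware kernels studied in the paper) and uses kernel regression from a few labeled nodes to predict the rest; the function names, the width `m`, and the toy split are illustrative assumptions.

```python
import numpy as np

def empirical_ntk(X, m=5000, seed=0):
    # Empirical NTK at initialization of f(x) = a^T relu(W x) / sqrt(m):
    # K(x, x') = <grad_theta f(x), grad_theta f(x')>.
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(m, X.shape[1]))       # hidden weights
    a = rng.normal(size=m)                     # output weights
    Z = X @ W.T                                # pre-activations, shape (n, m)
    S, Sp = np.maximum(Z, 0.0), (Z > 0).astype(float)
    K_a = (S @ S.T) / m                        # gradients w.r.t. a
    K_W = (((Sp * a) @ (Sp * a).T) / m) * (X @ X.T)  # gradients w.r.t. W
    return K_a + K_W

def ntk_regression(K, labeled, y_labeled, reg=1e-6):
    # Fit kernel ridge regression on the labeled nodes; predict every node.
    K_ll = K[np.ix_(labeled, labeled)]
    alpha = np.linalg.solve(K_ll + reg * np.eye(len(labeled)), y_labeled)
    return K[:, labeled] @ alpha

rng = np.random.default_rng(1)
X = rng.normal(size=(10, 4))                   # node features
labeled = [0, 1, 2]                            # few labeled nodes
y = np.array([1.0, -1.0, 1.0])                 # their binary labels
print(ntk_regression(empirical_ntk(X), labeled, y))
```

In the infinite-width limit the empirical kernel concentrates around a deterministic NTK, which is what lets kernel regression of this form stand in for the trained network.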