Search Results for author: Mahalakshmi Sabanayagam

Found 9 papers, 5 papers with code

Exact Certification of (Graph) Neural Networks Against Label Poisoning

no code implementations • 30 Nov 2024 • Mahalakshmi Sabanayagam, Lukas Gosch, Stephan Günnemann, Debarghya Ghoshdastidar

Machine learning models are highly vulnerable to label flipping, i.e., the adversarial modification (poisoning) of training labels to compromise performance.
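To make the threat model concrete, here is a minimal NumPy sketch of label flipping on a toy binary task. The flipped indices are chosen at random purely for illustration; the adversary considered in the paper would select them to maximize damage, and all names below are hypothetical.

```python
import numpy as np

def flip_labels(y, budget, seed=None):
    """Label-flipping poisoning: flip `budget` binary training labels.

    Indices are chosen at random here; a real attacker would pick
    the flips that most degrade the trained model.
    """
    rng = np.random.default_rng(seed)
    y_poisoned = y.copy()
    idx = rng.choice(len(y), size=budget, replace=False)
    y_poisoned[idx] = 1 - y_poisoned[idx]  # flip 0 <-> 1
    return y_poisoned

y = np.array([0, 1, 1, 0, 1, 0, 0, 1])
print(flip_labels(y, budget=2, seed=0))
```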

Provable Robustness of (Graph) Neural Networks Against Data Poisoning and Backdoor Attacks

1 code implementation • 15 Jul 2024 • Lukas Gosch, Mahalakshmi Sabanayagam, Debarghya Ghoshdastidar, Stephan Günnemann

Generalization of machine learning models can be severely compromised by data poisoning, where adversarial changes are applied to the training data.

Bilevel Optimization · Data Poisoning
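The "Bilevel Optimization" tag reflects the standard way data poisoning is formalized: the attacker perturbs the training set in the outer problem, subject to the model being trained on the poisoned data in the inner problem. A generic sketch of that formulation, with notation of my choosing rather than the paper's:

```latex
\max_{\delta \in \Delta} \;
\mathcal{L}_{\mathrm{atk}}\!\left(\theta^{*}(\delta)\right)
\quad \text{s.t.} \quad
\theta^{*}(\delta) \in \arg\min_{\theta} \;
\mathcal{L}_{\mathrm{train}}\!\left(\theta;\, \mathcal{D}(\delta)\right),
```

where \(\mathcal{D}(\delta)\) is the training set under perturbation \(\delta\) and \(\Delta\) encodes the attacker's budget (e.g., how many points may be modified).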

Robust Feature Inference: A Test-time Defense Strategy using Spectral Projections

1 code implementation • 21 Jul 2023 • Anurag Singh, Mahalakshmi Sabanayagam, Krikamol Muandet, Debarghya Ghoshdastidar

Test-time defenses are used to improve the robustness of deep neural networks to adversarial examples during inference.
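As a rough illustration of the general idea, and not necessarily the paper's exact procedure, one can project test-time features onto the top spectral directions estimated from clean training features, discarding the components that adversarial perturbations tend to exploit:

```python
import numpy as np

def fit_spectral_basis(train_feats, k):
    """Top-k eigenvectors of the centered feature covariance."""
    mu = train_feats.mean(axis=0)
    cov = np.cov(train_feats - mu, rowvar=False)
    _, eigvecs = np.linalg.eigh(cov)   # eigenvalues in ascending order
    return mu, eigvecs[:, -k:]         # keep the top-k directions

def project(feats, mu, basis):
    """Project (possibly adversarial) features onto the clean subspace."""
    return (feats - mu) @ basis @ basis.T + mu

# Toy usage: 100 clean feature vectors in R^16, keep 4 directions.
rng = np.random.default_rng(0)
train = rng.normal(size=(100, 16))
mu, basis = fit_spectral_basis(train, k=4)
test = rng.normal(size=(5, 16))
print(project(test, mu, basis).shape)  # (5, 16)
```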

Kernels, Data & Physics

no code implementations • 5 Jul 2023 • Francesco Cagnetta, Deborah Oliveira, Mahalakshmi Sabanayagam, Nikolaos Tsilivis, Julia Kempe

Lecture notes from the course given by Professor Julia Kempe at the summer school "Statistical Physics of Machine Learning" in Les Houches.

Adversarial Robustness · Inductive Bias

Unveiling the Hessian's Connection to the Decision Boundary

1 code implementation • 12 Jun 2023 • Mahalakshmi Sabanayagam, Freya Behrens, Urte Adomaityte, Anna Dawid

We show that the Hessian's top eigenvectors characterize the decision boundary learned by the neural network. Based on this finding, we provide a new and straightforward approach to studying the complexity of a high-dimensional decision boundary; show that this connection naturally inspires a new generalization measure; and finally, develop a novel margin estimation technique which, in combination with the generalization measure, precisely identifies minima with simple wide-margin boundaries.
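For intuition, here is a minimal PyTorch sketch that computes the spectrum of the training-loss Hessian for a tiny MLP. The model and data are toy placeholders, and the paper's outlier counting and margin estimation are not reproduced.

```python
import torch

# Toy setup: a tiny two-layer MLP on a synthetic binary task.
torch.manual_seed(0)
X = torch.randn(32, 4)
y = (X[:, 0] > 0).float()

params = [torch.randn(4, 8), torch.zeros(8), torch.randn(8, 1), torch.zeros(1)]
shapes = [p.shape for p in params]
sizes = [p.numel() for p in params]
flat = torch.cat([p.reshape(-1) for p in params])

def loss_fn(theta):
    # Unflatten the parameter vector and evaluate the training loss.
    W1, b1, W2, b2 = (c.reshape(s) for c, s in zip(torch.split(theta, sizes), shapes))
    logits = (torch.tanh(X @ W1 + b1) @ W2 + b2).squeeze(-1)
    return torch.nn.functional.binary_cross_entropy_with_logits(logits, y)

# Full Hessian of the loss w.r.t. all 49 parameters, then its spectrum.
H = torch.autograd.functional.hessian(loss_fn, flat)
eigvals = torch.linalg.eigvalsh(H)  # ascending order
print(eigvals[-5:])                 # largest eigenvalues (potential spectrum outliers)
```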

Analysis of Convolutions, Non-linearity and Depth in Graph Neural Networks using Neural Tangent Kernel

1 code implementation • 18 Oct 2022 • Mahalakshmi Sabanayagam, Pascal Esser, Debarghya Ghoshdastidar

The fundamental principle of Graph Neural Networks (GNNs) is to exploit the structural information of the data by aggregating neighboring nodes using a "graph convolution", in conjunction with suitable architectural choices such as depth and activation functions.

Node Classification · Stochastic Block Model
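A minimal NumPy sketch of one such graph-convolution layer, using the symmetrically normalized adjacency with self-loops as the aggregation operator; this is one common choice, and the specific operators analyzed in the paper may differ.

```python
import numpy as np

def sym_norm_adjacency(A):
    """S = D^{-1/2} (A + I) D^{-1/2}: normalized adjacency with self-loops."""
    A_hat = A + np.eye(A.shape[0])
    d_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(axis=1)))
    return d_inv_sqrt @ A_hat @ d_inv_sqrt

def graph_conv(S, H, W, act=np.tanh):
    """One layer: aggregate neighbors (S @ H), transform (@ W), activate."""
    return act(S @ H @ W)

# Toy graph: 4 nodes on a path, 3 input features, 2 output features.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
rng = np.random.default_rng(0)
H = rng.normal(size=(4, 3))
W = rng.normal(size=(3, 2))
print(graph_conv(sym_norm_adjacency(A), H, W).shape)  # (4, 2)
```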

New Insights into Graph Convolutional Networks using Neural Tangent Kernels

no code implementations • 8 Oct 2021 • Mahalakshmi Sabanayagam, Pascal Esser, Debarghya Ghoshdastidar

Although empirically successful, GCNs exhibit behaviour that lacks a rigorous explanation, for instance, performance degrading with increasing network depth. This paper focuses on semi-supervised learning on graphs, and explains these observations through the lens of Neural Tangent Kernels (NTKs).
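For reference, the empirical NTK underlying this kind of analysis is the inner product of parameter gradients of the network output f(x; θ):

```latex
\Theta(x, x') \;=\;
\left\langle \nabla_{\theta} f(x;\theta),\; \nabla_{\theta} f(x';\theta) \right\rangle ,
```

which, in the infinite-width limit, stays fixed during training and reduces learning to kernel regression with \(\Theta\).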
