no code implementations • 30 Nov 2024 • Mahalakshmi Sabanayagam, Lukas Gosch, Stephan Günnemann, Debarghya Ghoshdastidar
Machine learning models are highly vulnerable to label flipping, i.e., the adversarial modification (poisoning) of training labels to compromise performance.
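As a minimal illustration of label flipping, the sketch below flips a budgeted fraction of binary training labels; a real attack would pick the indices adversarially, while here a random choice stands in for the attack strategy (the function name and budget parameter are illustrative, not from the paper):

```python
import numpy as np

def flip_labels(y, budget, rng):
    """Flip a fraction `budget` of binary labels 0 <-> 1.
    Random index choice is a stand-in for an adversarial selection."""
    y_poisoned = y.copy()
    n_flips = int(budget * len(y))
    idx = rng.choice(len(y), size=n_flips, replace=False)
    y_poisoned[idx] = 1 - y_poisoned[idx]
    return y_poisoned

rng = np.random.default_rng(0)
y = np.array([0, 1, 1, 0, 1, 0, 0, 1])
y_adv = flip_labels(y, budget=0.25, rng=rng)
print((y != y_adv).sum())  # exactly int(0.25 * 8) = 2 labels differ
```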
1 code implementation • 15 Jul 2024 • Lukas Gosch, Mahalakshmi Sabanayagam, Debarghya Ghoshdastidar, Stephan Günnemann
Generalization of machine learning models can be severely compromised by data poisoning, where adversarial changes are applied to the training data.
1 code implementation • 21 Jul 2023 • Anurag Singh, Mahalakshmi Sabanayagam, Krikamol Muandet, Debarghya Ghoshdastidar
Test-time defenses are used to improve the robustness of deep neural networks to adversarial examples during inference.
no code implementations • 5 Jul 2023 • Francesco Cagnetta, Deborah Oliveira, Mahalakshmi Sabanayagam, Nikolaos Tsilivis, Julia Kempe
Lecture notes from the course given by Professor Julia Kempe at the summer school "Statistical physics of Machine Learning" in Les Houches.
1 code implementation • 12 Jun 2023 • Mahalakshmi Sabanayagam, Freya Behrens, Urte Adomaityte, Anna Dawid
Based on this finding, we provide a new and straightforward approach to studying the complexity of a high-dimensional decision boundary; show that this connection naturally inspires a new generalization measure; and develop a novel margin estimation technique which, in combination with the generalization measure, precisely identifies minima with simple wide-margin boundaries.
no code implementations • 2 Dec 2022 • Pascal Mattia Esser, Satyaki Mukherjee, Mahalakshmi Sabanayagam, Debarghya Ghoshdastidar
The central question in representation learning is what constitutes a good or meaningful representation.
1 code implementation • 18 Oct 2022 • Mahalakshmi Sabanayagam, Pascal Esser, Debarghya Ghoshdastidar
The fundamental principle of Graph Neural Networks (GNNs) is to exploit the structural information of the data by aggregating the neighboring nodes using a 'graph convolution' in conjunction with a suitable choice for the network architecture, such as depth and activation functions.
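To make the aggregation step concrete, here is a minimal NumPy sketch of one graph-convolution layer on a toy path graph, using the common GCN-style symmetrically normalized adjacency with self-loops; the fixed weight matrix and graph are illustrative assumptions, not the paper's setup:

```python
import numpy as np

# Toy undirected path graph on 4 nodes: 0-1, 1-2, 2-3
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
X = np.eye(4)  # one-hot node features

# Normalized graph convolution operator: S = D^{-1/2} (A + I) D^{-1/2}
A_hat = A + np.eye(4)
d = A_hat.sum(axis=1)
S = A_hat / np.sqrt(np.outer(d, d))

# One layer: aggregate neighbors, apply (fixed, illustrative) weights and ReLU
W = np.ones((4, 2))
H = np.maximum(S @ X @ W, 0.0)  # shape (4, 2): new representation per node
```

Each row of `H` mixes a node's own features with those of its neighbors; stacking such layers (the "depth" choice mentioned above) propagates information over longer paths in the graph.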
no code implementations • 8 Oct 2021 • Mahalakshmi Sabanayagam, Pascal Esser, Debarghya Ghoshdastidar
This paper focuses on semi-supervised learning on graphs, and explains the above observations through the lens of Neural Tangent Kernels (NTKs).
1 code implementation • ICLR 2022 • Mahalakshmi Sabanayagam, Leena Chennuru Vankadara, Debarghya Ghoshdastidar
Using the proposed graph distance, we present two clustering algorithms and show that they achieve state-of-the-art results.