no code implementations • 30 Dec 2024 • Mohamed Djilani, Salah Ghamizi, Maxime Cordy
Although adversarial robustness has been extensively studied in white-box settings, recent advances in black-box attacks (including transfer- and query-based approaches) are primarily benchmarked against weak defenses, leaving a significant gap in the evaluation of their effectiveness against more recent, moderately robust models (e.g., those featured in the RobustBench leaderboard).
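As a rough illustration of such an evaluation (a sketch, not the paper's exact protocol), the snippet below loads a robust CIFAR-10 model from the RobustBench model zoo and runs the query-based Square attack against it; the chosen model name, perturbation budget, and query limit are assumptions.

```python
# Sketch: evaluate a query-based black-box attack against a RobustBench model.
# The model name, eps, and query budget are illustrative assumptions.
import torchattacks
from robustbench.utils import load_model
from robustbench.data import load_cifar10

# Load a robust model from the RobustBench leaderboard (Linf threat model).
model = load_model(model_name="Carmon2019Unlabeled",
                   dataset="cifar10", threat_model="Linf").eval()

# Small evaluation batch.
x, y = load_cifar10(n_examples=64)

# Square attack: score-based, needs no gradients from the target model.
attack = torchattacks.Square(model, norm="Linf", eps=8 / 255, n_queries=5000)
x_adv = attack(x, y)

robust_acc = (model(x_adv).argmax(dim=1) == y).float().mean().item()
print(f"Robust accuracy under Square attack: {robust_acc:.2%}")
```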
1 code implementation • 14 Aug 2024 • Thibault Simonetto, Salah Ghamizi, Maxime Cordy
In addition to our open benchmark (https://github.com/serval-uni-lu/tabularbench), where we welcome submissions of new models and defenses, we implement 7 robustification mechanisms inspired by state-of-the-art defenses in computer vision and propose the largest benchmark of robust tabular deep learning to date, covering over 200 models across five critical scenarios in finance, healthcare, and security.
1 code implementation • 17 Jul 2024 • Salah Ghamizi, Aleksandar Bojchevski, Aoxiang Ma, Jun Cao
We provide our open-source repository at https://github.com/yamizi/SafePowerGraph, along with a comprehensive leaderboard and a dataset and model zoo, and expect our framework to standardize and advance research in the critical field of GNNs for power systems.
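To make the setting concrete, here is a minimal, generic sketch of the kind of node-level GNN such a framework evaluates (a per-bus regression with PyTorch Geometric); it is not SafePowerGraph's actual API, and the feature sizes, layer widths, and toy grid are assumptions.

```python
# Generic sketch of a GNN for power systems (not SafePowerGraph's API):
# a node-level regressor that predicts per-bus quantities, e.g. voltage
# magnitude and angle, from bus features on the grid graph.
import torch
from torch_geometric.nn import GCNConv
from torch_geometric.data import Data


class BusRegressionGNN(torch.nn.Module):
    def __init__(self, in_dim=4, hidden=64, out_dim=2):
        super().__init__()
        self.conv1 = GCNConv(in_dim, hidden)
        self.conv2 = GCNConv(hidden, hidden)
        self.head = torch.nn.Linear(hidden, out_dim)

    def forward(self, data):
        x = torch.relu(self.conv1(data.x, data.edge_index))
        x = torch.relu(self.conv2(x, data.edge_index))
        return self.head(x)  # one prediction per bus


# Toy 4-bus grid; edges are transmission lines (stored in both directions).
edge_index = torch.tensor([[0, 1, 1, 2, 2, 3],
                           [1, 0, 2, 1, 3, 2]])
x = torch.randn(4, 4)  # per-bus features (e.g. P, Q injections, limits)
pred = BusRegressionGNN()(Data(x=x, edge_index=edge_index))  # shape [4, 2]
```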
no code implementations • 20 Jun 2024 • Pantelis Dogoulis, Matthieu Jimenez, Salah Ghamizi, Maxime Cordy, Yves Le Traon
This paper analyzes the robustness of state-of-the-art AI-based models for power grid operations under the $N-1$ security criterion.
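For context, the N-1 criterion requires the grid to stay within operating limits after the outage of any single element. The sketch below screens single-line outages with pandapower on a small IEEE test case; it illustrates the criterion rather than the paper's experimental pipeline, and the 100% loading threshold is an assumption.

```python
# Rough sketch of an N-1 contingency screen with pandapower (illustrative,
# not the paper's pipeline): take each line out of service in turn, re-run
# the power flow, and flag overloaded lines.
import pandapower as pp
import pandapower.networks as pn

net = pn.case14()  # small IEEE test case as a stand-in for a real grid

violations = {}
for line_idx in net.line.index:
    net.line.at[line_idx, "in_service"] = False
    try:
        pp.runpp(net)
        overloaded = net.res_line[net.res_line.loading_percent > 100].index.tolist()
        if overloaded:
            violations[line_idx] = overloaded
    except pp.LoadflowNotConverged:
        violations[line_idx] = "power flow did not converge"
    finally:
        net.line.at[line_idx, "in_service"] = True

print(f"{len(violations)} single-line outages violate the N-1 criterion")
```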
1 code implementation • 2 Jun 2024 • Thibault Simonetto, Salah Ghamizi, Maxime Cordy
State-of-the-art deep learning models for tabular data have recently achieved performance acceptable for deployment in industrial settings.
no code implementations • 1 Mar 2024 • Salah Ghamizi, Jun Cao, Aoxiang Ma, Pedro Rodriguez
PowerFlowMultiNet outperforms traditional methods and other deep learning approaches in terms of accuracy and computational speed.
no code implementations • 8 Nov 2023 • Thibault Simonetto, Salah Ghamizi, Antoine Desjardins, Maxime Cordy, Yves Le Traon
State-of-the-art deep learning models for tabular data have recently achieved performance acceptable for deployment in industrial settings.
no code implementations • 11 Sep 2023 • Salah Ghamizi, Maxime Cordy, Yuejun Guo, Mike Papadakis, Yves Le Traon
To this end, we survey the related literature and identify 10 commonly adopted empirical evaluation hazards that may significantly impact experimental results.
no code implementations • 24 May 2023 • Salijona Dyrmishi, Salah Ghamizi, Maxime Cordy
Natural Language Processing (NLP) models based on Machine Learning (ML) are susceptible to adversarial attacks -- malicious algorithms that imperceptibly modify input text to force models into making incorrect predictions.
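As a toy illustration of such an attack (the classifier and synonym table below are hypothetical stand-ins; real attacks use learned substitution candidates under semantic constraints), a word-substitution search might look like this:

```python
# Toy word-substitution attack on a text classifier. `classify` and the
# synonym table are hypothetical stand-ins for a real model and a real
# candidate-generation step.
from itertools import product

SYNONYMS = {"great": ["fine", "decent"], "terrible": ["poor", "weak"]}

def classify(text):
    """Hypothetical sentiment classifier standing in for an ML model."""
    return "positive" if "great" in text or "fine" in text else "negative"

def attack(sentence):
    """Search word substitutions that flip the model's prediction."""
    original = classify(sentence)
    words = sentence.split()
    candidates = [[w] + SYNONYMS.get(w, []) for w in words]
    for combo in product(*candidates):
        perturbed = " ".join(combo)
        if perturbed != sentence and classify(perturbed) != original:
            return perturbed
    return None

print(attack("the movie was great"))  # -> "the movie was decent"
```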
1 code implementation • 6 Feb 2023 • Salah Ghamizi, Jingfeng Zhang, Maxime Cordy, Mike Papadakis, Masashi Sugiyama, Yves Le Traon
While leveraging additional training data is a well-established way to improve adversarial robustness, it incurs the unavoidable cost of data collection and the heavy computational cost of training models.
no code implementations • 15 Dec 2022 • Salah Ghamizi, Maxime Cordy, Michail Papadakis, Yves Le Traon
Vulnerability to adversarial attacks is a well-known weakness of Deep Neural Networks.
1 code implementation • 7 Feb 2022 • Salijona Dyrmishi, Salah Ghamizi, Thibault Simonetto, Yves Le Traon, Maxime Cordy
While the literature on security attacks and defense of Machine Learning (ML) systems mostly focuses on unrealistic adversarial examples, recent research has raised concern about the under-explored field of realistic adversarial attacks and their implications for the robustness of real-world systems.
1 code implementation • 2 Dec 2021 • Thibault Simonetto, Salijona Dyrmishi, Salah Ghamizi, Maxime Cordy, Yves Le Traon
We propose a unified framework to generate feasible adversarial examples that satisfy given domain constraints.
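A minimal, generic sketch of what "satisfying domain constraints" can mean in practice is shown below; the feature indices, bounds, and values are hypothetical, and the framework itself handles much richer constraints such as relationships between features.

```python
# Generic sketch of keeping adversarial tabular examples feasible: after a
# perturbation step, immutable features are restored and bounded features are
# clipped to their valid ranges. Feature indices and bounds are hypothetical.
import numpy as np

IMMUTABLE = [0]                     # e.g., feature 0 = age cannot be changed
BOUNDS = {1: (0.0, 1.0),            # e.g., feature 1 = ratio in [0, 1]
          2: (0.0, np.inf)}         # e.g., feature 2 = amount must be >= 0

def project_to_constraints(x_orig, x_adv):
    """Project a perturbed sample back onto the feasible domain."""
    x_proj = x_adv.copy()
    x_proj[IMMUTABLE] = x_orig[IMMUTABLE]           # restore immutable features
    for idx, (lo, hi) in BOUNDS.items():
        x_proj[idx] = np.clip(x_proj[idx], lo, hi)  # enforce value ranges
    return x_proj

x_orig = np.array([35.0, 0.4, 1200.0])
x_adv = x_orig + np.array([2.0, 0.9, -1500.0])      # unconstrained perturbation
print(project_to_constraints(x_orig, x_adv))        # [35., 1., 0.]
```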
1 code implementation • 26 Oct 2021 • Salah Ghamizi, Maxime Cordy, Mike Papadakis, Yves Le Traon
Vulnerability to adversarial attacks is a well-known weakness of Deep Neural Networks.
no code implementations • 14 Nov 2019 • Salah Ghamizi, Maxime Cordy, Mike Papadakis, Yves Le Traon
The key idea of our method is to use deep neural networks for image classification and adversarial attacks to embed secret information within images.
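The core idea can be illustrated with a targeted attack that steers the classifier's prediction toward a class index encoding part of the secret message. The PGD loop below is a generic, hedged sketch; the hyperparameters and bit-encoding scheme are assumptions rather than the paper's exact method.

```python
# Illustrative sketch of "adversarial embedding": hide a few bits of a secret
# in an image by perturbing it so a classifier predicts a chosen target class
# whose index encodes those bits. Generic targeted PGD; hyperparameters and
# the bit-encoding scheme are assumptions, not the paper's exact method.
import torch
import torch.nn.functional as F

def embed_bits(model, image, bits, eps=8 / 255, steps=40, alpha=2 / 255):
    """Encode `bits` (e.g., '0101') as a target class index via targeted PGD."""
    target = torch.tensor([int(bits, 2)])             # class index = message chunk
    x_adv = image.clone()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), target)  # loss toward target class
        grad = torch.autograd.grad(loss, x_adv)[0]
        with torch.no_grad():
            x_adv = x_adv - alpha * grad.sign()       # targeted descent step
            x_adv = image + (x_adv - image).clamp(-eps, eps)
            x_adv = x_adv.clamp(0, 1)
    return x_adv

def extract_bits(model, x_adv, n_bits=4):
    """Decode the message chunk from the classifier's prediction."""
    pred = model(x_adv).argmax(dim=1).item()
    return format(pred, f"0{n_bits}b")
```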
1 code implementation • 9 Apr 2019 • Salah Ghamizi, Maxime Cordy, Mike Papadakis, Yves Le Traon
First, we model the variability of DNN architectures with a Feature Model (FM) that generalizes over existing architectures.
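As a rough sketch of the idea (the feature names and the mapping below are hypothetical, not the paper's actual feature model), a configuration drawn from such an FM can be turned into a concrete network:

```python
# Rough sketch of deriving a concrete DNN from a feature-model configuration:
# each selected feature toggles or parameterizes an architectural element.
# The feature names and the mapping are hypothetical, not the paper's exact FM.
import torch.nn as nn

def build_from_configuration(config):
    """Assemble a small CNN from a dict of selected feature-model features."""
    layers = []
    in_ch = 3
    for out_ch in config["conv_channels"]:            # variability: depth & width
        layers.append(nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1))
        layers.append(nn.BatchNorm2d(out_ch) if config["batch_norm"] else nn.Identity())
        layers.append(nn.ReLU() if config["activation"] == "relu" else nn.Tanh())
        if config["pooling"]:                          # optional feature
            layers.append(nn.MaxPool2d(2))
        in_ch = out_ch
    layers += [nn.AdaptiveAvgPool2d(1), nn.Flatten(),
               nn.Linear(in_ch, config["num_classes"])]
    return nn.Sequential(*layers)

model = build_from_configuration({
    "conv_channels": [32, 64],   # two conv blocks
    "batch_norm": True,
    "activation": "relu",
    "pooling": True,
    "num_classes": 10,
})
```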