Search Results for author: Alexander Bastounis

Found 5 papers, 0 papers with code

When can you trust feature selection? -- I: A condition-based analysis of LASSO and generalised hardness of approximation

no code implementations • 18 Dec 2023 • Alexander Bastounis, Felipe Cucker, Anders C. Hansen

We define a LASSO condition number and design an efficient algorithm for computing LASSO support sets: provided the input data is well-posed (has a finite condition number), the algorithm runs in time polynomial in the dimensions and in the logarithm of the condition number.

Feature Selection
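
As a purely illustrative sketch of LASSO-based support selection (using scikit-learn's generic coordinate-descent solver, not the condition-number-based algorithm the paper describes; the problem sizes and the regularisation weight alpha below are our own choices):

    import numpy as np
    from sklearn.linear_model import Lasso

    rng = np.random.default_rng(0)
    n, p, k = 100, 50, 5                   # samples, features, true support size
    X = rng.standard_normal((n, p))
    beta = np.zeros(p)
    beta[:k] = 1.0                         # ground-truth support: first k features
    y = X @ beta + 0.01 * rng.standard_normal(n)

    model = Lasso(alpha=0.1).fit(X, y)     # coordinate-descent LASSO
    support = np.flatnonzero(model.coef_)  # estimated support set
    print("estimated support:", support)

On well-conditioned inputs like this one the estimated support matches the ground truth; the paper's analysis concerns precisely when such recovery can be trusted.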

The Boundaries of Verifiable Accuracy, Robustness, and Generalisation in Deep Learning

no code implementations • 13 Sep 2023 • Alexander Bastounis, Alexander N. Gorban, Anders C. Hansen, Desmond J. Higham, Danil Prokhorov, Oliver Sutton, Ivan Y. Tyukin, Qinghua Zhou

We consider the classical distribution-agnostic framework and algorithms that minimise empirical risk, potentially subject to weight regularisation.
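
The setting, empirical risk minimisation with weight regularisation, can be sketched in a few lines; the toy problem, learning rate and L2 penalty below are our own illustrative choices, not constructions from the paper:

    import numpy as np

    rng = np.random.default_rng(1)
    X = rng.standard_normal((200, 10))     # toy design matrix
    y = (X[:, 0] > 0).astype(float)        # toy binary labels

    w = np.zeros(10)
    lam, lr = 0.01, 0.1                    # L2 penalty strength, step size
    for _ in range(500):                   # gradient descent on the regularised empirical risk
        p = 1.0 / (1.0 + np.exp(-X @ w))   # sigmoid predictions
        grad = X.T @ (p - y) / len(y) + lam * w   # logistic loss gradient + L2 term
        w -= lr * grad
    print("trained weights:", np.round(w, 3))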

How adversarial attacks can disrupt seemingly stable accurate classifiers

no code implementations • 7 Sep 2023 • Oliver J. Sutton, Qinghua Zhou, Ivan Y. Tyukin, Alexander N. Gorban, Alexander Bastounis, Desmond J. Higham

We introduce a simple, generic and generalisable framework in which key behaviours observed in practical systems arise with high probability: notably, the simultaneous susceptibility of an (otherwise accurate) model to easily constructed adversarial attacks and its robustness to random perturbations of the input data.

Image Classification
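
The headline phenomenon, susceptibility to targeted perturbations alongside robustness to random ones, already appears for a plain linear classifier in high dimension. The toy demonstration below is our own sketch, not the paper's framework:

    import numpy as np

    rng = np.random.default_rng(2)
    d = 1000
    w = rng.standard_normal(d)
    w /= np.linalg.norm(w)                 # weight vector of a linear classifier

    x = rng.standard_normal(d)
    x /= np.linalg.norm(x)
    x += (0.05 - w @ x) * w                # enforce a small positive margin w.x = 0.05

    eps = 0.06                             # perturbation budget, just above the margin
    adv = x - eps * w                      # targeted attack along the weight direction
    print("clean sign:", np.sign(w @ x), "adversarial sign:", np.sign(w @ adv))

    flips = 0
    for _ in range(1000):                  # random perturbations of the same norm
        u = rng.standard_normal(d)
        u *= eps / np.linalg.norm(u)
        flips += (w @ (x + u)) < 0
    print("label flips under 1000 random perturbations:", int(flips))

The targeted perturbation flips the label every time, while a random perturbation of the same norm shifts the margin only by about eps/sqrt(d) and essentially never flips it.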

The mathematics of adversarial attacks in AI -- Why deep learning is unstable despite the existence of stable neural networks

no code implementations • 13 Sep 2021 • Alexander Bastounis, Anders C. Hansen, Verner Vlačić

Our paper addresses why this problem has resisted solution: we prove the following mathematical paradox. Any procedure for training neural networks of a fixed architecture on classification problems will yield networks that are either inaccurate or unstable (if accurate), despite the provable existence of both accurate and stable neural networks for the same classification problems.
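
Schematically, and in notation of our own choosing rather than the paper's, the paradox reads:

    % Schematic statement (notation ours): T = training procedure,
    % phi = trained network, psi = another network of the same architecture,
    % eps = a small perturbation budget.
    \[
    \forall\,\mathcal{T}:\quad \varphi = \mathcal{T}(\text{data})
    \;\Longrightarrow\;
    \varphi\ \text{is inaccurate, or}\quad
    \exists\, x,\ \exists\,\delta,\ \|\delta\|\le\varepsilon:\ \varphi(x+\delta)\neq\varphi(x),
    \]
    \[
    \text{while}\quad \exists\,\psi\ \text{of the same architecture class such that}\ \psi\ \text{is both accurate and stable.}
    \]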

The Feasibility and Inevitability of Stealth Attacks

no code implementations • 26 Jun 2021 • Ivan Y. Tyukin, Desmond J. Higham, Alexander Bastounis, Eliyas Woldegeorgis, Alexander N. Gorban

Such a stealth attack could be conducted by a mischievous, corrupt or disgruntled member of a software development team.
