1 code implementation • 3 Jun 2019 • Paweł Morawiecki, Przemysław Spurek, Marek Śmieja, Jacek Tabor
We present an efficient technique for training classification networks that are verifiably robust against norm-bounded adversarial attacks.
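One standard route to verifiable norm-bounded robustness is interval bound propagation, which pushes an input interval through the network and bounds the worst-case outputs. The sketch below is illustrative of that general idea, not the paper's specific method; the layer shapes and epsilon are assumed values.

```python
import numpy as np

def linear_interval_bounds(W, b, lower, upper):
    """Propagate the box [lower, upper] through y = W x + b.

    Interval arithmetic: the centre is mapped by W, the radius by |W|
    (element-wise absolute value), giving sound output bounds.
    """
    center = (upper + lower) / 2.0
    radius = (upper - lower) / 2.0
    new_center = W @ center + b
    new_radius = np.abs(W) @ radius
    return new_center - new_radius, new_center + new_radius

# Hypothetical input under an L-infinity perturbation of epsilon = 0.1.
x = np.array([0.5, -0.2])
eps = 0.1
W = np.array([[1.0, -2.0], [0.5, 3.0]])
b = np.array([0.1, 0.0])

lo, hi = linear_interval_bounds(W, b, x - eps, x + eps)
# lo and hi bound every output reachable by any perturbation within eps.
```

Stacking such bounds layer by layer (with interval rules for the nonlinearities) yields a certificate: if the worst-case logit of the true class still dominates, no attack within the norm ball can flip the prediction.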
no code implementations • 17 Jun 2020 • Bartosz Wójcik, Paweł Morawiecki, Marek Śmieja, Tomasz Krzyżek, Przemysław Spurek, Jacek Tabor
We present a mechanism for detecting adversarial examples based on data representations taken from the hidden layers of the target network.
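A minimal sketch of the general idea of detecting adversarial inputs from hidden representations: fit a statistic on the hidden activations of clean data and flag inputs whose activations are outliers. The Gaussian activations, centroid-distance detector, and threshold below are all assumptions made for illustration, not the paper's exact mechanism.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical hidden-layer activations: clean inputs cluster near a
# centroid, adversarial inputs drift away (assumed for this sketch).
clean = rng.normal(loc=0.0, scale=1.0, size=(200, 16))
adversarial = rng.normal(loc=2.5, scale=1.0, size=(200, 16))

# Fit a simple detector on clean activations only: distance to the mean,
# thresholded at the 95th percentile of clean distances.
mu = clean.mean(axis=0)
threshold = np.percentile(np.linalg.norm(clean - mu, axis=1), 95)

def is_adversarial(activation):
    """Flag an input whose hidden representation lies unusually far
    from the clean-data centroid."""
    return np.linalg.norm(activation - mu) > threshold

detection_rate = np.mean([is_adversarial(a) for a in adversarial])
```

Real detectors of this family replace the centroid distance with richer statistics (per-class densities, learned classifiers on activations), but the pipeline is the same: representations in, anomaly score out.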
1 code implementation • 16 Jun 2022 • Maciej Wołczyk, Karol J. Piczak, Bartosz Wójcik, Łukasz Pustelnik, Paweł Morawiecki, Jacek Tabor, Tomasz Trzciński, Przemysław Spurek
We introduce a new training paradigm that enforces interval constraints on neural network parameter space to control forgetting.
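The core mechanic of interval constraints on parameter space can be sketched as a projection step: after training a task, fix a per-parameter box, and during later tasks clip updated weights back into it. The weights, radius, and projection rule below are illustrative assumptions, not the paper's exact training paradigm.

```python
import numpy as np

def clamp_to_interval(params, lower, upper):
    """Project parameters back into per-parameter intervals so the
    network never leaves the region validated on earlier tasks
    (illustrative sketch of interval-constrained training)."""
    return np.clip(params, lower, upper)

# After task 1, suppose we fixed a box of radius 0.2 around the weights.
w_task1 = np.array([0.5, -1.0, 2.0])
radius = 0.2
lower, upper = w_task1 - radius, w_task1 + radius

# A gradient step on task 2 tries to move the weights out of the box...
w_after_step = np.array([0.9, -1.1, 1.7])
# ...but the projection keeps them inside, controlling forgetting.
w_constrained = clamp_to_interval(w_after_step, lower, upper)
```

The design trade-off is explicit: a tighter box forgets less but leaves less capacity for new tasks, while a wider box does the opposite.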
no code implementations • 28 Jun 2022 • Paweł Morawiecki, Andrii Krutsylo, Maciej Wołczyk, Marek Śmieja
Although this setting is natural for biological systems, it proves very difficult for machine learning models such as artificial neural networks.
no code implementations • 22 Jun 2023 • Jan Dubiński, Antoni Kowalczuk, Stanisław Pawlak, Przemysław Rokita, Tomasz Trzciński, Paweł Morawiecki
In this paper, we examine whether it is possible to determine if a specific image was used in the training set, a problem known in the cybersecurity community as a membership inference attack.
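The simplest baseline for membership inference is a loss-threshold attack: predict "member" when the model's loss on a sample is below a threshold, exploiting the fact that models fit their training data more tightly than unseen data. The per-sample losses and threshold below are assumed toy values, and this baseline is shown as general background, not as the paper's attack.

```python
import numpy as np

def loss_threshold_attack(losses, threshold):
    """Predict membership from per-sample model losses: samples with
    loss below the threshold are flagged as training-set members."""
    return losses < threshold

# Hypothetical losses: members (seen in training) tend to score lower.
member_losses = np.array([0.05, 0.10, 0.02, 0.20])
nonmember_losses = np.array([0.90, 1.50, 0.70, 1.10])

threshold = 0.5
member_preds = loss_threshold_attack(member_losses, threshold)
nonmember_preds = loss_threshold_attack(nonmember_losses, threshold)
```

Stronger attacks calibrate the threshold per sample (e.g. with shadow models), but the loss gap between members and non-members is the signal they all exploit.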