no code implementations • 16 Jan 2024 • Mikhail Pautov, Nikita Bogdanov, Stanislav Pyatkin, Oleg Rogov, Ivan Oseledets
As deep learning (DL) models are widely and effectively used in Machine Learning as a Service (MLaaS) platforms, there is a rapidly growing interest in DL watermarking techniques that can be used to confirm the ownership of a particular model.
no code implementations • 17 Aug 2023 • Dmitrii Korzh, Mikhail Pautov, Olga Tsymboi, Ivan Oseledets
Randomized smoothing is the state-of-the-art approach to construct image classifiers that are provably robust against additive adversarial perturbations of bounded magnitude.
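The core idea of randomized smoothing can be sketched as a Monte Carlo majority vote over Gaussian-noised copies of the input (a minimal illustration in the spirit of the standard construction; the toy base classifier, `sigma`, and sample count below are illustrative assumptions, not the paper's setup):

```python
import numpy as np

def smoothed_predict(base_classifier, x, sigma=0.25, n_samples=1000, seed=0):
    """Monte Carlo estimate of the smoothed classifier
    g(x) = argmax_c P[f(x + noise) = c], with noise ~ N(0, sigma^2 I)."""
    rng = np.random.default_rng(seed)
    counts = {}
    for _ in range(n_samples):
        noisy = x + rng.normal(0.0, sigma, size=x.shape)
        c = base_classifier(noisy)
        counts[c] = counts.get(c, 0) + 1
    # The winning class; the margin of the vote is what yields
    # a certified radius against additive perturbations.
    return max(counts, key=counts.get)

# Toy base classifier: sign of the first coordinate.
f = lambda v: int(v[0] > 0)
print(smoothed_predict(f, np.array([1.0, 0.0])))  # → 1
```

Averaging over noise makes the smoothed classifier's prediction stable under any additive perturbation smaller than a radius determined by the vote margin and `sigma`.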
1 code implementation • 20 Mar 2023 • Andrei Chertkov, Olga Tsymboi, Mikhail Pautov, Ivan Oseledets
Neural networks are widely deployed in natural language processing tasks at an industrial scale, perhaps most often as components of automatic machine translation systems.
1 code implementation • 2 Feb 2022 • Mikhail Pautov, Olesya Kuznetsova, Nurislam Tursynbek, Aleksandr Petiushko, Ivan Oseledets
In this work, we extend randomized smoothing to few-shot learning models that map inputs to normalized embeddings.
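The few-shot models referred to here classify by comparing normalized embeddings against class prototypes. A hypothetical sketch of that classification rule (the prototype construction and names are assumptions for illustration, not the paper's implementation):

```python
import numpy as np

def prototype_classify(z, prototypes):
    """Few-shot classification: assign the embedding z to the class whose
    prototype has the highest cosine similarity. All vectors are
    L2-normalized, as assumed for models mapping inputs to the sphere."""
    z = z / np.linalg.norm(z)
    P = prototypes / np.linalg.norm(prototypes, axis=1, keepdims=True)
    return int(np.argmax(P @ z))

# Two class prototypes and a query embedding close to class 0.
protos = np.array([[1.0, 0.0], [0.0, 1.0]])
print(prototype_classify(np.array([0.9, 0.1]), protos))  # → 0
```

Because the decision depends on angles between unit vectors rather than raw logits, extending smoothing-based certificates to this setting requires reasoning about perturbations on the embedding sphere.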
1 code implementation • 22 Sep 2021 • Mikhail Pautov, Nurislam Tursynbek, Marina Munkhoeva, Nikita Muravev, Aleksandr Petiushko, Ivan Oseledets
In safety-critical machine learning applications, it is crucial to defend models against adversarial attacks -- small modifications of the input that change the predictions.
no code implementations • 15 Oct 2019 • Mikhail Pautov, Grigorii Melnikov, Edgar Kaziakhmedov, Klim Kireev, Aleksandr Petiushko
We examine the security of one of the best public face recognition systems, LResNet100E-IR with ArcFace loss, and propose a simple method to attack it in the physical world.
5 code implementations • 14 Oct 2019 • Edgar Kaziakhmedov, Klim Kireev, Grigorii Melnikov, Mikhail Pautov, Aleksandr Petiushko
Recent studies have shown that deep learning approaches achieve remarkable results on the face detection task.