Search Results for author: Mikhail Pautov

Found 7 papers, 4 papers with code

Probabilistically Robust Watermarking of Neural Networks

no code implementations · 16 Jan 2024 · Mikhail Pautov, Nikita Bogdanov, Stanislav Pyatkin, Oleg Rogov, Ivan Oseledets

As deep learning (DL) models are widely and effectively used in Machine Learning as a Service (MLaaS) platforms, there is a rapidly growing interest in DL watermarking techniques that can be used to confirm the ownership of a particular model.

General Lipschitz: Certified Robustness Against Resolvable Semantic Transformations via Transformation-Dependent Randomized Smoothing

no code implementations · 17 Aug 2023 · Dmitrii Korzh, Mikhail Pautov, Olga Tsymboi, Ivan Oseledets

Randomized smoothing is the state-of-the-art approach to construct image classifiers that are provably robust against additive adversarial perturbations of bounded magnitude.

Translation
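Randomized smoothing, as described in the abstract above, classifies an input by majority vote of a base classifier over Gaussian-perturbed copies of that input. A minimal sketch of that voting step, with a hypothetical placeholder base classifier:

```python
import numpy as np

def smoothed_predict(classifier, x, sigma=0.25, n_samples=100, seed=0):
    """Majority-vote prediction of a Gaussian-smoothed classifier.
    `classifier` maps one input array to an integer label; it stands
    in for any base model (an assumption, not the paper's setup)."""
    rng = np.random.default_rng(seed)
    votes = {}
    for _ in range(n_samples):
        noisy = x + sigma * rng.standard_normal(x.shape)
        label = classifier(noisy)
        votes[label] = votes.get(label, 0) + 1
    return max(votes, key=votes.get)

# Toy base classifier: sign of the mean input value (hypothetical).
toy = lambda x: int(x.mean() > 0)
x = np.full(4, 0.5)
print(smoothed_predict(toy, x))  # 1
```

In the full method, the vote counts also yield a certified radius within which the prediction provably cannot change; the paper extends this machinery from additive perturbations to resolvable semantic transformations.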

Translate your gibberish: black-box adversarial attack on machine translation systems

1 code implementation · 20 Mar 2023 · Andrei Chertkov, Olga Tsymboi, Mikhail Pautov, Ivan Oseledets

Neural networks are widely deployed in industrial-scale natural language processing tasks, and perhaps most often they are used as components of automatic machine translation systems.

Adversarial Attack · Machine Translation +1

Smoothed Embeddings for Certified Few-Shot Learning

1 code implementation · 2 Feb 2022 · Mikhail Pautov, Olesya Kuznetsova, Nurislam Tursynbek, Aleksandr Petiushko, Ivan Oseledets

In this work, we extend randomized smoothing to few-shot learning models that map inputs to normalized embeddings.

Adversarial Robustness · Few-Shot Learning
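Extending smoothing to embedding models means averaging the encoder's outputs over Gaussian noise rather than taking a majority vote over labels. A minimal sketch of that idea, with a hypothetical stand-in encoder (the Monte-Carlo averaging and re-normalization are assumptions about the general recipe, not the paper's exact estimator):

```python
import numpy as np

def smoothed_embedding(embed, x, sigma=0.1, n_samples=200, seed=0):
    """Monte-Carlo estimate of a smoothed, normalized embedding:
    average the encoder's output under Gaussian input noise, then
    project back onto the unit sphere."""
    rng = np.random.default_rng(seed)
    acc = np.zeros_like(embed(x))
    for _ in range(n_samples):
        acc += embed(x + sigma * rng.standard_normal(x.shape))
    mean = acc / n_samples
    return mean / np.linalg.norm(mean)

# Toy encoder: normalize the input itself (hypothetical).
toy_embed = lambda x: x / np.linalg.norm(x)
z = smoothed_embedding(toy_embed, np.array([3.0, 4.0]))
print(np.linalg.norm(z))  # unit norm
```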

CC-Cert: A Probabilistic Approach to Certify General Robustness of Neural Networks

1 code implementation · 22 Sep 2021 · Mikhail Pautov, Nurislam Tursynbek, Marina Munkhoeva, Nikita Muravev, Aleksandr Petiushko, Ivan Oseledets

In safety-critical machine learning applications, it is crucial to defend models against adversarial attacks -- small modifications of the input that change the predictions.

Adversarial Robustness
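The "small modifications of the input that change the predictions" mentioned in the abstract are easiest to see on a linear scorer, where an FGSM-style step moves the score maximally within an L-infinity ball. This is a standard illustration of the threat model, not the certification method of the paper:

```python
import numpy as np

def fgsm_linear(w, x, eps=0.1):
    """FGSM-style perturbation for a linear scorer f(x) = w @ x.
    The gradient of f w.r.t. x is w, so stepping -eps * sign(w)
    maximally decreases the score within ||delta||_inf <= eps."""
    return x - eps * np.sign(w)

w = np.array([1.0, -2.0, 0.5])
x = np.array([0.2, 0.1, -0.3])
x_adv = fgsm_linear(w, x, eps=0.1)
print(w @ x, w @ x_adv)  # the adversarial score is strictly lower
```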

On adversarial patches: real-world attack on ArcFace-100 face recognition system

no code implementations · 15 Oct 2019 · Mikhail Pautov, Grigorii Melnikov, Edgar Kaziakhmedov, Klim Kireev, Aleksandr Petiushko

We examine security of one of the best public face recognition systems, LResNet100E-IR with ArcFace loss, and propose a simple method to attack it in the physical world.

Attribute · Face Recognition
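A patch attack of the kind described above places an adversarially optimized sticker in the camera's view; its digital analogue simply overwrites a rectangular image region with the patch. A minimal sketch of that paste step (shapes and names are illustrative assumptions; the paper's contribution is the physical-world realization, not this operation):

```python
import numpy as np

def apply_patch(image, patch, top, left):
    """Paste a (possibly adversarial) patch into an image at
    (top, left) -- the digital model of attaching a printed sticker."""
    out = image.copy()
    h, w = patch.shape[:2]
    out[top:top + h, left:left + w] = patch
    return out

img = np.zeros((8, 8))
patch = np.ones((3, 3))
attacked = apply_patch(img, patch, 2, 2)
print(attacked.sum())  # 9.0
```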
