no code implementations • 23 Feb 2024 • Zihan Zhou, Jonathan Booher, Khashayar Rohanimanesh, Wei Liu, Aleksandr Petiushko, Animesh Garg
Safe reinforcement learning tasks with multiple constraints are a challenging domain despite being very common in the real world.
1 code implementation • 7 Feb 2022 • Nikita Kotelevskii, Aleksandr Artemenkov, Kirill Fedyanin, Fedor Noskov, Alexander Fishkov, Artem Shelmanov, Artem Vazhentsev, Aleksandr Petiushko, Maxim Panov
This paper proposes a fast and scalable method for uncertainty quantification of machine learning models' predictions.
1 code implementation • 2 Feb 2022 • Mikhail Pautov, Olesya Kuznetsova, Nurislam Tursynbek, Aleksandr Petiushko, Ivan Oseledets
In this work, we extend randomized smoothing to few-shot learning models that map inputs to normalized embeddings.
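The entry above describes extending randomized smoothing to models that output normalized embeddings rather than class labels. The paper's exact construction is not given here; a minimal sketch of one natural variant, assuming a hypothetical `embed` function that returns an L2-normalized vector, is to average the embeddings of Gaussian-perturbed inputs and renormalize:

```python
import numpy as np

def smoothed_embedding(embed, x, sigma=0.1, n=100, rng=None):
    """Sketch of randomized smoothing for an embedding model: average
    the embeddings of Gaussian-perturbed copies of x, then renormalize.
    `embed` (hypothetical) maps an array to an L2-normalized vector."""
    rng = np.random.default_rng(rng)
    acc = None
    for _ in range(n):
        e = embed(x + rng.normal(0.0, sigma, size=x.shape))
        acc = e if acc is None else acc + e
    acc = acc / n
    return acc / (np.linalg.norm(acc) + 1e-12)
```

Because the output is an average over noisy inputs, small input perturbations can only move it a bounded amount, which is what makes certification possible.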
1 code implementation • 22 Nov 2021 • Daria Bakshandaeva, Denis Dimitrov, Vladimir Arkhipkin, Alex Shonenkov, Mark Potanin, Denis Karachev, Andrey Kuznetsov, Anton Voronov, Vera Davydova, Elena Tutubalina, Aleksandr Petiushko
Supporting the current trend in the AI community, we present the AI Journey 2021 Challenge called Fusion Brain, the first competition targeted at building a universal architecture that can process different modalities (in this case, images, texts, and code) and solve multiple tasks for vision and language.

1 code implementation • 22 Sep 2021 • Mikhail Pautov, Nurislam Tursynbek, Marina Munkhoeva, Nikita Muravev, Aleksandr Petiushko, Ivan Oseledets
In safety-critical machine learning applications, it is crucial to defend models against adversarial attacks -- small modifications of the input that change the predictions.
no code implementations • 8 Jul 2021 • Alexander Ivanov, Gleb Nosovskiy, Alexey Chekunov, Denis Fedoseev, Vladislav Kibkalo, Mikhail Nikulin, Fedor Popelenskiy, Stepan Komkov, Ivan Mazurenko, Aleksandr Petiushko
The probabilistic method is new.
no code implementations • 28 Jun 2021 • Nikita Muravev, Aleksandr Petiushko
Currently, the most popular method of providing robustness certificates is randomized smoothing, where an input is smoothed via some probability distribution.
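The mechanism named above can be illustrated concretely. This is not the paper's contribution, just a minimal sketch of standard randomized smoothing: classify many Gaussian-perturbed copies of the input and return the majority class, assuming a hypothetical base classifier `f` that maps an array to an integer label:

```python
import numpy as np

def smoothed_predict(f, x, sigma=0.25, n=1000, rng=None):
    """Randomized smoothing: approximate the smoothed classifier
    g(x) = argmax_c P[f(x + noise) = c], with noise ~ N(0, sigma^2 I),
    by Monte Carlo voting over n noisy samples."""
    rng = np.random.default_rng(rng)
    counts = {}
    for _ in range(n):
        noisy = x + rng.normal(0.0, sigma, size=x.shape)
        c = f(noisy)
        counts[c] = counts.get(c, 0) + 1
    return max(counts, key=counts.get)

# toy 1-D base classifier: thresholds the first coordinate at zero
f = lambda v: int(v[0] > 0.0)
print(smoothed_predict(f, np.array([2.0]), sigma=0.25))  # prints 1
```

The certified radius then follows from how confidently the majority class wins the vote; inputs far from the decision boundary get larger certificates.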
1 code implementation • 27 Jun 2021 • Anton Razzhigaev, Klim Kireev, Igor Udovichenko, Aleksandr Petiushko
Several methods for inversion of face recognition models were recently presented, attempting to reconstruct a face from deep templates.
3 code implementations • 19 Mar 2021 • Maksim Dzabraev, Maksim Kalashnikov, Stepan Komkov, Aleksandr Petiushko
We present a new state of the art on the text-to-video retrieval task on the MSRVTT and LSMDC benchmarks, where our model outperforms all previous solutions by a large margin.
Ranked #25 on Video Retrieval on LSMDC (using extra training data)
1 code implementation • 11 Feb 2021 • Fedor Pavutnitskiy, Sergei O. Ivanov, Evgeny Abramov, Viacheslav Borovitskiy, Artem Klochkov, Viktor Vialov, Anatolii Zaikovskii, Aleksandr Petiushko
The knowledge that data lies close to a particular submanifold of the ambient Euclidean space may be useful in a number of ways.
no code implementations • 14 Dec 2020 • Nurislam Tursynbek, Aleksandr Petiushko, Ivan Oseledets
Differential privacy (DP) is a gold-standard concept of measuring and guaranteeing privacy in data analysis.
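The definition above can be made concrete with the classic Laplace mechanism (not this paper's method, just the textbook DP primitive): to answer a numeric query with epsilon-differential privacy, add Laplace noise scaled to the query's sensitivity divided by epsilon:

```python
import numpy as np

def laplace_mechanism(true_value, sensitivity, epsilon, rng=None):
    """Release a numeric query answer with epsilon-differential privacy
    by adding Laplace(0, sensitivity / epsilon) noise."""
    rng = np.random.default_rng(rng)
    scale = sensitivity / epsilon
    return true_value + rng.laplace(0.0, scale)

# e.g. a counting query over a dataset has sensitivity 1:
# one person joining or leaving changes the count by at most 1
noisy_count = laplace_mechanism(42, sensitivity=1.0, epsilon=0.5)
```

Smaller epsilon means stronger privacy but noisier answers; the trade-off is governed entirely by the `sensitivity / epsilon` noise scale.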
1 code implementation • 4 Nov 2020 • Stepan Komkov, Maksim Dzabraev, Aleksandr Petiushko
In this paper, we explore the various methods to embed the ensemble power into a single model.
Ranked #47 on Action Recognition on Something-Something V2 (using extra training data)
1 code implementation • 27 Jul 2020 • Anton Razzhigaev, Klim Kireev, Edgar Kaziakhmedov, Nurislam Tursynbek, Aleksandr Petiushko
In this work, we present a novel algorithm based on iterative sampling of random Gaussian blobs for black-box face recovery, given only an output feature vector of a deep face recognition system.
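The iterative blob-sampling idea can be sketched as a simple greedy loop (the paper's actual algorithm is surely more refined; this is a toy sketch under that assumption, with `embed` standing in for the black-box feature extractor): propose a random Gaussian blob, and keep it only if it increases the similarity between the current image's embedding and the target feature vector.

```python
import numpy as np

def recover(embed, target, shape=(16, 16), steps=200, rng=None):
    """Toy greedy sketch of black-box recovery: add a random Gaussian
    blob to the image and keep it only if cosine similarity between
    embed(image) and the target feature vector improves.
    `embed` (hypothetical) is the black-box feature extractor."""
    rng = np.random.default_rng(rng)
    h, w = shape
    yy, xx = np.mgrid[0:h, 0:w]
    img = np.zeros(shape)

    def cos(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)

    best = cos(embed(img), target)
    for _ in range(steps):
        cy, cx = rng.uniform(0, h), rng.uniform(0, w)   # blob center
        s = rng.uniform(1.0, 4.0)                       # blob width
        amp = rng.uniform(-1.0, 1.0)                    # blob amplitude
        blob = amp * np.exp(-((yy - cy) ** 2 + (xx - cx) ** 2) / (2 * s * s))
        cand = img + blob
        score = cos(embed(cand), target)
        if score > best:  # greedy: keep only improving blobs
            img, best = cand, score
    return img
```

Note that the loop queries `embed` only on candidate images, never its gradients, which is what makes the attack black-box.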
no code implementations • 28 Jun 2020 • Nurislam Tursynbek, Aleksandr Petiushko, Ivan Oseledets
The brittleness of deep image classifiers to small adversarial input perturbations has been extensively studied in the last several years.
no code implementations • 15 Oct 2019 • Mikhail Pautov, Grigorii Melnikov, Edgar Kaziakhmedov, Klim Kireev, Aleksandr Petiushko
We examine security of one of the best public face recognition systems, LResNet100E-IR with ArcFace loss, and propose a simple method to attack it in the physical world.
5 code implementations • 14 Oct 2019 • Edgar Kaziakhmedov, Klim Kireev, Grigorii Melnikov, Mikhail Pautov, Aleksandr Petiushko
Recent studies have shown that deep learning approaches achieve remarkable results on the face detection task.
4 code implementations • 23 Aug 2019 • Stepan Komkov, Aleksandr Petiushko
In this paper, we propose a novel, easily reproducible technique to attack the best public Face ID system, ArcFace, in different shooting conditions.