no code implementations • 22 Mar 2024 • Teresa Yeo, Andrei Atanov, Harold Benoit, Aleksandr Alekseev, Ruchira Ray, Pooya Esmaeil Akhoondi, Amir Zamir
In this work, we present a method to control a text-to-image generative model to produce training data specifically "useful" for supervised learning.
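The paper's control mechanism for steering generation toward useful data is not shown here; as a rough illustration of the underlying pipeline only, the sketch below produces class-labeled synthetic training images with an off-the-shelf text-to-image model via the `diffusers` library. The checkpoint name, label set, and prompt template are all assumptions for illustration.

```python
# Sketch: synthesizing labeled training data with a text-to-image model.
# NOT the paper's guided method, just the generic pipeline it builds on.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # assumed checkpoint; any T2I model works
    torch_dtype=torch.float16,
).to("cuda")

classes = ["golden retriever", "tabby cat"]  # hypothetical label set
dataset = []
for label, name in enumerate(classes):
    for seed in range(4):  # a few samples per class
        g = torch.Generator("cuda").manual_seed(seed)
        image = pipe(f"a photo of a {name}", generator=g).images[0]
        dataset.append((image, label))  # (PIL image, class index) pairs
```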
no code implementations • 26 Dec 2023 • Harold Benoit, Liangze Jiang, Andrei Atanov, Oğuzhan Fatih Kar, Mattia Rigotti, Amir Zamir
We show that diversification methods are highly sensitive to the distribution of the unlabeled data used for diversification and can underperform significantly when far from a method-specific sweet spot.
no code implementations • 1 Dec 2022 • Andrei Atanov, Andrei Filatov, Teresa Yeo, Ajay Sohmshetty, Amir Zamir
An intriguing question would be: what if, instead of fixing the task and searching in the model space, we fix the model and search in the task space?
1 code implementation • 4 Apr 2022 • Roman Bachmann, David Mizrahi, Andrei Atanov, Amir Zamir
We show this pre-training strategy leads to a flexible, simple, and efficient framework with improved transfer results to downstream tasks.
Ranked #1 on Semantic Segmentation on Hypersim
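As a toy sketch of the masked multi-modal pre-training idea: patchify each modality, drop a large random subset of tokens, and encode only the visible ones. The two-modality setup, shapes, and mask ratio below are assumptions, and positional/modality embeddings and the per-modality reconstruction decoders are omitted for brevity.

```python
# Minimal sketch of multi-modal masked pre-training (MultiMAE-style).
import torch
import torch.nn as nn

patch, dim, ratio = 16, 256, 0.75           # patch size, token dim, mask ratio

def patchify(x):                             # (B, C, H, W) -> (B, N, C*patch*patch)
    B, C, H, W = x.shape
    x = x.unfold(2, patch, patch).unfold(3, patch, patch)
    return x.permute(0, 2, 3, 1, 4, 5).reshape(B, -1, C * patch * patch)

embed = {"rgb": nn.Linear(3 * patch**2, dim), "depth": nn.Linear(1 * patch**2, dim)}
encoder = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(dim, nhead=4, batch_first=True), num_layers=2
)

rgb, depth = torch.randn(2, 3, 64, 64), torch.randn(2, 1, 64, 64)
tokens = torch.cat([embed["rgb"](patchify(rgb)),
                    embed["depth"](patchify(depth))], dim=1)  # (B, N_total, dim)

# Keep a random subset of tokens across BOTH modalities; encode only those.
N = tokens.shape[1]
keep = torch.randperm(N)[: int(N * (1 - ratio))]
latent = encoder(tokens[:, keep])  # a decoder would reconstruct the masked patches
```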
1 code implementation • CVPR 2022 • Oğuzhan Fatih Kar, Teresa Yeo, Andrei Atanov, Amir Zamir
We introduce a set of image transformations that can be used as corruptions to evaluate the robustness of models as well as data augmentation mechanisms for training neural networks.
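One plausible flavor of such a transformation, using 3D scene information in the form of a depth map, is a depth-aware fog corruption; the sketch below is an illustrative assumption, not one of the paper's actual corruptions, and the constant `k` and fog color are made up.

```python
# Sketch: a depth-aware "fog" corruption driven by per-pixel depth.
import numpy as np

def fog_corruption(image, depth, k=1.5, fog=1.0):
    """image: (H, W, 3) floats in [0, 1]; depth: (H, W) in meters."""
    t = np.exp(-k * depth)[..., None]     # transmission falls with distance
    return image * t + fog * (1.0 - t)    # farther pixels fade into fog

# The same function doubles as a training-time augmentation, e.g.:
# aug = fog_corruption(img, depth, k=np.random.uniform(0.5, 2.0))
```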
no code implementations • 7 Feb 2022 • Andrei Atanov, Shijian Xu, Onur Beker, Andrei Filatov, Amir Zamir
Transfer learning has witnessed remarkable progress in recent years, for example, with the introduction of augmentation-based contrastive self-supervised learning methods.
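For reference, the loss at the core of such augmentation-based contrastive methods is the SimCLR-style NT-Xent objective, sketched below under standard assumptions (temperature value is illustrative); this is background, not the paper's contribution.

```python
# Sketch of the NT-Xent loss behind augmentation-based contrastive methods.
import torch
import torch.nn.functional as F

def nt_xent(z1, z2, tau=0.5):
    """z1, z2: (B, D) embeddings of two augmented views of the same images."""
    z = F.normalize(torch.cat([z1, z2]), dim=1)   # (2B, D), unit norm
    sim = z @ z.t() / tau                         # pairwise cosine similarities
    sim.fill_diagonal_(float("-inf"))             # exclude self-pairs
    B = z1.shape[0]
    targets = torch.cat([torch.arange(B, 2 * B), torch.arange(0, B)])
    return F.cross_entropy(sim, targets)          # pull the two views together
```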
no code implementations • 29 Sep 2021 • Andrei Atanov, Shijian Xu, Onur Beker, Andrey Filatov, Amir Zamir
Self-supervised learning has witnessed remarkable progress in recent years, in particular with the introduction of augmentation-based contrastive methods.
no code implementations • 15 Jun 2021 • Arsenii Ashukha, Andrei Atanov, Dmitry Vetrov
Averaging predictions over a set of models -- an ensemble -- is widely used to improve predictive performance and uncertainty estimation of deep learning models.
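The basic operation is simple to state in code: average the per-model class probabilities, with member disagreement serving as a crude uncertainty signal. A minimal sketch, assuming classification models that output logits:

```python
# Sketch: deep ensemble prediction by averaging per-model softmax outputs.
import torch

def ensemble_predict(models, x):
    """Average class probabilities over an ensemble; x is an input batch."""
    probs = torch.stack([m(x).softmax(dim=-1) for m in models])  # (M, B, C)
    mean = probs.mean(dim=0)   # ensemble prediction
    var = probs.var(dim=0)     # member disagreement as an uncertainty signal
    return mean, var
```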
3 code implementations • 1 May 2019 • Andrei Atanov, Alexandra Volokhova, Arsenii Ashukha, Ivan Sosnovik, Dmitry Vetrov
This paper proposes a semi-conditional normalizing flow model for semi-supervised learning.
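The basic building block of a conditional normalizing flow is an invertible coupling layer whose scale and shift can depend on side information such as a label embedding. The sketch below shows that block under illustrative assumptions (layer sizes, even input dimension); it is not the paper's architecture.

```python
# Sketch: an affine coupling layer conditioned on a label embedding.
import torch
import torch.nn as nn

class ConditionalCoupling(nn.Module):
    def __init__(self, dim, cond_dim, hidden=64):
        super().__init__()
        # Assumes dim is even so the two halves match.
        self.net = nn.Sequential(
            nn.Linear(dim // 2 + cond_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, dim),              # -> scale and shift
        )

    def forward(self, x, cond):
        x1, x2 = x.chunk(2, dim=1)               # split channels in half
        s, t = self.net(torch.cat([x1, cond], 1)).chunk(2, dim=1)
        s = torch.tanh(s)                        # keep scales bounded
        y2 = x2 * s.exp() + t                    # invertible affine map
        log_det = s.sum(dim=1)                   # log|det Jacobian| for the flow loss
        return torch.cat([x1, y2], dim=1), log_det
```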
2 code implementations • ICLR 2019 • Andrei Atanov, Arsenii Ashukha, Kirill Struminsky, Dmitry Vetrov, Max Welling
Bayesian inference is known to provide a general framework for incorporating prior knowledge or specific properties into machine learning models through a careful choice of the prior distribution.
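For reference, the prior enters through Bayes' rule (standard notation, not specific to the paper):

```latex
% The prior p(w) over parameters w is where structural knowledge enters.
p(w \mid \mathcal{D}) = \frac{p(\mathcal{D} \mid w)\, p(w)}{p(\mathcal{D})}
```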
1 code implementation • 13 Feb 2018 • Andrei Atanov, Arsenii Ashukha, Dmitry Molchanov, Kirill Neklyudov, Dmitry Vetrov
In this work, we investigate the Batch Normalization technique and propose a probabilistic interpretation of it.
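The deterministic operation whose probabilistic reading the paper develops is standard Batch Normalization, sketched below for a fully-connected layer:

```python
# Sketch: standard Batch Normalization at training time.
import torch

def batch_norm(x, gamma, beta, eps=1e-5):
    """x: (B, D) activations; gamma, beta: (D,) learned scale and shift."""
    mu = x.mean(dim=0)                    # per-feature batch mean
    var = x.var(dim=0, unbiased=False)    # per-feature batch variance
    x_hat = (x - mu) / torch.sqrt(var + eps)
    return gamma * x_hat + beta
```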