Search Results for author: Andrei Atanov

Found 11 papers, 5 papers with code

Controlled Training Data Generation with Diffusion Models

no code implementations • 22 Mar 2024 • Teresa Yeo, Andrei Atanov, Harold Benoit, Aleksandr Alekseev, Ruchira Ray, Pooya Esmaeil Akhoondi, Amir Zamir

In this work, we present a method to control a text-to-image generative model to produce training data specifically "useful" for supervised learning.

Language Modelling
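The snippet only states the goal, so here is a minimal sketch of the general pattern it implies, not the authors' method: synthesize labeled training images from class-specific prompts with an off-the-shelf text-to-image diffusion model. The checkpoint, prompts, and class names below are assumptions.

```python
# A minimal sketch of the general idea (not the authors' method): use an
# off-the-shelf text-to-image diffusion model to synthesize labeled training
# images from class-specific prompts. Checkpoint and prompts are placeholders.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",          # assumed checkpoint
    torch_dtype=torch.float16,
).to("cuda")

class_prompts = {                               # hypothetical label set
    "tench": "a photo of a tench, a freshwater fish",
    "goldfinch": "a photo of a goldfinch perched on a branch",
}

synthetic_dataset = []                          # list of (PIL.Image, label)
for label, prompt in class_prompts.items():
    images = pipe(prompt, num_images_per_prompt=4,
                  num_inference_steps=30, guidance_scale=7.5).images
    synthetic_dataset.extend((img, label) for img in images)

# The synthetic (image, label) pairs can then be fed to a standard
# supervised training pipeline, e.g. after torchvision transforms.
```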

Unraveling the Key Components of OOD Generalization via Diversification

no code implementations • 26 Dec 2023 • Harold Benoit, Liangze Jiang, Andrei Atanov, Oğuzhan Fatih Kar, Mattia Rigotti, Amir Zamir

We show that diversification methods are highly sensitive to the distribution of the unlabeled data used for diversification and can underperform significantly when away from a method-specific sweet spot.

Task Discovery: Finding the Tasks that Neural Networks Generalize on

no code implementations • 1 Dec 2022 • Andrei Atanov, Andrei Filatov, Teresa Yeo, Ajay Sohmshetty, Amir Zamir

An intriguing question would be: what if, instead of fixing the task and searching in the model space, we fix the model and search in the task space?
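To make "searching in the task space" concrete, the toy sketch below scores a candidate task by how well two networks trained on it from different random seeds agree on held-out inputs, which is the spirit of the paper's agreement score; the data, architectures, and training budget are simplified assumptions.

```python
# Toy sketch of scoring a candidate task by how well independently trained
# networks agree on it (the spirit of the paper's agreement score); data,
# models, and training details here are simplified assumptions.
import torch
import torch.nn as nn

def make_mlp():
    return nn.Sequential(nn.Linear(2, 64), nn.ReLU(), nn.Linear(64, 2))

def train(net, x, y, steps=200):
    opt = torch.optim.Adam(net.parameters(), lr=1e-2)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(steps):
        opt.zero_grad()
        loss_fn(net(x), y).backward()
        opt.step()
    return net

def agreement_score(task_fn, x_train, x_test):
    """Train two nets from different seeds on the task's labels and
    measure how often they agree on held-out inputs."""
    y_train = task_fn(x_train)
    preds = []
    for seed in (0, 1):
        torch.manual_seed(seed)
        net = train(make_mlp(), x_train, y_train)
        preds.append(net(x_test).argmax(dim=1))
    return (preds[0] == preds[1]).float().mean().item()

x_train, x_test = torch.randn(512, 2), torch.randn(512, 2)
generalizable_task = lambda x: (x[:, 0] > 0).long()          # simple rule
random_task = lambda x: torch.randint(0, 2, (x.shape[0],))   # arbitrary labels
print("simple rule:  ", agreement_score(generalizable_task, x_train, x_test))
print("random labels:", agreement_score(random_task, x_train, x_test))
```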

MultiMAE: Multi-modal Multi-task Masked Autoencoders

1 code implementation • 4 Apr 2022 • Roman Bachmann, David Mizrahi, Andrei Atanov, Amir Zamir

We show this pre-training strategy leads to a flexible, simple, and efficient framework with improved transfer results to downstream tasks.

Depth Estimation • Image Classification • +1
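The abstract snippet summarizes results rather than mechanics, so below is a toy sketch of multi-modal masked pre-training in the spirit of the title, not the released MultiMAE code: patch tokens from two modalities are embedded, a random subset is encoded by a shared Transformer, and per-modality heads reconstruct all patches. The deliberately crude decoder and the dimensions are assumptions.

```python
# Toy sketch of multi-modal masked pre-training in the spirit of MultiMAE
# (not the released implementation): shared encoder over a random subset of
# tokens from two modalities, per-modality reconstruction of what was masked.
import torch
import torch.nn as nn

patch_dim_rgb, patch_dim_depth, d_model = 16 * 16 * 3, 16 * 16 * 1, 256

embed = nn.ModuleDict({
    "rgb":   nn.Linear(patch_dim_rgb, d_model),
    "depth": nn.Linear(patch_dim_depth, d_model),
})
encoder = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True),
    num_layers=2,
)
heads = nn.ModuleDict({
    "rgb":   nn.Linear(d_model, patch_dim_rgb),
    "depth": nn.Linear(d_model, patch_dim_depth),
})

def multimodal_mae_loss(patches, keep_ratio=0.25):
    """patches: dict modality -> (B, N, patch_dim). Encode a random subset of
    all tokens, then reconstruct every patch of every modality from the mean
    of the visible tokens (a deliberately simplified decoder)."""
    tokens = torch.cat([embed[m](p) for m, p in patches.items()], dim=1)
    B, N, _ = tokens.shape
    keep = torch.rand(B, N).argsort(dim=1)[:, : int(N * keep_ratio)]
    visible = torch.gather(tokens, 1, keep.unsqueeze(-1).expand(-1, -1, d_model))
    context = encoder(visible).mean(dim=1, keepdim=True)   # (B, 1, d_model)
    loss = 0.0
    for m, p in patches.items():
        recon = heads[m](context).expand(-1, p.shape[1], -1)
        loss = loss + nn.functional.mse_loss(recon, p)
    return loss

batch = {"rgb": torch.randn(2, 196, patch_dim_rgb),
         "depth": torch.randn(2, 196, patch_dim_depth)}
print(multimodal_mae_loss(batch).item())
```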

3D Common Corruptions and Data Augmentation

1 code implementation • CVPR 2022 • Oğuzhan Fatih Kar, Teresa Yeo, Andrei Atanov, Amir Zamir

We introduce a set of image transformations that can be used as corruptions to evaluate the robustness of models as well as data augmentation mechanisms for training neural networks.

Benchmarking • Data Augmentation
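The paper's corruptions exploit 3D scene information; the sketch below only illustrates the generic pattern of using a corruption both for robustness evaluation and as training-time augmentation, with a plain 2D Gaussian-noise corruption standing in for the geometry-aware ones.

```python
# Generic pattern of using an image corruption both to evaluate robustness
# and as a training-time augmentation. A plain 2D Gaussian-noise corruption
# stands in here for the paper's 3D, geometry-aware corruptions.
import torch

def gaussian_noise(images, severity=1):
    """images: float tensor in [0, 1], shape (B, C, H, W)."""
    sigma = [0.04, 0.08, 0.12, 0.18, 0.26][severity - 1]
    return (images + sigma * torch.randn_like(images)).clamp(0.0, 1.0)

@torch.no_grad()
def corrupted_accuracy(model, loader, severity=3, device="cpu"):
    """Robustness evaluation: accuracy on corrupted copies of the test set."""
    model.eval()
    correct = total = 0
    for images, labels in loader:
        images, labels = images.to(device), labels.to(device)
        preds = model(gaussian_noise(images, severity)).argmax(dim=1)
        correct += (preds == labels).sum().item()
        total += labels.numel()
    return correct / total

def augment_batch(images, p=0.5):
    """Training-time use: corrupt each batch with probability p."""
    return gaussian_noise(images) if torch.rand(()) < p else images
```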

Simple Control Baselines for Evaluating Transfer Learning

no code implementations • 7 Feb 2022 • Andrei Atanov, Shijian Xu, Onur Beker, Andrei Filatov, Amir Zamir

Transfer learning has witnessed remarkable progress in recent years, for example, with the introduction of augmentation-based contrastive self-supervised learning methods.

Image Classification • Self-Supervised Learning • +1
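For readers unfamiliar with the augmentation-based contrastive methods the snippet refers to, here is a compact SimCLR-style NT-Xent loss over two augmented views; it illustrates that family of methods only and is not part of this paper's baselines.

```python
# SimCLR-style NT-Xent contrastive loss over two augmented views of a batch,
# shown only to illustrate the augmentation-based contrastive methods the
# snippet refers to; it is not specific to this paper.
import torch
import torch.nn.functional as F

def nt_xent(z1, z2, temperature=0.5):
    """z1, z2: (B, D) embeddings of two augmented views of the same images."""
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)        # (2B, D)
    sim = z @ z.t() / temperature                              # cosine sims
    sim.fill_diagonal_(float("-inf"))                          # drop self-pairs
    B = z1.shape[0]
    targets = torch.cat([torch.arange(B, 2 * B), torch.arange(0, B)])
    return F.cross_entropy(sim, targets)

z1, z2 = torch.randn(8, 128), torch.randn(8, 128)
print(nt_xent(z1, z2).item())
```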

Measuring the Effectiveness of Self-Supervised Learning using Calibrated Learning Curves

no code implementations • 29 Sep 2021 • Andrei Atanov, Shijian Xu, Onur Beker, Andrey Filatov, Amir Zamir

Self-supervised learning has witnessed remarkable progress in recent years, in particular with the introduction of augmentation-based contrastive methods.

Image Classification • Self-Supervised Learning • +1

Mean Embeddings with Test-Time Data Augmentation for Ensembling of Representations

no code implementations • 15 Jun 2021 • Arsenii Ashukha, Andrei Atanov, Dmitry Vetrov

Averaging predictions over a set of models -- an ensemble -- is widely used to improve predictive performance and uncertainty estimation of deep learning models.

Data Augmentation • Image Retrieval • +2
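A minimal sketch of the idea suggested by the title: average an encoder's embeddings over several test-time augmentations of each image. The encoder, transforms, and number of views are placeholders, and the paper's exact protocol may differ.

```python
# Minimal sketch of averaging an encoder's embeddings over test-time
# augmentations. Encoder and transforms are placeholders; the paper's
# exact averaging and retrieval protocol may differ.
import torch
from torchvision import models, transforms
from PIL import Image

encoder = models.resnet50(weights="IMAGENET1K_V2")
encoder.fc = torch.nn.Identity()          # use pooled features as embeddings
encoder.eval()

tta = transforms.Compose([
    transforms.RandomResizedCrop(224, scale=(0.8, 1.0)),
    transforms.RandomHorizontalFlip(),
    transforms.ToTensor(),
])

@torch.no_grad()
def mean_embedding(image: Image.Image, n_views: int = 8) -> torch.Tensor:
    views = torch.stack([tta(image) for _ in range(n_views)])   # (n, 3, 224, 224)
    return encoder(views).mean(dim=0)                           # (2048,)

# The resulting mean embeddings can be compared with cosine similarity for
# retrieval, or fed to a linear classifier, in place of single-view features.
```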

The Deep Weight Prior

2 code implementations • ICLR 2019 • Andrei Atanov, Arsenii Ashukha, Kirill Struminsky, Dmitry Vetrov, Max Welling

Bayesian inference is known to provide a general framework for incorporating prior knowledge or specific properties into machine learning models via carefully choosing a prior distribution.

Bayesian Inference • Variational Inference
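The snippet describes the general Bayesian framing; the sketch below shows where the prior enters, via the KL term of the ELBO, using mean-field variational inference for a small linear model. The factorized standard Gaussian prior here is a generic placeholder, not the paper's learned deep weight prior.

```python
# Compact sketch of mean-field variational inference for a weight vector,
# ELBO = E_q[log p(D|w)] - KL(q(w) || p(w)); the prior here is a generic
# standard Gaussian placeholder, not the paper's learned deep weight prior.
import torch
import torch.distributions as dist

torch.manual_seed(0)
x = torch.randn(200, 5)
true_w = torch.randn(5)
y = x @ true_w + 0.1 * torch.randn(200)

# Variational posterior q(w) = N(mu, diag(sigma^2)), learned parameters.
mu = torch.zeros(5, requires_grad=True)
log_sigma = torch.zeros(5, requires_grad=True)
prior = dist.Normal(torch.zeros(5), torch.ones(5))      # placeholder prior p(w)
opt = torch.optim.Adam([mu, log_sigma], lr=0.05)

for step in range(500):
    q = dist.Normal(mu, log_sigma.exp())
    w = q.rsample()                                     # reparameterized sample
    log_lik = dist.Normal(x @ w, 0.1).log_prob(y).sum() # E_q[log p(D|w)], 1 sample
    kl = dist.kl_divergence(q, prior).sum()             # KL(q(w) || p(w))
    loss = -(log_lik - kl)                              # negative ELBO
    opt.zero_grad()
    loss.backward()
    opt.step()

print("posterior mean:", mu.detach(), "\ntrue weights:  ", true_w)
```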

Uncertainty Estimation via Stochastic Batch Normalization

1 code implementation • 13 Feb 2018 • Andrei Atanov, Arsenii Ashukha, Dmitry Molchanov, Kirill Neklyudov, Dmitry Vetrov

In this work, we investigate the Batch Normalization technique and propose its probabilistic interpretation.
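A rough sketch of the theme: keep batch normalization layers in training mode at test time so that batch statistics, and hence predictions, vary across forward passes, and read the spread of predictions as an uncertainty signal. This approximates the idea only and is not the authors' exact stochastic batch normalization procedure; the reference pool of unlabeled inputs is an assumption.

```python
# Rough sketch of treating batch normalization stochastically at test time:
# keep BN layers in training mode so batch statistics vary across forward
# passes, and read the spread of predictions as an uncertainty signal. This
# approximates the theme only; it is not the authors' exact procedure.
import torch
import torch.nn as nn

model = nn.Sequential(               # placeholder network with BN
    nn.Linear(10, 64), nn.BatchNorm1d(64), nn.ReLU(), nn.Linear(64, 3)
)

@torch.no_grad()
def stochastic_bn_predict(model, x_test, reference_pool, n_samples=20):
    """Average softmax outputs over forward passes in which the test inputs
    are normalized together with different random reference batches."""
    model.train()                    # BN uses (stochastic) batch statistics
    probs = []
    for _ in range(n_samples):
        idx = torch.randint(0, reference_pool.shape[0], (64,))
        batch = torch.cat([x_test, reference_pool[idx]], dim=0)
        logits = model(batch)[: x_test.shape[0]]
        probs.append(logits.softmax(dim=1))
    probs = torch.stack(probs)       # (n_samples, N, classes)
    return probs.mean(dim=0), probs.std(dim=0)   # prediction and spread

x_test = torch.randn(5, 10)
pool = torch.randn(1000, 10)         # unlabeled reference inputs (assumed)
mean_probs, spread = stochastic_bn_predict(model, x_test, pool)
print(mean_probs.shape, spread.shape)
```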
