1 code implementation • 15 Feb 2024 • Romain Ilbert, Ambroise Odonnat, Vasilii Feofanov, Aladin Virmaux, Giuseppe Paolo, Themis Palpanas, Ievgen Redko
Transformer-based architectures have achieved breakthrough performance in natural language processing and computer vision, yet they remain inferior to simpler linear baselines in multivariate long-term forecasting.
no code implementations • 17 Jan 2024 • Renchunzi Xie, Ambroise Odonnat, Vasilii Feofanov, Ievgen Redko, Jianfeng Zhang, Bo An
Our key idea is that the model should be updated with gradients of larger magnitude when it fails to generalize to a test dataset under distribution shift.
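The idea above — scaling the size of test-time updates by how strongly the test batch appears shifted — can be sketched as follows. This is an illustrative toy, not the paper's method: the function name `shift_scaled_update` and the scalar `shift_score` are hypothetical stand-ins for whatever shift estimate and adjustment rule the paper actually uses.

```python
import numpy as np

def shift_scaled_update(params, grad, shift_score, base_lr=0.01):
    """Scale a gradient step by an estimated distribution-shift score.

    `shift_score` in [0, 1]: 0 means the test batch looks in-distribution,
    1 means a strong shift, so the update magnitude grows with it.
    (Hypothetical rule for illustration only.)
    """
    lr = base_lr * (1.0 + shift_score)  # larger step under larger shift
    return params - lr * grad

params = np.array([1.0, -2.0])
grad = np.array([0.5, 0.5])

# No shift: the standard step; strong shift: double the step size.
no_shift = shift_scaled_update(params, grad, shift_score=0.0)
strong_shift = shift_scaled_update(params, grad, shift_score=1.0)
```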
1 code implementation • 23 Oct 2023 • Ambroise Odonnat, Vasilii Feofanov, Ievgen Redko
Self-training is a well-known approach for semi-supervised learning.
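For readers unfamiliar with the approach, here is a minimal self-training loop: a classifier is fit on the labeled set, pseudo-labels the unlabeled points it is confident about, and retrains on the enlarged set. The nearest-centroid base learner and the softmax-over-distances confidence are stand-ins chosen for brevity; actual self-training variants (including the paper's) differ in both.

```python
import numpy as np

def self_train(X_lab, y_lab, X_unlab, n_rounds=5, threshold=0.9):
    """Minimal self-training sketch with a nearest-centroid base learner."""
    X, y = X_lab.copy(), y_lab.copy()
    mask = np.ones(len(X_unlab), dtype=bool)  # still-unlabeled points
    for _ in range(n_rounds):
        # "Fit" the base learner: one centroid per class.
        classes = np.unique(y)
        centroids = np.stack([X[y == c].mean(axis=0) for c in classes])
        if not mask.any():
            break
        # Confidence: softmax over negative distances (stabilized).
        d = np.linalg.norm(X_unlab[mask][:, None] - centroids[None], axis=2)
        s = -d - (-d).max(axis=1, keepdims=True)
        probs = np.exp(s) / np.exp(s).sum(axis=1, keepdims=True)
        conf = probs.max(axis=1)
        pred = classes[probs.argmax(axis=1)]
        confident = conf >= threshold
        if not confident.any():
            break
        # Add confident pseudo-labeled points to the training set.
        idx = np.where(mask)[0][confident]
        X = np.vstack([X, X_unlab[idx]])
        y = np.concatenate([y, pred[confident]])
        mask[idx] = False
    return centroids, classes

# Two labeled points, two nearby unlabeled points: both get pseudo-labeled,
# and the centroids shift toward the enlarged clusters.
X_lab = np.array([[0.0, 0.0], [10.0, 10.0]])
y_lab = np.array([0, 1])
X_unlab = np.array([[0.5, 0.5], [9.5, 9.5]])
centroids, classes = self_train(X_lab, y_lab, X_unlab)
```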
no code implementations • 20 Oct 2023 • Vasilii Feofanov, Malik Tiomoko, Aladin Virmaux
As an application, we derive a hyperparameter selection policy that finds the best balance between the supervised and the unsupervised terms of our learning criterion.
no code implementations • 24 Feb 2022 • Massih-Reza Amini, Vasilii Feofanov, Loic Pauletto, Lies Hadjadj, Emilie Devijver, Yury Maximov
Semi-supervised algorithms aim to learn prediction functions from a small set of labeled observations and a large set of unlabeled observations.
no code implementations • 29 Sep 2021 • Vasilii Feofanov, Emilie Devijver, Massih-Reza Amini
First, we derive a transductive bound on the risk of the multi-class majority vote classifier.
no code implementations • 12 Nov 2019 • Vasilii Feofanov, Emilie Devijver, Massih-Reza Amini
In this paper, we propose a new wrapper feature selection approach for partially labeled training examples, where unlabeled observations are pseudo-labeled using the predictions of an initial classifier trained on the labeled training set.
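The general scheme — pseudo-label the unlabeled data with an initial classifier, then run wrapper (greedy forward) feature selection on the combined set — can be sketched as below. This is a simplified stand-in, not the paper's algorithm: `y_pseudo` is assumed to come from some initial classifier, and the inner nearest-centroid scorer replaces whatever classifier and criterion the paper actually wraps.

```python
import numpy as np

def centroid_accuracy(X, y, feats):
    """Accuracy of a nearest-centroid classifier restricted to `feats`."""
    Xf = X[:, feats]
    classes = np.unique(y)
    centroids = np.stack([Xf[y == c].mean(axis=0) for c in classes])
    d = np.linalg.norm(Xf[:, None] - centroids[None], axis=2)
    return (classes[d.argmin(axis=1)] == y).mean()

def wrapper_select(X_lab, y_lab, X_unlab, y_pseudo, n_feats):
    """Greedy forward wrapper selection on labeled + pseudo-labeled data."""
    X = np.vstack([X_lab, X_unlab])
    y = np.concatenate([y_lab, y_pseudo])
    selected, remaining = [], list(range(X.shape[1]))
    while len(selected) < n_feats and remaining:
        # Score each candidate feature added to the current subset.
        scores = [centroid_accuracy(X, y, selected + [f]) for f in remaining]
        best = remaining[int(np.argmax(scores))]
        selected.append(best)
        remaining.remove(best)
    return selected

# Feature 0 separates the classes; feature 1 is a constant (uninformative),
# so greedy selection picks feature 0 first.
X_lab = np.array([[0.0, 5.0], [0.2, 5.0], [10.0, 5.0], [9.8, 5.0]])
y_lab = np.array([0, 0, 1, 1])
X_unlab = np.array([[0.1, 5.0], [9.9, 5.0]])
y_pseudo = np.array([0, 1])  # assumed output of an initial classifier
selected = wrapper_select(X_lab, y_lab, X_unlab, y_pseudo, n_feats=1)
```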