1 code implementation • 9 Oct 2021 • Javad Zolfaghari Bengar, Joost Van de Weijer, Laura Lopez Fuentes, Bogdan Raducanu
Results on three datasets show that the method is general (it can be combined with most existing active learning algorithms) and effectively boosts the performance of both informativeness-based and representativeness-based active learning methods.
no code implementations • 25 Aug 2021 • Javad Zolfaghari Bengar, Joost Van de Weijer, Bartlomiej Twardowski, Bogdan Raducanu
Our experiments reveal that self-training is remarkably more efficient than active learning at reducing the labeling effort, that for a low labeling budget, active learning offers no benefit to self-training, and finally that the combination of active learning and self-training is fruitful when the labeling budget is high.
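To make the interplay concrete, below is a minimal sketch of one common way to combine self-training with active learning: confident model predictions are kept as pseudo-labels, while the least confident samples are sent to the oracle. The threshold and budget values are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def split_unlabeled_pool(probs, pseudo_label_threshold=0.95, query_budget=100):
    """Split an unlabeled pool into pseudo-labeled and oracle-queried subsets.

    probs: (N, C) array of softmax probabilities from the current model.
    Threshold and budget are illustrative, not values from the paper.
    """
    confidence = probs.max(axis=1)      # highest class probability per sample
    predictions = probs.argmax(axis=1)

    # Self-training: accept confident predictions as pseudo-labels.
    pseudo_idx = np.where(confidence >= pseudo_label_threshold)[0]
    pseudo_labels = predictions[pseudo_idx]

    # Active learning: query the oracle on the least confident remainder.
    remaining = np.where(confidence < pseudo_label_threshold)[0]
    query_idx = remaining[np.argsort(confidence[remaining])[:query_budget]]
    return pseudo_idx, pseudo_labels, query_idx
```

Under this scheme, a small labeling budget is largely redundant with what pseudo-labeling already recovers, which is consistent with the finding that active learning only pays off once the budget is high.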
no code implementations • 30 Jul 2021 • Javad Zolfaghari Bengar, Bogdan Raducanu, Joost Van de Weijer
Many methods approach this problem by measuring the informativeness of samples, typically based on the certainty of the network's predictions for those samples.
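As a minimal sketch of the certainty-based family of informativeness measures the abstract refers to (not this paper's specific criterion), one common choice scores each unlabeled sample by the entropy of its predicted class distribution:

```python
import numpy as np

def entropy_informativeness(probs, eps=1e-12):
    """Score samples by predictive entropy: higher entropy means the network
    is less certain, hence (under this heuristic) more informative to label.

    probs: (N, C) array of softmax probabilities for N unlabeled samples.
    """
    return -np.sum(probs * np.log(probs + eps), axis=1)

# Example: pick the k most uncertain samples to annotate.
probs = np.array([[0.98, 0.02], [0.55, 0.45], [0.70, 0.30]])
k = 1
query = np.argsort(-entropy_informativeness(probs))[:k]  # -> sample 1
```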
no code implementations • 30 Aug 2019 • Javad Zolfaghari Bengar, Abel Gonzalez-Garcia, Gabriel Villalonga, Bogdan Raducanu, Hamed H. Aghdam, Mikhail Mozerov, Antonio M. Lopez, Joost Van de Weijer
Our active learning criterion is based on the estimated number of errors in terms of false positives and false negatives.
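As a rough illustration of an error-count-style criterion for a detector (the bands and the proxy itself are assumptions for illustration, not the paper's exact estimator), one can score an image by how many detections fall near the decision threshold, where false positives and false negatives are most likely:

```python
import numpy as np

def error_proxy_score(det_scores, decision_thr=0.5, margin=0.2):
    """Proxy for the number of detection errors in one image.

    det_scores: 1-D array of detector confidences for candidate boxes.
    Bands around the threshold are illustrative assumptions.
    """
    s = np.asarray(det_scores, dtype=float)
    # Accepted detections close to the threshold: plausible false positives.
    est_fp = np.sum((s >= decision_thr) & (s < decision_thr + margin))
    # Rejected detections close to the threshold: plausible false negatives.
    est_fn = np.sum((s < decision_thr) & (s >= decision_thr - margin))
    return est_fp + est_fn  # images with more estimated errors are queried first
```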