no code implementations • 13 Jun 2013 • Maksim Lapin, Matthias Hein, Bernt Schiele
Prior knowledge can be used to improve predictive performance of learning algorithms or reduce the amount of data required for training.
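One common way to inject prior knowledge into a learner is through per-sample weights, as in weighted SVM variants. A minimal sketch using scikit-learn's `sample_weight` parameter; the weighting scheme here is hypothetical and purely illustrative, not the paper's method:

```python
# Illustrative sketch (not the paper's exact method): encoding prior
# knowledge as per-sample weights in an SVM, so that examples believed
# to be more reliable influence the decision boundary more strongly.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(int)

# Hypothetical prior: we trust the first half of the samples more.
sample_weight = np.where(np.arange(len(y)) < 100, 2.0, 1.0)

clf = SVC(kernel="linear")
clf.fit(X, y, sample_weight=sample_weight)  # weighted SVM training
print(clf.score(X, y))
```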
no code implementations • CVPR 2014 • Maksim Lapin, Bernt Schiele, Matthias Hein
The underlying idea of multitask learning is that learning several tasks jointly yields better performance than learning each task individually.
no code implementations • NeurIPS 2015 • Pratik Jawanpuria, Maksim Lapin, Matthias Hein, Bernt Schiele
The paradigm of multi-task learning is that better generalization can be achieved by learning tasks jointly, thereby exploiting the similarities between them, rather than learning each task independently of the others.
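The joint-learning idea behind these two entries can be sketched as a shared representation feeding task-specific heads, so that statistical strength is shared across related tasks. The architecture and sizes below are illustrative placeholders, not the models proposed in the papers:

```python
# Minimal multi-task sketch: one shared trunk, one head per task,
# all tasks trained together through a summed loss.
import torch
import torch.nn as nn

class MultiTaskNet(nn.Module):
    def __init__(self, in_dim=16, hidden=32, num_tasks=3, classes_per_task=4):
        super().__init__()
        self.shared = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        self.heads = nn.ModuleList(
            [nn.Linear(hidden, classes_per_task) for _ in range(num_tasks)]
        )

    def forward(self, x):
        z = self.shared(x)                        # representation learned jointly
        return [head(z) for head in self.heads]   # one prediction per task

net = MultiTaskNet()
x = torch.randn(8, 16)
targets = [torch.randint(0, 4, (8,)) for _ in range(3)]
loss = sum(nn.functional.cross_entropy(out, t)
           for out, t in zip(net(x), targets))    # tasks trained together
loss.backward()
```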
1 code implementation • NeurIPS 2015 • Maksim Lapin, Matthias Hein, Bernt Schiele
Class ambiguity is typical in image classification problems with a large number of classes.
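Class ambiguity motivates evaluating with the top-k error: a prediction counts as correct if the ground-truth label appears among the k highest-scoring classes. A minimal sketch of the metric on random scores:

```python
# Top-k error: fraction of samples whose true label is NOT among
# the k highest-scoring classes.
import numpy as np

def topk_error(scores, labels, k=5):
    topk = np.argsort(-scores, axis=1)[:, :k]     # k best classes per sample
    hits = (topk == labels[:, None]).any(axis=1)  # true label among them?
    return 1.0 - hits.mean()

rng = np.random.default_rng(0)
scores = rng.normal(size=(100, 1000))   # 100 samples, 1000 classes
labels = rng.integers(0, 1000, size=100)
print(topk_error(scores, labels, k=5))
```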
1 code implementation • CVPR 2016 • Maksim Lapin, Matthias Hein, Bernt Schiele
In the experiments, we compare all of the proposed and established methods for top-k error optimization on various datasets.
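A simplified surrogate in the spirit of a top-k hinge loss, which penalizes only the k largest margin violations; the variants studied in the paper differ in details such as where the truncation is applied, so this is a sketch of the idea rather than the paper's exact loss:

```python
# Simplified top-k hinge surrogate (illustrative only): compute margin
# violations of all wrong classes against the true class, then average
# only the k largest violations, clipped at zero.
import numpy as np

def topk_hinge(scores, label, k=5, margin=1.0):
    a = margin + scores - scores[label]   # violation of each class
    a = np.delete(a, label)               # drop the true class itself
    worst_k = np.sort(a)[-k:]             # k largest violations
    return np.maximum(worst_k, 0.0).mean()

scores = np.array([2.0, 0.5, 1.8, -0.3, 1.9, 0.1])
print(topk_hinge(scores, label=0, k=2))  # 0.85
```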
1 code implementation • 12 Dec 2016 • Maksim Lapin, Matthias Hein, Bernt Schiele
In particular, we find that it is possible to obtain effective multilabel classifiers on Pascal VOC using a single label per image for training, while the gap between multiclass and multilabel methods on MS COCO is more significant.
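The multiclass/multilabel distinction in the snippet above can be made concrete: a multiclass model uses a softmax over mutually exclusive classes, while a multilabel model scores each class independently with a sigmoid. Shapes and data below are illustrative placeholders:

```python
# Multiclass vs. multilabel objectives on the same logits.
import torch
import torch.nn.functional as F

logits = torch.randn(4, 20)                      # 4 images, 20 classes

# Multiclass: exactly one label per image (softmax cross-entropy).
y_single = torch.randint(0, 20, (4,))
multiclass_loss = F.cross_entropy(logits, y_single)

# Multilabel: any subset of classes per image (per-class sigmoid).
y_multi = (torch.rand(4, 20) > 0.8).float()      # 0/1 indicator vector
multilabel_loss = F.binary_cross_entropy_with_logits(logits, y_multi)

print(multiclass_loss.item(), multilabel_loss.item())
```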
no code implementations • 26 Apr 2022 • Mengmeng Xu, Erhan Gundogdu, Maksim Lapin, Bernard Ghanem, Michael Donoser, Loris Bazzani
Long-form video understanding requires approaches that can temporally localize activities or language.
Tasks: Contrastive Learning, Few-Shot Temporal Action Localization, +3
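A minimal sketch of contrastive video-language alignment with a symmetric InfoNCE objective: embeddings of matching clip/sentence pairs are pulled together and mismatched pairs pushed apart. The dimensions, temperature, and random embeddings are placeholders, not the paper's architecture:

```python
# Symmetric InfoNCE over a batch of matching (clip, sentence) pairs.
import torch
import torch.nn.functional as F

video = F.normalize(torch.randn(8, 256), dim=1)  # 8 clip embeddings
text = F.normalize(torch.randn(8, 256), dim=1)   # 8 matching sentences

logits = video @ text.t() / 0.07                 # cosine sims / temperature
targets = torch.arange(8)                        # i-th clip matches i-th text
loss = (F.cross_entropy(logits, targets) +
        F.cross_entropy(logits.t(), targets)) / 2
print(loss.item())
```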