no code implementations • 11 Aug 2024 • Shishir Reddy Vutukur, Mengkejiergeli Ba, Benjamin Busam, Matthias Kayser, Gurprit Singh
In this paper, we propose a novel encoder-decoder architecture, named SABER, to learn the 6D pose of an object in the embedding space by learning the shape representation at a given pose.
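A minimal sketch of the pose-conditioned encoder-decoder idea described above, assuming an image encoder, a rotation-conditioned decoder, and a point-set shape output; module names, dimensions, and losses are illustrative assumptions, not the SABER implementation:

```python
import torch
import torch.nn as nn

class ShapeAutoEncoder(nn.Module):
    """Hypothetical sketch: encode an object image into a shape embedding,
    then decode that embedding together with a query pose into a shape
    representation (here a small point set) at that pose."""
    def __init__(self, emb_dim=128, n_points=512):
        super().__init__()
        self.encoder = nn.Sequential(                 # image -> shape embedding
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, emb_dim),
        )
        self.decoder = nn.Sequential(                 # (embedding, pose) -> points
            nn.Linear(emb_dim + 9, 256), nn.ReLU(),   # pose as a flattened 3x3 rotation
            nn.Linear(256, n_points * 3),
        )
        self.n_points = n_points

    def forward(self, image, rotation):
        z = self.encoder(image)                               # (B, emb_dim)
        pose = rotation.flatten(1)                            # (B, 9)
        pts = self.decoder(torch.cat([z, pose], dim=1))       # (B, n_points*3)
        return z, pts.view(-1, self.n_points, 3)

# Usage: the decoder renders the learned shape at any queried rotation, so the
# pose can in principle be recovered by matching decoded shapes in embedding space.
model = ShapeAutoEncoder()
img = torch.randn(2, 3, 128, 128)
R = torch.eye(3).repeat(2, 1, 1)
z, points = model(img, R)
print(z.shape, points.shape)   # torch.Size([2, 128]) torch.Size([2, 512, 3])
```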
no code implementations • CVPR 2023 • Karim Guirguis, Johannes Meier, George Eskandar, Matthias Kayser, Bin Yang, Juergen Beyerer
Our contribution is three-fold: (1) we design a standalone lightweight generator with (2) class-wise heads (3) to generate and replay diverse instance-level base features to the RoI head while finetuning on the novel data.
Data-free Knowledge Distillation • Few-Shot Object Detection • +2
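A minimal sketch of the replay idea in the entry above: a lightweight generator with one head per base class produces instance-level features that are mixed with real novel RoI features during fine-tuning. Class count, feature dimension, and module names are assumptions for illustration, not the paper's exact design:

```python
import torch
import torch.nn as nn

class ClassWiseFeatureGenerator(nn.Module):
    """Hypothetical sketch: map noise through a shared trunk, then through a
    small per-class head to produce class-conditioned base RoI features."""
    def __init__(self, num_base_classes, noise_dim=64, feat_dim=1024):
        super().__init__()
        self.trunk = nn.Sequential(nn.Linear(noise_dim, 256), nn.ReLU())
        self.heads = nn.ModuleList(
            [nn.Linear(256, feat_dim) for _ in range(num_base_classes)]
        )
        self.noise_dim = noise_dim

    def forward(self, class_ids):
        z = torch.randn(len(class_ids), self.noise_dim)
        h = self.trunk(z)
        return torch.stack([self.heads[c](h[i]) for i, c in enumerate(class_ids)])

# Illustrative fine-tuning step: replay generated base features alongside real
# novel RoI features before the classification head of the RoI head.
gen = ClassWiseFeatureGenerator(num_base_classes=60)
base_feats = gen(class_ids=[3, 17, 42])           # replayed base-class instances
novel_feats = torch.randn(8, 1024)                # stand-in for real novel RoI features
roi_batch = torch.cat([novel_feats, base_feats])  # fed jointly to the RoI head
print(roi_batch.shape)                            # torch.Size([11, 1024])
```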
no code implementations • 11 Oct 2022 • Karim Guirguis, Mohamed Abdelsamad, George Eskandar, Ahmed Hendawy, Matthias Kayser, Bin Yang, Juergen Beyerer
We observe that the large performance gap between two-stage and one-stage FSODs is mainly due to the weak discriminability of the latter, which is explained by a small post-fusion receptive field and a small number of foreground samples in the loss function.
Ranked #15 on Few-Shot Object Detection on MS-COCO (10-shot)
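As an illustration of one generic remedy for the small post-fusion receptive field mentioned above, the sketch below stacks dilated convolutions after query-support fusion; it is an assumption for illustration only and not the architecture proposed in the paper:

```python
import torch
import torch.nn as nn

class DilatedPostFusionBlock(nn.Module):
    """Illustrative only: enlarge the receptive field after query/support
    feature fusion by stacking dilated 3x3 convolutions."""
    def __init__(self, channels=256, dilations=(1, 2, 4)):
        super().__init__()
        layers = []
        for d in dilations:
            # padding equal to the dilation keeps the spatial size unchanged
            layers += [nn.Conv2d(channels, channels, 3, padding=d, dilation=d),
                       nn.ReLU(inplace=True)]
        self.net = nn.Sequential(*layers)

    def forward(self, fused):              # fused: (B, C, H, W) post-fusion features
        return self.net(fused)

fused = torch.randn(2, 256, 32, 32)
out = DilatedPostFusionBlock()(fused)
print(out.shape)                           # torch.Size([2, 256, 32, 32])
```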
no code implementations • 11 Apr 2022 • Karim Guirguis, Ahmed Hendawy, George Eskandar, Mohamed Abdelsamad, Matthias Kayser, Juergen Beyerer
In this work, we propose a constraint-based finetuning approach (CFA) to alleviate catastrophic forgetting, while achieving competitive results on the novel task without increasing the model capacity.
Ranked #10 on Few-Shot Object Detection on MS-COCO (10-shot)
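A minimal sketch of the constraint idea behind fine-tuning without forgetting, written here as an A-GEM-style gradient projection; CFA's exact update rule may differ, so treat this purely as an assumption-labeled illustration:

```python
import torch

def constrained_update(grad_novel, grad_base):
    """Illustrative gradient constraint (A-GEM-style), not necessarily CFA's
    exact rule: if the novel-task gradient points against the base-task
    gradient, project it onto the non-conflicting half-space to limit
    catastrophic forgetting of the base classes."""
    dot = torch.dot(grad_novel, grad_base)
    if dot < 0:  # conflict: remove the component that would increase the base loss
        grad_novel = grad_novel - (dot / grad_base.dot(grad_base)) * grad_base
    return grad_novel

g_novel = torch.tensor([1.0, -2.0, 0.5])    # gradient from a novel-class batch
g_base  = torch.tensor([0.5,  1.0, 0.0])    # gradient from a base-class memory batch
print(constrained_update(g_novel, g_base))  # projected gradient applied to the model
```

Note that the constraint only reshapes the gradient, so it adds no parameters and keeps the model capacity unchanged, consistent with the claim above.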
no code implementations • 11 Apr 2022 • Karim Guirguis, George Eskandar, Matthias Kayser, Bin Yang, Juergen Beyerer
First, we leverage a meta-training paradigm in which we learn the domain shift on the base classes and then transfer the domain knowledge to the novel classes.
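A minimal sketch of an episodic meta-training loop that learns a domain shift on base-class data; the synthetic "shifted" view, the alignment loss, and all names are assumptions for illustration, not the paper's method:

```python
import torch
import torch.nn as nn

# Hypothetical episodic loop: simulate a domain shift on base-class batches
# (here a simple perturbation stands in for the target-style view) and train
# the feature extractor to align both views, so the learned alignment can
# later be transferred to the novel classes.
feature_extractor = nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 64, 128))
optimizer = torch.optim.SGD(feature_extractor.parameters(), lr=1e-3)

def simulate_domain_shift(x):
    return x + 0.3 * torch.randn_like(x)       # stand-in for a target-domain view

for episode in range(3):                       # meta-training on base classes only
    source = torch.randn(16, 3, 64, 64)        # stand-in base-class batch
    target = simulate_domain_shift(source)
    f_src, f_tgt = feature_extractor(source), feature_extractor(target)
    loss = (f_src - f_tgt).pow(2).mean()       # align features across domains
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    print(f"episode {episode}: alignment loss {loss.item():.4f}")
```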