no code implementations • 26 Feb 2021 • Reyhan Kevser Keser, Aydin Ayanzadeh, Omid Abdollahi Aghdam, Caglar Kilcioglu, Behcet Ugur Toreyin, Nazim Kemal Ure
One of the most efficient methods for model compression is hint distillation, where the student model is injected with information (hints) from several different layers of the teacher model.
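Below is a minimal sketch of the hint-distillation idea in PyTorch. It is illustrative only, not the paper's implementation: the `HintLoss` module, the FitNets-style 1x1-conv regressor, and the dummy `teacher_feat`/`student_feat` tensors are all assumptions standing in for intermediate feature maps of a real teacher and student.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class HintLoss(nn.Module):
    """L2 loss between a student feature map and a teacher 'hint' layer.

    A 1x1 conv regressor maps the (usually thinner) student features to the
    teacher's channel width so the two maps can be compared directly.
    """

    def __init__(self, student_channels: int, teacher_channels: int):
        super().__init__()
        self.regressor = nn.Conv2d(student_channels, teacher_channels, kernel_size=1)

    def forward(self, student_feat: torch.Tensor, teacher_feat: torch.Tensor) -> torch.Tensor:
        # Teacher features are detached: only the student (and regressor) learn.
        return F.mse_loss(self.regressor(student_feat), teacher_feat.detach())


# Dummy feature maps from matching spatial locations in teacher and student.
teacher_feat = torch.randn(8, 256, 14, 14)  # hint layer of the teacher
student_feat = torch.randn(8, 128, 14, 14)  # corresponding student layer

hint = HintLoss(student_channels=128, teacher_channels=256)
hint_loss = hint(student_feat, teacher_feat)

# In training, one such term per chosen teacher layer is typically added
# (with a weight) to the student's ordinary task loss:
#   total_loss = task_loss + sum(w_i * hint_loss_i)
```

Using several hint layers, as the abstract describes, just means instantiating one such loss per selected teacher/student layer pair and summing the weighted terms.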
no code implementations • 23 Jul 2019 • Omid Abdollahi Aghdam, Behzad Bozorgtabar, Hazim Kemal Ekenel, Jean-Philippe Thiran
By leveraging this information, we use deep face models trained on MS-Celeb-1M and fine-tuned on the VGGFace2 dataset, achieving state-of-the-art accuracy on the SCFace and ICB-RW benchmarks without using any training data from either benchmark.
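A minimal sketch of the train-then-fine-tune recipe follows, under loud assumptions: torchvision's ResNet-50 stands in for the actual deep face model, the checkpoint path `mscelebs_pretrained.pth` is hypothetical, and the head size uses VGGFace2's roughly 8631 training identities.

```python
import torch
import torch.nn as nn
from torchvision import models

# Assumed backbone pretrained on MS-Celeb-1M; the checkpoint name is
# hypothetical and strict=False tolerates a mismatched classifier head.
model = models.resnet50()
state = torch.load("mscelebs_pretrained.pth", map_location="cpu")
model.load_state_dict(state, strict=False)

# Replace the classifier head for the VGGFace2 identities (~8631 in the
# training split) before fine-tuning.
num_identities = 8631
model.fc = nn.Linear(model.fc.in_features, num_identities)

# A common fine-tuning choice (an assumption, not the paper's stated setup):
# a lower learning rate on the pretrained backbone than on the fresh head.
optimizer = torch.optim.SGD(
    [
        {"params": [p for n, p in model.named_parameters() if not n.startswith("fc")],
         "lr": 1e-3},
        {"params": model.fc.parameters(), "lr": 1e-2},
    ],
    momentum=0.9,
    weight_decay=5e-4,
)
```

The cross-dataset claim in the abstract then amounts to evaluating this fine-tuned model directly on SCFace and ICB-RW without any further training on those benchmarks.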