Search Results for author: Ibtihel Amara

Found 4 papers, 0 papers with code

Dynamic Corrective Self-Distillation for Better Fine-Tuning of Pretrained Models

no code implementations · 12 Dec 2023 · Ibtihel Amara, Vinija Jain, Aman Chadha

We tackle the challenging issue of aggressive fine-tuning that arises when transferring pre-trained language models (PLMs) to downstream tasks with limited labeled data.

Transfer Learning
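
A minimal sketch of the self-distillation idea described above, applied during fine-tuning: the model distills toward a frozen snapshot of itself while optimizing the task loss. The EMA snapshot, the mixing weight alpha, and the temperature T are illustrative assumptions, not details taken from the paper.

```python
import copy
import torch
import torch.nn.functional as F

def ema_update(teacher, student, decay=0.999):
    # Fold the student's weights into the teacher snapshot via an
    # exponential moving average (one possible snapshot scheme).
    with torch.no_grad():
        for t_p, s_p in zip(teacher.parameters(), student.parameters()):
            t_p.mul_(decay).add_(s_p, alpha=1.0 - decay)

def self_distill_step(student, teacher, batch, optimizer, alpha=0.5, T=2.0):
    """One fine-tuning step mixing the task loss with a self-distillation
    loss against a frozen snapshot of the same model.
    Setup (hypothetical): teacher = copy.deepcopy(student).eval()"""
    inputs, labels = batch
    logits = student(inputs)
    with torch.no_grad():
        teacher_logits = teacher(inputs)

    task_loss = F.cross_entropy(logits, labels)
    # KL(teacher || student) on temperature-softened distributions.
    distill_loss = F.kl_div(
        F.log_softmax(logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)

    loss = (1.0 - alpha) * task_loss + alpha * distill_loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    ema_update(teacher, student)
    return loss.item()
```

The snapshot acts as a regularizer: with little labeled data, it keeps the fine-tuned model from drifting too far from what it already knows.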

BD-KD: Balancing the Divergences for Online Knowledge Distillation

no code implementations · 25 Dec 2022 · Ibtihel Amara, Nazanin Sepahvand, Brett H. Meyer, Warren J. Gross, James J. Clark

We show that adaptively balancing the reverse and forward divergences shifts the focus of training toward the compact student network without limiting the teacher network's learning process.

Knowledge Distillation · Model Compression · +1
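
A sketch of the forward/reverse balancing idea in an online KD setting, where teacher and student train together. The particular balancing rule (weighting by the relative size of the two divergences) and the temperature are assumptions for illustration; the paper's actual scheme may differ.

```python
import torch
import torch.nn.functional as F

def kl(p_logits, q_logits, T=4.0):
    # KL(p || q) between temperature-softened distributions, scaled by T^2.
    p = F.softmax(p_logits / T, dim=-1)
    log_p = F.log_softmax(p_logits / T, dim=-1)
    log_q = F.log_softmax(q_logits / T, dim=-1)
    return (p * (log_p - log_q)).sum(dim=-1).mean() * (T * T)

def balanced_kd_losses(student_logits, teacher_logits, labels, T=4.0):
    """Online KD losses with a per-batch weight balancing the forward
    divergence (trains the student) and the reverse one (trains the teacher)."""
    fwd = kl(teacher_logits.detach(), student_logits, T)  # gradient -> student
    rev = kl(student_logits.detach(), teacher_logits, T)  # gradient -> teacher
    # Hypothetical adaptive weight: emphasize the student when it lags behind.
    beta = (rev / (fwd + rev + 1e-8)).detach()
    student_loss = F.cross_entropy(student_logits, labels) + (1 - beta) * fwd
    teacher_loss = F.cross_entropy(teacher_logits, labels) + beta * rev
    return student_loss, teacher_loss
```

The two `detach()` calls are the key design point: each divergence term updates only one of the two networks, so rebalancing them redistributes training pressure between student and teacher.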

CES-KD: Curriculum-based Expert Selection for Guided Knowledge Distillation

no code implementations · 15 Sep 2022 · Ibtihel Amara, Maryam Ziaeefard, Brett H. Meyer, Warren Gross, James J. Clark

Knowledge distillation (KD) is an effective tool for compressing deep classification models for edge devices.

Knowledge Distillation
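
One way to read "curriculum-based expert selection" is to pick, per sample, a teacher from a pool according to the sample's difficulty. The sketch below is a guess at that mechanism: the difficulty measure (confidence of the strongest teacher on the ground-truth class) and the bucketing into one teacher per difficulty level are illustrative assumptions, not the paper's procedure.

```python
import torch
import torch.nn.functional as F

def select_expert_logits(teachers, inputs, labels):
    """Return per-sample teacher logits for distillation, chosen from a
    pool of teachers ordered (by assumption) from weakest to strongest."""
    with torch.no_grad():
        # Difficulty: how poorly the strongest teacher predicts the label.
        probs = F.softmax(teachers[-1](inputs), dim=-1)
        difficulty = 1.0 - probs.gather(1, labels.unsqueeze(1)).squeeze(1)
        # Map difficulty in [0, 1] to a teacher index (easy -> smaller expert).
        idx = (difficulty * (len(teachers) - 1)).round().long()
        all_logits = torch.stack([t(inputs) for t in teachers], dim=1)
        chosen = all_logits[torch.arange(inputs.size(0)), idx]
    return chosen  # distill the student against these logits
```

The student is then trained with a standard KD loss against `chosen`, so easy samples are guided by closer-capacity teachers and hard samples by stronger ones.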
