no code implementations • 15 Sep 2022 • Ibtihel Amara, Maryam Ziaeefard, Brett H. Meyer, Warren Gross, James J. Clark
Knowledge distillation (KD) is an effective tool for compressing deep classification models for deployment on edge devices.
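For context, the sketch below shows the standard knowledge-distillation objective (softened teacher/student distributions matched with KL divergence, combined with cross-entropy on the hard labels). The temperature and weighting here are illustrative assumptions, not the exact configuration used in this paper.

```python
import torch
import torch.nn.functional as F

def kd_loss(student_logits, teacher_logits, labels, temperature=4.0, alpha=0.5):
    """Standard distillation loss: soft-label matching plus cross-entropy."""
    # Soften both distributions with the temperature, then match them with KL.
    soft_teacher = F.softmax(teacher_logits / temperature, dim=-1)
    log_soft_student = F.log_softmax(student_logits / temperature, dim=-1)
    distill = F.kl_div(log_soft_student, soft_teacher, reduction="batchmean") * temperature ** 2
    # Usual supervised loss on the hard labels.
    ce = F.cross_entropy(student_logits, labels)
    return alpha * distill + (1.0 - alpha) * ce
```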
no code implementations • 3 Aug 2022 • Danilo Vucetic, Mohammadreza Tayaranian, Maryam Ziaeefard, James J. Clark, Brett H. Meyer, Warren J. Gross
We introduce Learner modules and priming, novel methods for fine-tuning that exploit the overparameterization of pre-trained language models to gain benefits in convergence speed and resource utilization.
no code implementations • 3 May 2022 • Danilo Vucetic, Mohammadreza Tayaranian, Maryam Ziaeefard, James J. Clark, Brett H. Meyer, Warren J. Gross
FAR reduces fine-tuning time on the DistilBERT model and CoLA dataset by 30%, and time spent on memory operations by 47%.
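As a rough illustration of why freezing parameters during fine-tuning reduces time spent on memory operations, here is a minimal sketch of selective parameter freezing in PyTorch. The freezing criterion, layer choice, and hyperparameters are assumptions for illustration only and are not the FAR method itself.

```python
import torch
from transformers import AutoModelForSequenceClassification

model = AutoModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased", num_labels=2
)

# Freeze everything except the classifier head and the last transformer
# layer, so no gradients or optimizer state are kept for the frozen weights.
for name, param in model.named_parameters():
    param.requires_grad = (
        name.startswith("classifier") or "transformer.layer.5" in name
    )

# Only the remaining trainable parameters are handed to the optimizer.
optimizer = torch.optim.AdamW(
    (p for p in model.parameters() if p.requires_grad), lr=2e-5
)
```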
1 code implementation • COLING 2020 • Maryam Ziaeefard, Freddy Lecue
We propose a model that captures the interactions between objects in a visual scene and entities in an external knowledge source.
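As a rough illustration of what capturing interactions between scene objects and external-knowledge entities can look like, here is a minimal cross-attention sketch; the feature dimensions, entity counts, and module choices are assumptions and not the architecture proposed in the paper.

```python
import torch
import torch.nn as nn

obj_dim, ent_dim, hidden = 2048, 300, 512

obj_proj = nn.Linear(obj_dim, hidden)   # project detected object features
ent_proj = nn.Linear(ent_dim, hidden)   # project KG entity embeddings
attn = nn.MultiheadAttention(embed_dim=hidden, num_heads=8, batch_first=True)

objects = torch.randn(2, 36, obj_dim)   # 36 object regions per image
entities = torch.randn(2, 20, ent_dim)  # 20 retrieved KG entities

# Each object attends over the external entities, producing
# knowledge-aware object representations.
knowledge_aware_objects, _ = attn(
    query=obj_proj(objects), key=ent_proj(entities), value=ent_proj(entities)
)
```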
2 code implementations • Findings of the Association for Computational Linguistics 2020 • François Gardères, Maryam Ziaeefard, Baptiste Abeloos, Freddy Lecue
Given an image and a question in natural language, ConceptBert combines visual elements of the image with a Knowledge Graph (KG) to infer the correct answer.
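The sketch below illustrates, in generic form, how question, image, and knowledge-graph embeddings can be fused for answer classification; the dimensions, fusion operator, and answer-vocabulary size are illustrative assumptions, not the ConceptBert design.

```python
import torch
import torch.nn as nn

class KnowledgeAwareVQA(nn.Module):
    def __init__(self, q_dim=768, v_dim=2048, kg_dim=300, hidden=1024, num_answers=3129):
        super().__init__()
        # Project each modality into a shared space.
        self.q_proj = nn.Linear(q_dim, hidden)    # question embedding (e.g. from BERT)
        self.v_proj = nn.Linear(v_dim, hidden)    # pooled visual features
        self.kg_proj = nn.Linear(kg_dim, hidden)  # pooled KG entity embeddings
        self.classifier = nn.Sequential(
            nn.Linear(3 * hidden, hidden), nn.ReLU(), nn.Linear(hidden, num_answers)
        )

    def forward(self, q_emb, v_feat, kg_emb):
        # Late fusion by concatenating the three projected streams.
        fused = torch.cat(
            [self.q_proj(q_emb), self.v_proj(v_feat), self.kg_proj(kg_emb)], dim=-1
        )
        return self.classifier(fused)  # logits over candidate answers

# Example usage with random inputs, batch size 2.
model = KnowledgeAwareVQA()
logits = model(torch.randn(2, 768), torch.randn(2, 2048), torch.randn(2, 300))
```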