1 code implementation • 29 Mar 2024 • Ahmed Agiza, Marina Neseem, Sherief Reda
Adapting models pre-trained on large-scale datasets to a variety of downstream tasks is a common strategy in deep learning.
1 code implementation • 17 Apr 2023 • Marina Neseem, Ahmed Agiza, Sherief Reda
Specifically, we attach a task-aware lightweight policy network to the shared encoder and co-train it alongside the MTL model to recognize unnecessary computations.
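The gating idea above — a small task-conditioned policy deciding which shared-encoder computations to run — can be sketched roughly as follows. All shapes, the linear "blocks", and the hard sign-based gating are illustrative assumptions, not the paper's actual architecture or training procedure:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: 4 shared encoder blocks, 8-dim features, 2 tasks.
NUM_BLOCKS, DIM, NUM_TASKS = 4, 8, 2

# Shared encoder: each block is a simple linear transform (a stand-in
# for a real network layer).
blocks = [rng.standard_normal((DIM, DIM)) * 0.1 for _ in range(NUM_BLOCKS)]

# Task-aware policy: maps a task id to per-block keep/skip logits
# (a single weight matrix here; the paper co-trains a lightweight
# policy network alongside the MTL model).
policy_w = rng.standard_normal((NUM_TASKS, NUM_BLOCKS))

def forward(x, task_id):
    """Run only the blocks the policy keeps for this task."""
    gates = policy_w[task_id] > 0.0  # hard keep/skip decision
    executed = 0
    for keep, w in zip(gates, blocks):
        if keep:  # skip computations deemed unnecessary for this task
            x = np.tanh(x @ w)
            executed += 1
    return x, executed

x = rng.standard_normal(DIM)
for t in range(NUM_TASKS):
    _, n = forward(x, t)
    print(f"task {t}: executed {n}/{NUM_BLOCKS} blocks")
```

In training, the hard gate would be relaxed (e.g. with a differentiable approximation) so the policy can be learned jointly with the task losses.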
1 code implementation • 29 Oct 2021 • Abdelrahman Hosny, Marina Neseem, Sherief Reda
However, the memory footprint of activations is the main bottleneck for training on the edge.
1 code implementation • 16 Aug 2021 • Marina Neseem, Sherief Reda
In particular, our technique clusters the object categories based on their spatial co-occurrence probability.
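Clustering by spatial co-occurrence can be illustrated with a toy sketch: estimate, for each pair of categories, the probability of appearing in the same image, then greedily merge pairs above a threshold. The annotations, category names, threshold, and union-find grouping below are all assumptions for illustration, not the paper's exact algorithm:

```python
import numpy as np

# Toy annotations: sets of categories present per image (a stand-in
# for real detection labels).
images = [
    {"person", "bicycle"},
    {"person", "car"},
    {"fork", "knife", "plate"},
    {"knife", "plate"},
    {"person", "bicycle", "car"},
]
cats = sorted(set().union(*images))
idx = {c: i for i, c in enumerate(cats)}
n = len(cats)

# Co-occurrence probability: P(categories i and j share an image).
co = np.zeros((n, n))
for img in images:
    for a in img:
        for b in img:
            if a != b:
                co[idx[a], idx[b]] += 1
co /= len(images)

# Greedy clustering via union-find: merge category pairs whose
# co-occurrence probability exceeds a (hypothetical) threshold.
parent = list(range(n))

def find(i):
    while parent[i] != i:
        parent[i] = parent[parent[i]]  # path compression
        i = parent[i]
    return i

THRESH = 0.3
for i in range(n):
    for j in range(i + 1, n):
        if co[i, j] >= THRESH:
            parent[find(i)] = find(j)

clusters = {}
for c in cats:
    clusters.setdefault(find(idx[c]), []).append(c)
print(list(clusters.values()))
```

On this toy data, street categories (person, bicycle, car) and tableware categories (knife, plate) fall into separate clusters.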
no code implementations • 10 Jun 2020 • Marina Neseem, Jon Nelson, Sherief Reda
The proposed techniques reduce the power consumption by dynamically switching among different sensor configurations as a function of the user activity.
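The switching idea can be sketched as a policy mapping the detected user activity to a sensor configuration with a matching power budget. The configuration names, sensor parameters, and power numbers below are invented for illustration; they are not measurements from the paper:

```python
# Hypothetical sensor configurations trading sensing richness for power.
CONFIGS = {
    "low":  {"accel_hz": 10,  "gps": False, "power_mw": 5.0},
    "mid":  {"accel_hz": 50,  "gps": False, "power_mw": 12.0},
    "high": {"accel_hz": 100, "gps": True,  "power_mw": 40.0},
}

# Activity-to-configuration policy: cheap sensing suffices when the
# user is still; mobile activities need richer sensing.
POLICY = {"still": "low", "walking": "mid", "driving": "high"}

def select_config(activity):
    """Pick the sensor configuration for the current activity."""
    return CONFIGS[POLICY[activity]]

def avg_power(activity_trace):
    """Average power (mW) over a trace of detected activities."""
    return sum(select_config(a)["power_mw"] for a in activity_trace) / len(activity_trace)

# A user who is mostly still pays far less than the always-on "high" config.
trace = ["still"] * 6 + ["walking"] * 3 + ["driving"]
print(f"{avg_power(trace):.1f} mW")  # → 10.6 mW vs. 40.0 mW always-on
```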