no code implementations • 13 Jun 2023 • Ahmet Caner Yüzügüler, Nikolaos Dimitriadis, Pascal Frossard
Finding optimal channel dimensions (i.e., the number of filters in DNN layers) is essential to designing DNNs that perform well under computational resource constraints.
no code implementations • 4 Mar 2023 • Yamin Sepehri, Pedram Pad, Ahmet Caner Yüzügüler, Pascal Frossard, L. Andrea Dunbar
In this study, a novel hierarchical training method for deep neural networks is proposed that uses early exits in an architecture split between edge and cloud workers to reduce communication cost, training runtime, and privacy concerns.
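The early-exit idea described above can be sketched in a few lines: an edge worker runs the first part of the network and a small exit head; if the exit head is confident, inference stops locally, otherwise the hidden activation is forwarded to the cloud worker. All layer sizes, the confidence threshold, and the weights below are hypothetical placeholders, not the paper's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical weights: an 8-d input, a 16-d hidden layer, 3 output classes.
W_edge = rng.standard_normal((8, 16)) * 0.1   # edge worker: input -> hidden
W_exit = rng.standard_normal((16, 3)) * 0.1   # early-exit head on the edge
W_cloud = rng.standard_normal((16, 3)) * 0.1  # cloud worker: hidden -> logits

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def infer(x, threshold=0.8):
    """Run the edge layers; exit early if the exit head is confident,
    otherwise forward the hidden activation to the cloud worker."""
    h = np.maximum(x @ W_edge, 0.0)        # edge computation (ReLU)
    p_exit = softmax(h @ W_exit)
    if p_exit.max() >= threshold:
        return p_exit, "edge"              # early exit: nothing sent to the cloud
    return softmax(h @ W_cloud), "cloud"   # fall back to the full path

probs, where = infer(rng.standard_normal(8))
```

Samples that exit early never leave the device, which is what saves both communication and reduces how much raw activation data the cloud ever sees.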
1 code implementation • 23 Mar 2022 • Ahmet Caner Yüzügüler, Nikolaos Dimitriadis, Pascal Frossard
Optimizing resource utilization in target platforms is key to achieving high performance during DNN inference.
Tasks: Hardware Aware Neural Architecture Search • Image Classification +1
1 code implementation • 22 Mar 2022 • Ahmet Caner Yüzügüler, Canberk Sönmez, Mario Drumond, Yunho Oh, Babak Falsafi, Pascal Frossard
In this work, we study three key pillars in multi-pod systolic array designs, namely array granularity, interconnect, and tiling.
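Of the three pillars, tiling is the easiest to illustrate: a large matrix multiplication is decomposed into fixed-size blocks, each small enough to map onto one pod's systolic array. The sketch below is an illustrative blocked matmul only; the tile size and loop order are assumptions, not the tiling scheme studied in the paper.

```python
import numpy as np

def tiled_matmul(A, B, tile=4):
    """Compute C = A @ B block by block, mimicking how work is
    partitioned across the pods of a multi-pod systolic array.
    Assumes matrix dimensions are multiples of `tile` for simplicity."""
    M, K = A.shape
    K2, N = B.shape
    assert K == K2, "inner dimensions must match"
    C = np.zeros((M, N))
    for i in range(0, M, tile):          # tile rows of A / C
        for j in range(0, N, tile):      # tile columns of B / C
            for k in range(0, K, tile):  # accumulate over the inner dimension
                C[i:i+tile, j:j+tile] += (
                    A[i:i+tile, k:k+tile] @ B[k:k+tile, j:j+tile]
                )
    return C

A = np.arange(64, dtype=float).reshape(8, 8)
B = np.eye(8)
C = tiled_matmul(A, B)
```

In hardware, each `tile x tile` block product would be dispatched to a pod; array granularity fixes the block size, and the interconnect determines how partial sums move between pods.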