no code implementations • 7 Nov 2023 • Nilotpal Sinha, Abd El Rahman Shabayek, Anis Kacem, Peyman Rostami, Carl Shneider, Djamila Aouada
Our approach re-frames the neural architecture search problem as finding an architecture whose performance is similar to that of a reference model on the target hardware, while adhering to a cost constraint for that hardware.
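The objective described above can be sketched as a constrained selection problem: minimize the gap to the reference model's performance while discarding any candidate that violates the hardware cost budget. This is an illustrative toy, not the authors' actual method; the names `Candidate`, `reference_accuracy`, and `cost_limit` are assumptions.

```python
# Hypothetical sketch of a cost-constrained NAS objective.
# A candidate is scored by its distance to the reference model's
# accuracy; candidates over the hardware cost budget are infeasible.
from dataclasses import dataclass


@dataclass
class Candidate:
    accuracy: float  # estimated task performance of the architecture
    cost: float      # hardware cost on the target device (e.g. latency in ms)


def search_objective(cand, reference_accuracy, cost_limit):
    """Lower is better: distance to the reference performance,
    with an infinite penalty when the cost constraint is violated."""
    if cand.cost > cost_limit:
        return float("inf")  # infeasible on the target hardware
    return abs(reference_accuracy - cand.accuracy)


def pick_best(candidates, reference_accuracy, cost_limit):
    return min(candidates,
               key=lambda c: search_objective(c, reference_accuracy, cost_limit))


pool = [Candidate(0.74, 12.0), Candidate(0.76, 30.0), Candidate(0.71, 9.0)]
best = pick_best(pool, reference_accuracy=0.75, cost_limit=15.0)
# the 0.76-accuracy candidate is excluded: its cost (30.0) exceeds the budget
```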
Hardware Aware Neural Architecture Search • Neural Architecture Search
no code implementations • 19 Jul 2023 • Carl Shneider, Peyman Rostami, Anis Kacem, Nilotpal Sinha, Abd El Rahman Shabayek, Djamila Aouada
Deploying deep learning neural networks on edge devices, to accomplish task-specific objectives in the real world, requires a reduction in their memory footprint, power consumption, and latency.
1 code implementation • 1 Apr 2022 • Nilotpal Sinha, Kuan-Wen Chen
This cost can be reduced by using a supernet to estimate the fitness of an architecture, owing to the weight sharing among all architectures in the search space.
1 code implementation • 3 Mar 2022 • Nilotpal Sinha, Kuan-Wen Chen
This cost can be reduced by using a supernet to estimate the fitness of every architecture in the search space, owing to its weight-sharing nature.
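The weight-sharing idea behind both entries above can be illustrated with a toy supernet: every architecture is a subset of the supernet's operations, so its fitness is estimated from one shared weight pool instead of training each architecture from scratch. The "fitness" below is a placeholder stand-in, not a real training/validation loop.

```python
# Toy illustration of supernet-based fitness estimation via weight sharing.
# All names (shared_weights, sample_architecture, estimate_fitness) are
# illustrative assumptions, not the papers' actual code.
import random

random.seed(0)
N_OPS = 8
# one shared weight per candidate operation in the supernet
shared_weights = {op: random.random() for op in range(N_OPS)}


def sample_architecture(n_choices=3):
    """An architecture is just a subset of the supernet's operations."""
    return random.sample(range(N_OPS), n_choices)


def estimate_fitness(arch):
    """Score an architecture with inherited supernet weights: no
    per-architecture training is needed, which is the cost saving."""
    return sum(shared_weights[op] for op in arch)


population = [sample_architecture() for _ in range(5)]
scores = [estimate_fitness(a) for a in population]
```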
no code implementations • 15 Jul 2021 • Nilotpal Sinha, Kuan-Wen Chen
Evolution-based neural architecture search requires high computational resources, resulting in long search time.
1 code implementation • 23 Dec 2020 • Nilotpal Sinha, Kuan-Wen Chen
The architectures are represented using the architecture parameters of the one-shot model, which results in weight sharing among the architectures of a given population, as well as weight inheritance from one generation of architectures to the next.
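The representation described above can be sketched as follows: each individual in the population is a vector of architecture parameters (as in a one-shot model), so every individual indexes the same shared one-shot weights, and mutating a parent's parameter vector yields a child that inherits those weights unchanged. This is a minimal sketch under assumed names (`random_alpha`, `mutate`, `decode`), not the paper's implementation.

```python
# Hedged sketch: evolving architecture parameters of a one-shot model.
# An individual is a matrix of per-(edge, op) preference scores; the
# discrete architecture is recovered by taking the argmax op per edge.
import random

random.seed(1)
N_EDGES, N_OPS = 4, 3


def random_alpha():
    """Architecture parameters: one preference score per (edge, op) pair."""
    return [[random.random() for _ in range(N_OPS)] for _ in range(N_EDGES)]


def mutate(alpha, scale=0.1):
    """Child keeps the parent's parameters with small perturbations; the
    one-shot model's weights are untouched, hence inherited across
    generations for free."""
    return [[a + random.uniform(-scale, scale) for a in edge] for edge in alpha]


def decode(alpha):
    """Discrete architecture: the highest-scoring op on every edge."""
    return [max(range(N_OPS), key=lambda i: edge[i]) for edge in alpha]


parent = random_alpha()
child = mutate(parent)
```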
Ranked #103 on Neural Architecture Search on ImageNet