no code implementations • 31 Oct 2024 • Daniel May, Alessandro Tundo, Shashikant Ilager, Ivona Brandic
While split computing enables the decomposition of large neural networks (NNs) and allows partial computation on both edge and cloud devices, identifying the most suitable split layer and hardware configurations is a non-trivial task.
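The abstract's notion of a split layer can be illustrated with a minimal sketch: a small fully connected network (random placeholder weights; the paper's actual models, layer types, and profiling method are not specified here) is cut at layer k, the head runs on the "edge", the intermediate tensor is "transferred", and the tail runs on the "cloud". The size of the transferred tensor is one of the quantities that makes the choice of split point non-trivial.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 4-layer fully connected network; weights are random
# placeholders, not the models evaluated in the paper.
layer_dims = [16, 32, 32, 8, 4]
weights = [rng.standard_normal((d_in, d_out))
           for d_in, d_out in zip(layer_dims[:-1], layer_dims[1:])]

def forward(x, ws):
    """Run x through a list of weight matrices with ReLU activations."""
    for w in ws:
        x = np.maximum(x @ w, 0.0)
    return x

def split_forward(x, ws, split_layer):
    """Split computing: run layers [0, split_layer) on the 'edge',
    transfer the intermediate tensor, and finish on the 'cloud'."""
    edge_out = forward(x, ws[:split_layer])          # edge computation
    transferred_bytes = edge_out.nbytes              # uplink transfer cost
    cloud_out = forward(edge_out, ws[split_layer:])  # cloud computation
    return cloud_out, transferred_bytes

x = rng.standard_normal((1, 16))
full = forward(x, weights)

# Sweep the split point: the output never changes, but the amount of
# data sent over the network does.
for k in range(1, len(weights)):
    out, nbytes = split_forward(x, weights, k)
    assert np.allclose(out, full)  # splitting is mathematically lossless
    print(f"split after layer {k}: {nbytes} bytes transferred")
```

In this toy example, splitting after the third layer transfers only an 8-dimensional activation, while earlier splits transfer 32-dimensional ones; real configurations must also weigh device compute capability, which is what makes the joint split-layer/hardware search hard.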
no code implementations • 19 Apr 2024 • Daniel May, Matthew Taylor, Petr Musilek
This study demonstrates the effectiveness of DRL in decentralized grid management, highlighting its scalability and near-optimal performance in reducing net load variability within community-driven energy markets.
no code implementations • 6 Aug 2020 • Elizaveta Kharlova, Daniel May, Petr Musilek
The proposed model builds on two seminal concepts that have driven significant performance improvements in other sequence-related fields of deep learning but have not yet been established in time series prediction: the sequence-to-sequence architecture and the attention mechanism used as a context generator.