2 code implementations • CVPR 2018 • Faraz Saeedan, Nicolas Weber, Michael Goesele, Stefan Roth
Gradually downscaling the spatial size of a convolutional network's hidden layers is commonly referred to as pooling, and is applied to reduce the number of parameters, improve invariance to certain distortions, and increase the receptive field size.
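As a point of reference, the minimal PyTorch sketch below shows the conventional max and average pooling operators this sentence refers to; it illustrates standard pooling only, not the detail-preserving operator proposed in the paper, and the tensor shapes are arbitrary example values.

```python
# Standard pooling in PyTorch (illustration only; not the paper's method).
import torch
import torch.nn as nn

x = torch.randn(1, 3, 32, 32)            # one 3-channel 32x32 feature map

max_pool = nn.MaxPool2d(kernel_size=2, stride=2)
avg_pool = nn.AvgPool2d(kernel_size=2, stride=2)

# Both halve the spatial resolution: later layers operate on a 16x16 grid,
# so each unit covers a larger portion of the input (bigger receptive field)
# and downstream layers need fewer parameters.
print(max_pool(x).shape)  # torch.Size([1, 3, 16, 16])
print(avg_pool(x).shape)  # torch.Size([1, 3, 16, 16])
```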
no code implementations • 23 Apr 2018 • Nicolas Weber, Florian Schmidt, Mathias Niepert, Felipe Huici
Neural network frameworks such as PyTorch and TensorFlow are the workhorses of numerous machine learning applications ranging from object recognition to machine translation.
no code implementations • 19 Oct 2018 • Nicolas Weber, Mathias Niepert, Felipe Huici
While the efficiency problem can be partially addressed with specialized hardware and its corresponding proprietary libraries, we believe that neural network acceleration should be transparent to the user and should support all hardware platforms and deep learning libraries.
no code implementations • 24 Mar 2020 • Nicolas Weber, Felipe Huici
In this paper we explore how to provide hardware support in AI frameworks without changing the framework's source code in order to minimize maintenance overhead.
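As a rough illustration of what "without changing the framework's source code" can mean in practice, the sketch below swaps supported layers of a PyTorch model for wrappers that route their computation to an external kernel at runtime. The `OffloadedLinear` wrapper and the `accelerator_matmul` stub are hypothetical stand-ins for a vendor library, not the mechanism actually described in the paper.

```python
# One possible way to bolt device support onto a framework from the outside
# (assumption: PyTorch; accelerator_matmul is a hypothetical vendor kernel,
# stubbed here with a CPU fallback).
import torch
import torch.nn as nn

def accelerator_matmul(a, b):
    # Hypothetical vendor library call; here we simply fall back to the CPU.
    return a @ b

class OffloadedLinear(nn.Module):
    """Drop-in replacement that routes the matmul to the external kernel."""
    def __init__(self, original: nn.Linear):
        super().__init__()
        self.weight = original.weight
        self.bias = original.bias

    def forward(self, x):
        out = accelerator_matmul(x, self.weight.t())
        return out + self.bias if self.bias is not None else out

def offload(model: nn.Module) -> nn.Module:
    # Swap supported layers in place -- the framework's own code is untouched.
    for name, child in model.named_children():
        if isinstance(child, nn.Linear):
            setattr(model, name, OffloadedLinear(child))
        else:
            offload(child)
    return model

model = offload(nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 4)))
print(model(torch.randn(2, 8)).shape)  # torch.Size([2, 4])
```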
no code implementations • 19 May 2022 • Nicolas Weber
While mainstream CPUs and GPUs have the "luxury" of a widespread user base in the open source community, vendors of less mainstream CPUs, GPUs, or accelerators must invest considerable effort to get their hardware supported by these frameworks.