Search Results for author: Nicolas Weber

Found 5 papers, 1 paper with code

Detail-Preserving Pooling in Deep Networks

2 code implementations • CVPR 2018 • Faraz Saeedan, Nicolas Weber, Michael Goesele, Stefan Roth

Gradually downscaling the spatial size of the hidden layers is commonly referred to as pooling; it is applied to reduce the number of parameters, improve invariance to certain distortions, and increase the receptive field size.
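To illustrate the downscaling effect the abstract describes, here is a minimal sketch of plain 2x2 max pooling in NumPy (standard pooling for reference, not the paper's Detail-Preserving Pooling):

```python
import numpy as np

def max_pool_2x2(x):
    """Standard 2x2 max pooling with stride 2 (illustrative; NOT the
    paper's DPP operator): keeps the largest activation in each 2x2
    window, halving each spatial dimension and thereby enlarging the
    effective receptive field of subsequent layers."""
    h, w = x.shape
    # Crop to even dimensions, then group into 2x2 blocks and take the max.
    x = x[:h - h % 2, :w - w % 2]
    return x.reshape(x.shape[0] // 2, 2, x.shape[1] // 2, 2).max(axis=(1, 3))

x = np.arange(16, dtype=float).reshape(4, 4)
y = max_pool_2x2(x)
# y has shape (2, 2): [[ 5.,  7.], [13., 15.]]
```

The paper's contribution is a learned alternative to such fixed operators that better preserves fine detail while still downscaling.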

BrainSlug: Transparent Acceleration of Deep Learning Through Depth-First Parallelism

no code implementations • 23 Apr 2018 • Nicolas Weber, Florian Schmidt, Mathias Niepert, Felipe Huici

Neural network frameworks such as PyTorch and TensorFlow are the workhorses of numerous machine learning applications ranging from object recognition to machine translation.

Machine Translation • Object Recognition • +1

Towards Transparent Neural Network Acceleration

no code implementations • 19 Oct 2018 • Nicolas Weber, Mathias Niepert, Felipe Huici

While the efficiency problem can be partially addressed with specialized hardware and its corresponding proprietary libraries, we believe that neural network acceleration should be transparent to the user and should support all hardware platforms and deep learning libraries.

Object Detection

SOL: Effortless Device Support for AI Frameworks without Source Code Changes

no code implementations • 24 Mar 2020 • Nicolas Weber, Felipe Huici

In this paper we explore how to provide hardware support in AI frameworks without changing the frameworks' source code, in order to minimize maintenance overhead.

SOL: Reducing the Maintenance Overhead for Integrating Hardware Support into AI Frameworks

no code implementations • 19 May 2022 • Nicolas Weber

While mainstream CPUs and GPUs have the "luxury" of a widespread user base in the open source community, vendors of less mainstream CPUs, GPUs, or accelerators must invest considerable effort to get their hardware supported by these frameworks.
