Search Results for author: David Peer

Found 11 papers, 9 papers with code

ANLS* -- A Universal Document Processing Metric for Generative Large Language Models

1 code implementation • 6 Feb 2024 • David Peer, Philemon Schöpf, Volckmar Nebendahl, Alexander Rietzler, Sebastian Stabinger

However, evaluating GLLMs presents a challenge as the binary true or false evaluation used for discriminative models is not applicable to the predictions made by GLLMs.

Document Classification
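ANLS* builds on the ANLS score that is standard in document-VQA benchmarks: a prediction earns 1 minus the normalized Levenshtein distance to the ground truth, clipped to 0 below a threshold. Below is a minimal sketch of that base score; the 0.5 threshold follows the usual ANLS convention, and the function names are illustrative rather than the authors' API, which additionally generalizes the score to more output types.

```python
# Minimal sketch of the classic ANLS score that ANLS* builds on.
# The 0.5 threshold is the common ANLS convention; names are illustrative.

def levenshtein(a: str, b: str) -> int:
    """Edit distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

def anls(prediction: str, ground_truth: str, threshold: float = 0.5) -> float:
    """1 - normalized edit distance, clipped to 0 below the threshold."""
    p, g = prediction.strip().lower(), ground_truth.strip().lower()
    if not p and not g:
        return 1.0
    nl = levenshtein(p, g) / max(len(p), len(g))
    score = 1.0 - nl
    return score if score >= threshold else 0.0

print(anls("David Peer", "david peer"))  # 1.0
print(anls("hello", "help"))             # 0.6
```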

Affordance detection with Dynamic-Tree Capsule Networks

1 code implementation • 9 Nov 2022 • Antonio Rodríguez-Sánchez, Simon Haller-Seeber, David Peer, Chris Engelhardt, Jakob Mittelberger, Matteo Saveriano

In the experimental evaluation, we show that our algorithm is superior to current affordance detection methods when grasping previously unseen objects, thanks to our Capsule Network enforcing a parts-to-whole representation.

Affordance Detection

Improving the Trainability of Deep Neural Networks through Layerwise Batch-Entropy Regularization

2 code implementations • 1 Aug 2022 • David Peer, Bart Keulen, Sebastian Stabinger, Justus Piater, Antonio Rodríguez-Sánchez

We show empirically that we can therefore train a "vanilla" fully connected network and convolutional neural network -- no skip connections, batch normalization, dropout, or any other architectural tweak -- with 500 layers by simply adding the batch-entropy regularization term to the loss function.
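A minimal PyTorch sketch of that regularization idea follows, assuming a Gaussian entropy estimate per unit over the batch; the fixed entropy target and weight are simplifications, not the paper's exact layerwise batch-entropy (LBE) formulation.

```python
# Sketch only: Gaussian-assumption batch entropy per unit, penalized toward
# a fixed target. The paper's LBE regularizer is more involved (learnable
# targets, normalization), so treat this as illustrative.
import math
import torch

def batch_entropy(x: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    """Mean differential entropy of each unit across the batch,
    under a Gaussian assumption: H = 0.5 * ln(2*pi*e*var)."""
    var = x.var(dim=0) + eps  # per-unit variance over the batch
    return (0.5 * torch.log(2 * math.pi * math.e * var)).mean()

def lbe_loss(activations: list[torch.Tensor], target: float = 1.0,
             weight: float = 0.1) -> torch.Tensor:
    """Penalize layers whose batch entropy drifts from a target value."""
    penalties = [(batch_entropy(a) - target) ** 2 for a in activations]
    return weight * torch.stack(penalties).mean()

# total_loss = task_loss + lbe_loss(per_layer_activations)
```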

Momentum Capsule Networks

1 code implementation • 26 Jan 2022 • Josef Gugglberger, David Peer, Antonio Rodríguez-Sánchez

MoCapsNets are inspired by Momentum ResNets, a type of network that applies reversible residual building blocks.
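For context, a momentum residual block keeps a velocity state and can be inverted exactly, so intermediate activations need not be stored for backpropagation. Below is a small sketch of the update and its inverse, following the Momentum ResNet formulation the abstract refers to; the residual function f and the value of gamma are placeholders, not MoCapsNet's capsule layers.

```python
# Sketch of a momentum residual step and its exact inverse; f and gamma
# are placeholders standing in for the real building blocks.
import torch

def momentum_forward(x, v, f, gamma=0.9):
    """v' = gamma*v + (1-gamma)*f(x);  x' = x + v'."""
    v = gamma * v + (1 - gamma) * f(x)
    return x + v, v

def momentum_inverse(x_next, v_next, f, gamma=0.9):
    """Recover (x, v) exactly, so activations need not be stored."""
    x = x_next - v_next
    v = (v_next - (1 - gamma) * f(x)) / gamma
    return x, v

f = torch.nn.Linear(8, 8)
x0, v0 = torch.randn(4, 8), torch.zeros(4, 8)
x1, v1 = momentum_forward(x0, v0, f)
x0_rec, v0_rec = momentum_inverse(x1, v1, f)
assert torch.allclose(x0, x0_rec, atol=1e-5)
assert torch.allclose(v0, v0_rec, atol=1e-5)
```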

Greedy-layer Pruning: Speeding up Transformer Models for Natural Language Processing

1 code implementation • 31 May 2021 • David Peer, Sebastian Stabinger, Stefan Engl, Antonio Rodriguez-Sanchez

Knowledge distillation maintains high performance and reaches high compression rates; nevertheless, the size of the student model is fixed after pre-training and cannot be changed individually for a given downstream task and use case to reach a desired performance/speedup trade-off.

Knowledge Distillation • Unsupervised Pre-training
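The greedy procedure itself is simple to state: repeatedly remove whichever remaining layer hurts downstream performance least. Here is a schematic sketch under two hypothetical helpers, drop_layer and evaluate; the actual pipeline in the paper prunes transformer encoder layers and re-evaluates on the downstream task.

```python
# Schematic greedy-layer pruning loop; drop_layer and evaluate are
# hypothetical helpers standing in for the paper's actual pipeline.

def greedy_layer_prune(model, num_layers_to_remove, drop_layer, evaluate):
    for _ in range(num_layers_to_remove):
        # Try removing each remaining layer; keep the removal that
        # hurts validation performance the least.
        candidates = [(evaluate(drop_layer(model, i)), i)
                      for i in range(model.num_layers)]
        best_score, best_i = max(candidates)
        model = drop_layer(model, best_i)
    return model
```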

Training Deep Capsule Networks with Residual Connections

1 code implementation • 15 Apr 2021 • Josef Gugglberger, David Peer, Antonio Rodriguez-Sanchez

Capsule networks are a type of neural network that has recently gained popularity.

Auto-tuning of Deep Neural Networks by Conflicting Layer Removal

1 code implementation • 7 Mar 2021 • David Peer, Sebastian Stabinger, Antonio Rodriguez-Sanchez

We prove that, in the worst case, such a layer can lead to a network that cannot be trained at all.

Neural Architecture Search

Arguments for the Unsuitability of Convolutional Neural Networks for Non-Local Tasks

no code implementations • 23 Feb 2021 • Sebastian Stabinger, David Peer, Antonio Rodríguez-Sánchez

Convolutional neural networks have established themselves in recent years as the state-of-the-art method for image classification, and on many datasets they even surpass humans at categorizing images.

Image Classification

Conflicting Bundles: Adapting Architectures Towards the Improved Training of Deep Neural Networks

1 code implementation • 5 Nov 2020 • David Peer, Sebastian Stabinger, Antonio Rodriguez-Sanchez

In this paper, we introduce a novel theory and metric to identify layers that decrease the test accuracy of trained models; this identification is possible as early as the beginning of training.
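The intuition behind the metric: if a layer maps two samples with different labels to numerically identical activations, no later layer can separate them, and their gradients conflict. A toy detector along those lines is sketched below; the rounding-based grouping is a simplification of the paper's floating-point equality criterion.

```python
# Toy conflicting-bundle detector: batch samples whose activations at a
# layer are numerically indistinguishable form a bundle; the bundle
# "conflicts" if its samples carry different labels.
from collections import defaultdict
import torch

def conflicting_bundles(activations: torch.Tensor, labels: torch.Tensor,
                        decimals: int = 4) -> int:
    """Count bundles that mix labels, given (batch, features) activations."""
    bundles = defaultdict(set)
    keys = torch.round(activations, decimals=decimals)
    for key, label in zip(keys, labels):
        bundles[tuple(key.tolist())].add(int(label))
    return sum(1 for labs in bundles.values() if len(labs) > 1)
```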

Limitation of capsule networks

no code implementations • 21 May 2019 • David Peer, Sebastian Stabinger, Antonio Rodriguez-Sanchez

A recently proposed method in deep learning groups multiple neurons into capsules such that each capsule represents an object or part of an object.
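Concretely, in the standard capsule formulation this line of work builds on (Sabour et al.), each capsule is a vector of neurons whose length encodes the probability that the represented entity is present, enforced by the squash nonlinearity sketched below; the shapes here are illustrative.

```python
# The squash nonlinearity from standard capsule networks: rescales each
# capsule vector so its length lies in (0, 1) and can act as an existence
# probability, while preserving its orientation.
import torch

def squash(s: torch.Tensor, dim: int = -1, eps: float = 1e-8) -> torch.Tensor:
    sq_norm = (s ** 2).sum(dim=dim, keepdim=True)
    scale = sq_norm / (1.0 + sq_norm)
    return scale * s / torch.sqrt(sq_norm + eps)

capsules = torch.randn(32, 10, 16)  # (batch, capsules, capsule_dim)
v = squash(capsules)
print(v.norm(dim=-1).max())         # all lengths < 1
```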

Increasing the adversarial robustness and explainability of capsule networks with $\gamma$-capsules

1 code implementation • 23 Dec 2018 • David Peer, Sebastian Stabinger, Antonio Rodriguez-Sanchez

In this paper we introduce a new inductive bias for capsule networks and call networks that use this prior $\gamma$-capsule networks.

Adversarial Robustness • Inductive Bias
