Search Results for author: Michael Paulitsch

Found 12 papers, 3 papers with code

Mesh2NeRF: Direct Mesh Supervision for Neural Radiance Field Representation and Generation

no code implementations • 28 Mar 2024 • Yujin Chen, Yinyu Nie, Benjamin Ummenhofer, Reiner Birkl, Michael Paulitsch, Matthias Müller, Matthias Nießner

In Mesh2NeRF, we propose an analytic solution to directly obtain ground-truth radiance fields from 3D meshes: the density field is characterized by an occupancy function with a defined surface thickness, and the view-dependent color is determined by a reflection function that accounts for both the mesh and the environment lighting.

3D Generation
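To illustrate the occupancy-style density described above, here is a minimal sketch, assuming a signed-distance query to the mesh and a hypothetical surface-thickness parameter; the paper's exact analytic formulation may differ.

    import numpy as np

    def occupancy_density(signed_dist, thickness=0.01, sigma_max=1e4):
        # Occupancy-style density: a large constant value inside a thin shell
        # of width `thickness` around the mesh surface, zero elsewhere.
        # `signed_dist` is the signed distance from the query point to the mesh.
        inside_shell = np.abs(signed_dist) <= 0.5 * thickness
        return np.where(inside_shell, sigma_max, 0.0)

    def alpha_from_density(sigma, delta):
        # Standard NeRF alpha for a ray segment of length `delta`.
        return 1.0 - np.exp(-sigma * delta)

    # A sample 2 cm off the surface contributes nothing; one on the surface saturates.
    print(alpha_from_density(occupancy_density(np.array([0.02, 0.0])), delta=0.005))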

Fully-fused Multi-Layer Perceptrons on Intel Data Center GPUs

1 code implementation • 26 Mar 2024 • Kai Yuan, Christoph Bauinger, Xiangyi Zhang, Pascal Baehr, Matthias Kirchhart, Darius Dabert, Adrien Tousnakhoff, Pierre Boudier, Michael Paulitsch

We compare our approach to a similar CUDA implementation for MLPs and show that our implementation on the Intel Data Center GPU outperforms the CUDA implementation on Nvidia's H100 GPU by a factor of up to 2.84 in inference and 1.75 in training.

Image Compression • Physics-informed machine learning
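The fully-fused design keeps an entire MLP forward pass inside one kernel so intermediate activations stay in on-chip memory. Below is a conceptual NumPy sketch of that per-tile loop; the layer widths and tile size are hypothetical, and the actual implementation is a SYCL device kernel rather than host-side Python.

    import numpy as np

    def fused_mlp_forward(x_tile, weights, act=lambda z: np.maximum(z, 0.0)):
        # All layers are applied to one batch tile inside a single loop, so the
        # intermediate activation `h` never leaves the (simulated) on-chip buffer.
        h = x_tile
        for i, w in enumerate(weights):
            h = h @ w
            if i < len(weights) - 1:  # hidden layers use the activation (ReLU here)
                h = act(h)
        return h

    # Hypothetical 4-layer, width-64 network applied to a 128-row batch tile.
    rng = np.random.default_rng(0)
    weights = [rng.standard_normal((64, 64)) * 0.1 for _ in range(4)]
    print(fused_mlp_forward(rng.standard_normal((128, 64)), weights).shape)  # (128, 64)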

LDM3D-VR: Latent Diffusion Model for 3D VR

no code implementations • 6 Nov 2023 • Gabriela Ben Melech Stan, Diana Wofk, Estelle Aflalo, Shao-Yen Tseng, Zhipeng Cai, Michael Paulitsch, Vasudev Lal

Our models are fine-tuned from existing pretrained models on datasets containing panoramic/high-resolution RGB images, depth maps and captions.
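A minimal usage sketch of the released pipeline, assuming the StableDiffusionLDM3DPipeline integration in Hugging Face diffusers and the Intel/ldm3d-4c checkpoint name (both as I recall them from the public model card; the LDM3D-VR work also covers panoramic and super-resolution variants not shown here):

    from diffusers import StableDiffusionLDM3DPipeline

    # Load the pretrained RGB+depth latent diffusion pipeline (checkpoint name assumed).
    pipe = StableDiffusionLDM3DPipeline.from_pretrained("Intel/ldm3d-4c")

    output = pipe("A fishing boat anchored in a calm harbor at dawn")
    rgb, depth = output.rgb[0], output.depth[0]  # paired RGB image and depth map
    rgb.save("harbor_rgb.png")
    depth.save("harbor_depth.png")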

A Low-cost Strategic Monitoring Approach for Scalable and Interpretable Error Detection in Deep Neural Networks

no code implementations • 31 Oct 2023 • Florian Geissler, Syed Qutub, Michael Paulitsch, Karthik Pattabiraman

We present a highly compact run-time monitoring approach for deep computer vision networks that extracts selected knowledge from only a few (down to merely two) hidden layers, yet can efficiently detect silent data corruption originating from both hardware memory and input faults.

Anomaly Detection
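As a rough illustration of monitoring only a few hidden layers, the following PyTorch sketch hooks two layers of a ResNet-18 and flags activations that exceed calibrated bounds; the layer choice, the max-activation summary, and the bound values are placeholders, not the paper's actual feature extraction or detector.

    import torch
    import torchvision

    model = torchvision.models.resnet18(weights=None).eval()
    monitored = {"layer2": model.layer2, "layer4": model.layer4}  # "only a few" hidden layers
    summaries = {}

    def make_hook(name):
        def hook(module, inputs, output):
            # Compact per-layer summary: maximum absolute activation value.
            summaries[name] = output.detach().abs().amax().item()
        return hook

    for name, layer in monitored.items():
        layer.register_forward_hook(make_hook(name))

    # Bounds would be calibrated on fault-free data; the values here are placeholders.
    bounds = {"layer2": 50.0, "layer4": 50.0}

    with torch.no_grad():
        model(torch.randn(1, 3, 224, 224))

    suspicious = any(summaries[name] > bounds[name] for name in monitored)
    print(summaries, "-> fault suspected" if suspicious else "-> looks clean")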

Large-Scale Application of Fault Injection into PyTorch Models -- an Extension to PyTorchFI for Validation Efficiency

1 code implementation • 30 Oct 2023 • Ralf Graafe, Qutub Syed Sha, Florian Geissler, Michael Paulitsch

Transient or permanent faults in hardware can render the output of Neural Networks (NN) incorrect without user-specific traces of the error, i.e., silent data errors (SDE).
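For intuition on the kind of fault being injected, this sketch flips a single bit in one float32 weight of a PyTorch model; it is a hand-rolled illustration, not the PyTorchFI extension's API, and the chosen layer, index, and bit position are arbitrary.

    import struct
    import torch
    import torchvision

    def flip_bit(value, bit):
        # Flip one bit in the IEEE-754 float32 representation of `value`.
        (as_int,) = struct.unpack("I", struct.pack("f", value))
        (flipped,) = struct.unpack("f", struct.pack("I", as_int ^ (1 << bit)))
        return flipped

    model = torchvision.models.resnet18(weights=None).eval()
    with torch.no_grad():
        weight = model.conv1.weight                      # arbitrary target layer
        original = weight[0, 0, 0, 0].item()
        weight[0, 0, 0, 0] = flip_bit(original, bit=30)  # exponent bit -> large deviation
    print(original, "->", model.conv1.weight[0, 0, 0, 0].item())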

BEA: Revisiting anchor-based object detection DNN using Budding Ensemble Architecture

no code implementations • 14 Sep 2023 • Syed Sha Qutub, Neslihan Kose, Rafael Rosales, Michael Paulitsch, Korbinian Hagn, Florian Geissler, Yang Peng, Gereon Hinz, Alois Knoll

The proposed loss functions in BEA improve confidence-score calibration and lower the uncertainty error, resulting in a better distinction between true and false positives and, ultimately, higher accuracy of the object detection models.

Object Detection +1
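A small, generic sketch of the effect being claimed: better-calibrated confidence scores separate true positives from false positives more cleanly. The detection confidences below are made up, and the BEA loss functions themselves are not reproduced here.

    import numpy as np

    def tp_fp_separation(tp_conf, fp_conf):
        # Probability that a random true positive scores higher than a random
        # false positive (1.0 = perfect separation, 0.5 = chance level).
        tp = np.asarray(tp_conf, dtype=float)[:, None]
        fp = np.asarray(fp_conf, dtype=float)[None, :]
        return float(((tp > fp) + 0.5 * (tp == fp)).mean())

    # Made-up detection confidences: calibration pushes FP confidences down.
    baseline = tp_fp_separation([0.9, 0.7, 0.6], [0.8, 0.5, 0.4])
    calibrated = tp_fp_separation([0.9, 0.8, 0.7], [0.4, 0.3, 0.2])
    print(baseline, calibrated)  # ~0.78 vs 1.0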

Exploring Resiliency to Natural Image Corruptions in Deep Learning using Design Diversity

no code implementations • 15 Mar 2023 • Rafael Rosales, Pablo Munoz, Michael Paulitsch

We compare ensembles created with diverse model architectures trained either independently or through a Neural Architecture Search technique and evaluate the correlation of prediction-based and attribution-based diversity to the final ensemble accuracy.

Neural Architecture Search
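A toy sketch of one of the quantities compared above, prediction-based diversity, and its correlation with ensemble accuracy; the predictions and accuracy numbers are made up purely for illustration, and attribution-based diversity (e.g., from saliency maps) is not shown.

    import numpy as np

    def prediction_diversity(preds_a, preds_b):
        # Prediction-based diversity: fraction of inputs on which two ensemble
        # members predict different classes.
        return float(np.mean(np.asarray(preds_a) != np.asarray(preds_b)))

    # Hypothetical study: per-ensemble diversity vs. per-ensemble accuracy.
    diversities = [prediction_diversity([0, 1, 2, 1], [0, 1, 2, 1]),   # identical members
                   prediction_diversity([0, 1, 2, 1], [0, 2, 2, 0]),   # somewhat diverse
                   prediction_diversity([0, 1, 2, 1], [1, 2, 0, 0])]   # fully diverse
    accuracies = [0.80, 0.86, 0.84]                                    # illustrative only

    # Correlation of diversity with final ensemble accuracy (the relationship studied above).
    print(np.corrcoef(diversities, accuracies)[0, 1])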

Evaluation of Confidence-based Ensembling in Deep Learning Image Classification

no code implementations • 3 Mar 2023 • Rafael Rosales, Peter Popov, Michael Paulitsch

The key idea is to create successive model experts for samples that the preceding model found difficult (not necessarily misclassified).

Binary Classification • Classification +2
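One way to read such confidence-based routing at inference time is as a cascade: each later "expert" only handles inputs the previous model was not confident about. The sketch below uses a max-softmax threshold and two toy linear classifiers; the paper's actual training of successive experts on difficult samples is not shown, and the threshold is arbitrary.

    import torch
    import torch.nn.functional as F

    def confidence_cascade(models, x, threshold=0.9):
        # Run successive experts; stop as soon as one is confident enough.
        probs = None
        for model in models:
            with torch.no_grad():
                probs = F.softmax(model(x), dim=-1)
            if probs.max().item() >= threshold:
                break
        return probs.argmax(dim=-1)

    # Toy usage: two tiny "experts" for a 10-d input and 3 classes.
    experts = [torch.nn.Linear(10, 3), torch.nn.Linear(10, 3)]
    print(confidence_cascade(experts, torch.randn(1, 10)))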

Reliable Multimodal Trajectory Prediction via Error Aligned Uncertainty Optimization

no code implementations • 9 Dec 2022 • Neslihan Kose, Ranganath Krishnan, Akash Dhamasia, Omesh Tickoo, Michael Paulitsch

Reliable uncertainty quantification in deep neural networks is crucial for trustworthy and informed decision-making in safety-critical applications such as automated driving.

Decision Making • Motion Prediction +3

Hardware faults that matter: Understanding and Estimating the safety impact of hardware faults on object detection DNNs

1 code implementation • 7 Sep 2022 • Syed Qutub, Florian Geissler, Yang Peng, Ralf Grafe, Michael Paulitsch, Gereon Hinz, Alois Knoll

The evaluation of several representative object detection models shows that even a single bit flip can lead to a severe silent data corruption event with potentially critical safety implications, with, e.g., up to ≫ 100 FPs generated, or up to approx.

Object Detection +1
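To make the false-positive count above concrete, here is a small sketch that compares a fault-free detection run with a faulty one and counts detections that match no fault-free box (IoU below 0.5); the boxes and threshold are illustrative, not the paper's evaluation pipeline.

    def iou(a, b):
        # Intersection-over-union of two boxes given as [x1, y1, x2, y2].
        x1, y1 = max(a[0], b[0]), max(a[1], b[1])
        x2, y2 = min(a[2], b[2]), min(a[3], b[3])
        inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
        area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
        return inter / (area(a) + area(b) - inter + 1e-9)

    def fault_induced_fps(reference_boxes, faulty_boxes, thr=0.5):
        # Detections from the faulty run that match no fault-free detection.
        return sum(all(iou(f, r) < thr for r in reference_boxes) for f in faulty_boxes)

    reference = [[10, 10, 50, 50]]                                         # fault-free run
    faulty = [[12, 11, 51, 49], [200, 200, 260, 260], [300, 40, 340, 90]]  # after bit flip
    print(fault_induced_fps(reference, faulty))  # 2 spurious detections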

Towards a Safety Case for Hardware Fault Tolerance in Convolutional Neural Networks Using Activation Range Supervision

no code implementations • 16 Aug 2021 • Florian Geissler, Syed Qutub, Sayanta Roychowdhury, Ali Asgari, Yang Peng, Akash Dhamasia, Ralf Graefe, Karthik Pattabiraman, Michael Paulitsch

Convolutional neural networks (CNNs) have become an established part of numerous safety-critical computer vision applications, including human-robot interactions and automated driving.

A Plausibility-based Fault Detection Method for High-level Fusion Perception Systems

no code implementations • 30 Sep 2020 • Florian Geissler, Alex Unnervik, Michael Paulitsch

Trustworthy environment perception is the fundamental basis for the safe deployment of automated agents such as self-driving vehicles or intelligent robots.

Fault Detection
