Search Results for author: Harald Ruess

Found 8 papers, 0 papers with code

Formal Specification, Assessment, and Enforcement of Fairness for Generative AIs

no code implementations25 Apr 2024 Chih-Hong Cheng, Changshun Wu, Harald Ruess, Xingyu Zhao, Saddek Bensalem

The risk of reinforcing or exacerbating societal biases and inequalities is growing as generative AI increasingly produces content that resembles human output, from text to images and beyond.

Safety Performance of Neural Networks in the Presence of Covariate Shift

no code implementations24 Jul 2023 Chih-Hong Cheng, Harald Ruess, Konstantinos Theodorou

The reshaped test set reflects the distribution of neuron activation values as observed during operation, and may therefore be used for re-evaluating safety performance in the presence of covariate shift.
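
The following is a minimal sketch, not the authors' exact procedure, of one way such a reshaping could work: the test set is resampled so that the histogram of a monitored neuron's activation values matches the histogram observed during operation. The single monitored neuron, the binning scheme, and the importance-weighted resampling are all illustrative assumptions.

```python
import numpy as np

def reshape_test_set(test_activations, op_activations, n_bins=20, seed=0):
    """Resample test indices so activation values mimic the operational distribution (sketch)."""
    rng = np.random.default_rng(seed)
    edges = np.histogram_bin_edges(
        np.concatenate([test_activations, op_activations]), bins=n_bins)
    test_hist, _ = np.histogram(test_activations, bins=edges)
    op_hist, _ = np.histogram(op_activations, bins=edges)
    # Importance weight per bin: operational frequency relative to test frequency.
    per_bin = (op_hist + 1e-9) / (test_hist + 1e-9)
    bins = np.clip(np.digitize(test_activations, edges) - 1, 0, n_bins - 1)
    weights = per_bin[bins]
    weights /= weights.sum()
    # Safety metrics (e.g. accuracy) can then be re-evaluated on the resampled indices.
    return rng.choice(len(test_activations), size=len(test_activations), replace=True, p=weights)
```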

Towards Rigorous Design of OoD Detectors

no code implementations14 Jun 2023 Chih-Hong Cheng, Changshun Wu, Harald Ruess, Saddek Bensalem

Out-of-distribution (OoD) detection techniques are instrumental for safety-related neural networks.

Out of Distribution (OOD) Detection

Evidential Transactions with Cyberlogic

no code implementations20 Mar 2023 Harald Ruess, Natarajan Shankar

The key ideas underlying Cyberlogic are extremely simple: (1) public keys correspond to authorizations, (2) transactions are specified as distributed logic programs, and (3) verifiable evidence is collected by means of distributed proof search.
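
As a purely illustrative sketch of these three ideas (not the paper's notation or system), the snippet below treats attesting principals as stand-ins for public-key authorizations, encodes a transaction as a logic-program rule, and collects supporting evidence during backward-chaining proof search; the fact structure, rule format, and example data are assumptions.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Fact:
    principal: str   # who attests (conceptually, a public-key authorization)
    statement: str   # what is attested
    evidence: str    # e.g. a signature or document hash

# Attested facts and one transaction rule of the form head <- body (hypothetical example).
FACTS = {
    Fact("registry_key", "vehicle_registered", "sig:abc123"),
    Fact("insurer_key", "vehicle_insured", "sig:def456"),
}
RULES = {"transfer_authorized": ["vehicle_registered", "vehicle_insured"]}

def prove(goal, facts=FACTS, rules=RULES):
    """Backward-chaining proof search; returns the supporting evidence chain or None."""
    for fact in facts:
        if fact.statement == goal:
            return [fact]
    for head, body in rules.items():
        if head != goal:
            continue
        collected = []
        for subgoal in body:
            sub = prove(subgoal, facts, rules)
            if sub is None:
                break
            collected.extend(sub)
        else:
            return collected
    return None

print(prove("transfer_authorized"))  # evidence justifying the transaction, or None
```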

Towards Dependability Metrics for Neural Networks

no code implementations6 Jun 2018 Chih-Hong Cheng, Georg Nührenberg, Chung-Hao Huang, Harald Ruess, Hirotoshi Yasuoka

Artificial neural networks (NN) are instrumental in realizing highly automated driving functionality.

Verification of Binarized Neural Networks via Inter-Neuron Factoring

no code implementations9 Oct 2017 Chih-Hong Cheng, Georg Nührenberg, Chung-Hao Huang, Harald Ruess

We study the problem of formal verification of Binarized Neural Networks (BNN), which have recently been proposed as an energy-efficient alternative to traditional learning networks.
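
As a toy illustration of why BNN verification is amenable to propositional reasoning (this is not the paper's inter-neuron factoring technique; the weights, threshold, and checked property are made up), a single binarized neuron over +/-1 inputs can be analysed by exhaustive enumeration:

```python
from itertools import product

WEIGHTS = [1, -1, 1, 1, -1]   # +/-1 weights of one binarized neuron (hypothetical)
THRESHOLD = 0                 # neuron outputs +1 iff the weighted sum >= threshold

def neuron(x):
    return 1 if sum(w * xi for w, xi in zip(WEIGHTS, x)) >= THRESHOLD else -1

# Exhaustively check how many inputs are sensitive to a single bit flip.
sensitive = 0
for x in product([-1, 1], repeat=len(WEIGHTS)):
    y = neuron(x)
    if any(neuron(x[:i] + (-x[i],) + x[i + 1:]) != y for i in range(len(x))):
        sensitive += 1
print(f"{sensitive}/{2 ** len(WEIGHTS)} inputs are sensitive to a single bit flip")
```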

Neural Networks for Safety-Critical Applications - Challenges, Experiments and Perspectives

no code implementations4 Sep 2017 Chih-Hong Cheng, Frederik Diehl, Yassine Hamza, Gereon Hinz, Georg Nührenberg, Markus Rickert, Harald Ruess, Michael Truong-Le

We propose a methodology for designing dependable Artificial Neural Networks (ANN) by extending the concepts of understandability, correctness, and validity that are crucial ingredients in existing certification standards.

Maximum Resilience of Artificial Neural Networks

no code implementations28 Apr 2017 Chih-Hong Cheng, Georg Nührenberg, Harald Ruess

The deployment of Artificial Neural Networks (ANNs) in safety-critical applications poses a number of new verification and certification challenges.
