no code implementations • 28 Sep 2024 • Harald Ruess
We argue that relative importance and its equitable attribution in terms of Shapley-Owen effects is an appropriate measure of fairness, and, if we accept a small number of reasonable imperatives for equitable attribution, the only one.
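As background, Shapley-style attribution admits a compact exact formula. The sketch below computes plain Shapley values for a toy, hypothetical value function; it is illustration only and not the paper's Shapley-Owen construction, which additionally imposes a coalition structure on the inputs (Owen values). The feature names and `payoff` table are made up.

```python
# Exact Shapley values over a small feature set (illustrative sketch only).
from itertools import combinations
from math import factorial

def shapley_values(features, v):
    """Return the exact Shapley value of each feature for value function v,
    where v maps a frozenset of features to a real-valued payoff."""
    n = len(features)
    phi = {}
    for i in features:
        others = [f for f in features if f != i]
        total = 0.0
        for k in range(n):
            for s in combinations(others, k):
                s = frozenset(s)
                # Classical Shapley weight |S|! (n - |S| - 1)! / n!
                weight = factorial(len(s)) * factorial(n - len(s) - 1) / factorial(n)
                total += weight * (v(s | {i}) - v(s))
        phi[i] = total
    return phi

# Hypothetical usage: attribute a disparity score to input features.
features = {"age", "income", "zip"}
payoff = {frozenset(): 0.0, frozenset({"age"}): 0.1,
          frozenset({"income"}): 0.3, frozenset({"zip"}): 0.2,
          frozenset({"age", "income"}): 0.5, frozenset({"age", "zip"}): 0.4,
          frozenset({"income", "zip"}): 0.6,
          frozenset({"age", "income", "zip"}): 1.0}
print(shapley_values(features, payoff.get))
```

By construction the attributions sum to `v(N) - v(∅)`, which is what makes the scheme an equitable division of the total effect.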
1 code implementation • 25 Apr 2024 • Chih-Hong Cheng, Harald Ruess, Changshun Wu, Xingyu Zhao
The deployment of generative AI (GenAI) models raises significant fairness concerns, addressed in this paper through novel characterization and enforcement techniques specific to GenAI.
no code implementations • 24 Jul 2023 • Chih-Hong Cheng, Harald Ruess, Konstantinos Theodorou
The reshaped test set reflects the distribution of neuron activation values as observed during operation, and may therefore be used for re-evaluating safety performance in the presence of covariate shift.
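The reshaping idea lends itself to a short sketch. Assuming, for illustration, that reshaping is done by resampling the test set to match a histogram over a monitored neuron's activation values (the paper's actual procedure may differ), a minimal version looks like this:

```python
# Distribution-aware test-set reshaping via histogram matching (sketch).
import numpy as np

def reshape_test_set(test_acts, op_acts, n_bins=20, rng=None):
    """Resample test-set indices so the histogram of a monitored neuron's
    activations matches the distribution logged in operation.

    test_acts: activation of the monitored neuron per test input
    op_acts:   activations of the same neuron observed in operation
    Returns indices into the test set, drawn with replacement."""
    rng = rng or np.random.default_rng(0)
    lo = min(test_acts.min(), op_acts.min())
    hi = max(test_acts.max(), op_acts.max())
    bins = np.linspace(lo, hi, n_bins + 1)
    op_hist, _ = np.histogram(op_acts, bins=bins)
    op_p = op_hist / op_hist.sum()                    # target bin probabilities
    test_bin = np.clip(np.digitize(test_acts, bins) - 1, 0, n_bins - 1)
    test_hist = np.bincount(test_bin, minlength=n_bins)
    # Per-sample weight: target bin mass divided by test-set bin count.
    w = op_p[test_bin] / np.maximum(test_hist[test_bin], 1)
    w /= w.sum()
    return rng.choice(len(test_acts), size=len(test_acts), replace=True, p=w)
```

Safety metrics can then be re-evaluated on the resampled indices to estimate performance under the observed covariate shift.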
no code implementations • 14 Jun 2023 • Chih-Hong Cheng, Changshun Wu, Harald Ruess, Saddek Bensalem
Out-of-distribution (OoD) detection techniques are instrumental for safety-related neural networks.
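For context, here is a minimal sketch of a standard OoD detection baseline (maximum softmax probability, Hendrycks and Gimpel, 2017); the paper's own techniques are not reproduced here, and the threshold is a placeholder to be calibrated on in-distribution data:

```python
# Max-softmax OoD baseline (illustrative sketch, not the paper's method).
import numpy as np

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)   # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def ood_scores(logits):
    """Lower max-softmax confidence suggests an out-of-distribution input."""
    return 1.0 - softmax(logits).max(axis=1)

def flag_ood(logits, threshold=0.5):
    """Flag inputs whose OoD score exceeds a threshold calibrated on
    in-distribution validation data (the value here is a placeholder)."""
    return ood_scores(logits) > threshold
```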
no code implementations • 20 Mar 2023 • Harald Ruess, Natarajan Shankar
The key ideas underlying Cyberlogic are extremely simple: (1) public keys correspond to authorizations, (2) transactions are specified as distributed logic programs, and (3) verifiable evidence is collected by means of distributed proof search.
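These three ideas can be illustrated in executable form. In the sketch below, signed facts stand in for key-authorized assertions ("K says P"), Horn clauses stand in for the distributed logic program, and forward chaining stands in for distributed proof search; all names are illustrative, and none of this is Cyberlogic's actual syntax or semantics.

```python
# Toy rendering of the three Cyberlogic ideas (illustrative sketch only).
from dataclasses import dataclass

@dataclass(frozen=True)
class Says:
    key: str      # public key acting as an authorization
    fact: str     # proposition the key vouches for

# Transaction as a logic program: (premises) => conclusion, where the
# conclusion is attributed to the key authorized to draw it.
RULES = [
    ((Says("K_bank", "funds_ok"), Says("K_seller", "goods_shipped")),
     Says("K_notary", "payment_released")),
]

def prove(goal, signed_facts, rules=RULES):
    """Forward-chaining proof search; returns the set of established facts
    as crude evidence if the goal is derivable, else None."""
    known = set(signed_facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if conclusion not in known and all(p in known for p in premises):
                known.add(conclusion)
                changed = True
    return known if goal in known else None

evidence = prove(Says("K_notary", "payment_released"),
                 [Says("K_bank", "funds_ok"), Says("K_seller", "goods_shipped")])
print("derivable" if evidence else "not derivable")
```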
no code implementations • 6 Jun 2018 • Chih-Hong Cheng, Georg Nührenberg, Chung-Hao Huang, Harald Ruess, Hirotoshi Yasuoka
Artificial neural networks (NNs) are instrumental in realizing highly automated driving functionality.
no code implementations • 9 Oct 2017 • Chih-Hong Cheng, Georg Nührenberg, Chung-Hao Huang, Harald Ruess
We study the problem of formal verification of Binarized Neural Networks (BNN), which have recently been proposed as an energy-efficient alternative to traditional learning networks.
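The verification question can be made concrete on a toy example. The sketch below checks a property of a single binarized neuron by exhaustive enumeration; this is illustration only, since the point of such work is to scale these checks, e.g. via SAT-based encodings of XNOR-popcount neurons, rather than brute force. The weights, threshold, and property are made up.

```python
# Brute-force property check for a toy binarized neuron (illustrative sketch).
from itertools import product

def bnn_neuron(x, w, threshold):
    """Binarized neuron over inputs/weights in {-1, +1}: thresholded <x, w>."""
    s = sum(xi * wi for xi, wi in zip(x, w))
    return 1 if s >= threshold else -1

def verify(n, w, threshold, prop):
    """Exhaustively check prop(x, y) over all 2^n binarized inputs.
    Returns a counterexample input, or None if the property holds."""
    for x in product((-1, 1), repeat=n):
        y = bnn_neuron(x, w, threshold)
        if not prop(x, y):
            return x
    return None

# Toy property: the neuron must output +1 whenever all inputs are +1.
w = (1, 1, -1, 1)
cex = verify(4, w, threshold=0,
             prop=lambda x, y: y == 1 if all(v == 1 for v in x) else True)
print("property holds" if cex is None else f"counterexample: {cex}")
```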
no code implementations • 4 Sep 2017 • Chih-Hong Cheng, Frederik Diehl, Yassine Hamza, Gereon Hinz, Georg Nührenberg, Markus Rickert, Harald Ruess, Michael Truong-Le
We propose a methodology for designing dependable Artificial Neural Networks (ANN) by extending the concepts of understandability, correctness, and validity that are crucial ingredients in existing certification standards.
no code implementations • 28 Apr 2017 • Chih-Hong Cheng, Georg Nührenberg, Harald Ruess
The deployment of Artificial Neural Networks (ANNs) in safety-critical applications poses a number of new verification and certification challenges.