no code implementations • 30 Aug 2021 • Rick Salay, Krzysztof Czarnecki, Hiroshi Kuwajima, Hirotoshi Yasuoka, Toshihiro Nakae, Vahdat Abdelzad, Chengjie Huang, Maximilian Kahn, Van Duong Nguyen
In this paper, we propose the Integration Safety Case for Perception (ISCaP), a generic template for a linking safety argument tailored specifically to perception components.
no code implementations • 1 Apr 2019 • Hiroshi Kuwajima, Hirotoshi Yasuoka, Toshihiro Nakae
The key to using a machine learning model in a deductively engineered system, particularly a safety-critical one, is to decompose its data-driven training into requirements, design, and verification.
no code implementations • 7 Dec 2018 • Hiroshi Kuwajima, Hirotoshi Yasuoka, Toshihiro Nakae
To establish standard quality assurance frameworks, it is necessary to visualize and organize these open problems in an interdisciplinary way, so that experts from many different technical fields can discuss them in depth and develop solutions.
no code implementations • 18 Sep 2018 • Chih-Hong Cheng, Georg Nührenberg, Hirotoshi Yasuoka
When using neural networks in safety-critical domains, it is important to know whether a decision made by a network is supported by prior similarities in the training data.
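As a minimal sketch of how such support might be checked, the code below records binarized activation patterns of one hidden layer over the training set and flags runtime inputs whose pattern was never seen (within a small Hamming distance). The class name, the set-based pattern store, and the positive-activation binarization are illustrative assumptions, not the paper's actual mechanism:

```python
import numpy as np

class ActivationMonitor:
    """Hypothetical monitor: stores binarized activation patterns of one
    hidden layer seen during training, then flags runtime inputs whose
    pattern is farther than max_hamming flips from every stored pattern."""

    def __init__(self):
        self.seen = set()  # set of tuples of 0/1

    def record(self, activations: np.ndarray) -> None:
        # A neuron counts as "on" if its activation is positive (e.g. post-ReLU).
        self.seen.add(tuple(int(a > 0) for a in activations))

    def is_supported(self, activations: np.ndarray, max_hamming: int = 0) -> bool:
        pattern = np.array([int(a > 0) for a in activations])
        # Supported if some training-time pattern is within max_hamming flips.
        return any(int(np.sum(pattern != np.array(p))) <= max_hamming
                   for p in self.seen)
```

In this sketch, `record` would be called with the chosen layer's activations for each training input; at inference time, `is_supported` returning False suggests the decision lacks support from patterns observed during training.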
no code implementations • 6 Jun 2018 • Chih-Hong Cheng, Georg Nührenberg, Chung-Hao Huang, Harald Ruess, Hirotoshi Yasuoka
Artificial neural networks (NNs) are instrumental in realizing highly automated driving functionality.
no code implementations • 11 May 2018 • Chih-Hong Cheng, Chung-Hao Huang, Hirotoshi Yasuoka
Systematic testing of learned neural network models remains a crucial unsolved barrier to justifying safety for autonomous vehicles engineered using a data-driven approach.
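As a hedged illustration of what a systematic testing criterion could look like, the sketch below computes pairwise (2-way) combinatorial coverage over discretized scenario parameters; the function name, the example parameter domains, and the choice of pairwise projections are assumptions for illustration, not the paper's specific metric:

```python
from itertools import combinations, product

def pairwise_coverage(scenarios, domains):
    """Fraction of all pairwise parameter-value combinations covered by
    the given test scenarios (illustrative 2-way combinatorial coverage)."""
    covered, total = 0, 0
    for (i, di), (j, dj) in combinations(enumerate(domains), 2):
        needed = set(product(di, dj))             # all value pairs for (i, j)
        seen = {(s[i], s[j]) for s in scenarios}  # pairs actually exercised
        covered += len(needed & seen)
        total += len(needed)
    return covered / total if total else 1.0

# Example: three discretized scenario parameters for driving tests.
domains = [("day", "night"), ("dry", "wet"), ("pedestrian", "none")]
scenarios = [("day", "dry", "none"), ("night", "wet", "pedestrian")]
print(pairwise_coverage(scenarios, domains))  # 0.5: 6 of 12 pairs covered
```

A low score indicates parameter combinations the test suite never exercises, which is the kind of gap a coverage-driven testing approach is meant to expose.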