no code implementations • 15 Dec 2023 • Krzysztof Czarnecki, Hiroshi Kuwajima
The Safety of the Intended Functionality (SOTIF) standard emerges as a promising framework for addressing these concerns, focusing on scenario-based analysis to identify hazardous behaviors and their causes.
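SOTIF-style scenario analysis is often framed around the quadrant of known/unknown and safe/hazardous scenarios. The sketch below is an illustrative Python schema for such an analysis; the field names and the sotif_area helper are assumptions for illustration, not structures defined by the paper, though the area numbering follows the commonly cited SOTIF areas.

```python
from dataclasses import dataclass

@dataclass
class Scenario:
    """One scenario record in a SOTIF-style analysis (illustrative schema)."""
    description: str
    triggering_condition: str  # e.g. "low sun angle causes lens flare"
    hazardous_behavior: str    # e.g. "missed pedestrian detection"
    known: bool                # was the scenario identified during analysis?
    safe: bool                 # has it been shown to be handled safely?

def sotif_area(s: Scenario) -> str:
    """Map a scenario onto the commonly cited SOTIF areas
    (known/unknown x safe/hazardous)."""
    if s.known:
        return "area 1: known, safe" if s.safe else "area 2: known, hazardous"
    return "area 4: unknown, safe" if s.safe else "area 3: unknown, hazardous"

glare = Scenario("camera facing low sun", "lens flare", "missed detection",
                 known=True, safe=False)
print(sotif_area(glare))  # area 2: known, hazardous
```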
no code implementations • 30 Aug 2021 • Rick Salay, Krzysztof Czarnecki, Hiroshi Kuwajima, Hirotoshi Yasuoka, Toshihiro Nakae, Vahdat Abdelzad, Chengjie Huang, Maximilian Kahn, Van Duong Nguyen
In this paper, we propose the Integration Safety Case for Perception (ISCaP), a generic template for such a linking safety argument specifically tailored for perception components.
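Safety arguments of this kind are commonly expressed in Goal Structuring Notation (GSN), where goals decompose into sub-goals and are supported by evidence. The following is a minimal, hypothetical sketch of such a linking structure in Python; the Claim class and the example claims are illustrative assumptions, not the actual ISCaP template.

```python
from dataclasses import dataclass, field

@dataclass
class Claim:
    """A goal node in a GSN-style safety argument (hypothetical structure)."""
    statement: str
    evidence: list = field(default_factory=list)   # solution nodes backing the claim
    subclaims: list = field(default_factory=list)  # child goals it decomposes into

    def supported(self) -> bool:
        """A claim holds if it has direct evidence, or if it decomposes
        into sub-claims that all hold."""
        if self.evidence:
            return True
        return bool(self.subclaims) and all(c.supported() for c in self.subclaims)

# Linking a component-level perception claim into a system-level safety goal:
perception = Claim("Perception misdetection rate meets its allocated target",
                   evidence=["closed-course test campaign report"])
system = Claim("Residual risk from perception errors is acceptable",
               subclaims=[perception])
print(system.supported())  # True
```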
no code implementations • 31 Jul 2019 • Hiroshi Kuwajima, Fuyuki Ishikawa
We thus provide holistic insights into the quality of AI systems by incorporating the nature of ML and AI ethics into traditional software quality concepts.
no code implementations • 1 Apr 2019 • Hiroshi Kuwajima, Hirotoshi Yasuoka, Toshihiro Nakae
The key to using a machine learning model in a deductively engineered system is decomposing its data-driven training into requirements, design, and verification phases, particularly for models used in safety-critical systems.
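A minimal sketch of this decomposition, under assumed data and an assumed accuracy target: the requirement is fixed as a testable specification before training, the design phase produces a model (a trivial threshold classifier stands in for a real one), and verification checks the trained artifact against the requirement on held-out data. All names and values are illustrative, not the paper's method.

```python
import numpy as np

# Requirement phase: a testable specification, fixed before training (value assumed).
REQUIRED_ACCURACY = 0.95

def design_and_train(x_train, y_train):
    """Design phase: produce a model from data. A trivial threshold classifier
    stands in for a real model; the point is the phase separation."""
    threshold = (x_train[y_train == 0].mean() + x_train[y_train == 1].mean()) / 2
    return lambda x: (x >= threshold).astype(int)

def verify(model, x_test, y_test) -> bool:
    """Verification phase: check the trained artifact against the requirement
    on data held out from training."""
    accuracy = (model(x_test) == y_test).mean()
    return accuracy >= REQUIRED_ACCURACY

# Illustrative data: two well-separated 1-D classes.
rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(0.0, 1.0, 500), rng.normal(5.0, 1.0, 500)])
y = np.concatenate([np.zeros(500, dtype=int), np.ones(500, dtype=int)])
perm = rng.permutation(len(x))
x, y = x[perm], y[perm]
model = design_and_train(x[:800], y[:800])
print("requirement met:", verify(model, x[800:], y[800:]))
```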
no code implementations • 13 Mar 2019 • Hiroshi Kuwajima, Masayuki Tanaka, Masatoshi Okutomi
However, the inference process of deep learning is a black box and thus poorly suited to safety-critical systems, which must exhibit high transparency.
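One generic way to probe such a black-box inference process is occlusion-based sensitivity analysis: occlude parts of the input and measure how the model's output changes. The sketch below is an illustrative implementation against a dummy scorer; it is not the method proposed in the paper.

```python
import numpy as np

def occlusion_sensitivity(model, image, patch=4):
    """Probe a black-box scorer by zeroing out patches and measuring the
    drop in its output; large drops mark regions the model relies on."""
    base = model(image)
    h, w = image.shape
    sens = np.zeros((h // patch, w // patch))
    for i in range(0, h, patch):
        for j in range(0, w, patch):
            occluded = image.copy()
            occluded[i:i + patch, j:j + patch] = 0.0
            sens[i // patch, j // patch] = base - model(occluded)
    return sens

# Dummy black-box scorer that only looks at the top-left quadrant (assumption).
model = lambda img: float(img[:8, :8].mean())
saliency = occlusion_sensitivity(model, np.ones((16, 16)), patch=4)
print(np.unravel_index(saliency.argmax(), saliency.shape))  # a top-left cell
```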
no code implementations • 7 Dec 2018 • Hiroshi Kuwajima, Hirotoshi Yasuoka, Toshihiro Nakae
To establish standard quality-assurance frameworks, these open problems must be visualized and organized in an interdisciplinary way so that experts from many different technical fields can discuss them in depth and develop solutions.
no code implementations • 7 Dec 2017 • Hiroshi Kuwajima, Masayuki Tanaka
Safety-critical systems strongly require quality aspects of artificial intelligence, including explainability.