Search Results for author: Ibrahim Habli

Found 8 papers, 1 paper with code

A Framework for Assurance of Medication Safety using Machine Learning

no code implementations • 11 Jan 2021 • Yan Jia, Tom Lawton, John McDermid, Eric Rojas, Ibrahim Habli

As healthcare is now data rich, it is possible to augment safety analysis with machine learning to discover actual causes of medication error from the data, and to identify where they deviate from what was predicted in the safety analysis.

BIG-bench Machine Learning
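
The abstract above suggests mining incident data for actual causes of medication error and comparing them with the causes anticipated by a manual safety analysis. A minimal sketch of that idea, assuming a hypothetical incident dataset with illustrative column names (none of which come from the paper):

```python
# Hedged sketch: learn candidate causes of medication error from incident
# data, then compare against the hazards a prior safety analysis predicted.
# The CSV file, column names, and hazard list are illustrative assumptions.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier

df = pd.read_csv("medication_incidents.csv")      # hypothetical dataset
X = df.drop(columns=["medication_error"])         # contextual factors
y = df["medication_error"]                        # observed error (0/1)

model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# Rank factors by learned importance: candidate "actual causes" in the data.
importance = pd.Series(model.feature_importances_, index=X.columns)
learned_causes = set(importance.nlargest(5).index)

# Causes anticipated by the manual safety analysis (illustrative only).
predicted_causes = {"interruption_during_round", "look_alike_drug", "staffing_level"}

print("Found in data but not predicted:", learned_causes - predicted_causes)
print("Predicted but unsupported by data:", predicted_causes - learned_causes)
```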

Guidance on the Assurance of Machine Learning in Autonomous Systems (AMLAS)

1 code implementation • 2 Feb 2021 • Richard Hawkins, Colin Paterson, Chiara Picardi, Yan Jia, Radu Calinescu, Ibrahim Habli

Machine Learning (ML) is now used in a range of systems with results that are reported to exceed, under certain conditions, human performance.

BIG-bench Machine Learning
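
For orientation, AMLAS structures the assurance of ML components as a staged process. The six stage names below follow the published guidance as best recalled; the checklist helper itself is purely an illustrative convenience, not part of the methodology:

```python
# Rough encoding of the AMLAS stages as a checklist, for orientation only.
AMLAS_STAGES = [
    "ML Safety Assurance Scoping",
    "ML Safety Requirements Assurance",
    "Data Management",
    "Model Learning",
    "Model Verification",
    "Model Deployment",
]

def assurance_progress(completed: set[str]) -> None:
    """Print which AMLAS stages have an argued safety case so far."""
    for stage in AMLAS_STAGES:
        mark = "x" if stage in completed else " "
        print(f"[{mark}] {stage}")

assurance_progress({"ML Safety Assurance Scoping", "Data Management"})
```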

The Role of Explainability in Assuring Safety of Machine Learning in Healthcare

no code implementations • 1 Sep 2021 • Yan Jia, John McDermid, Tom Lawton, Ibrahim Habli

Established approaches to assuring safety-critical systems and software are difficult to apply to systems employing ML where there is no clear, pre-defined specification against which to assess validity.

BIG-bench Machine Learning • Explainable Artificial Intelligence (XAI)
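
One way explainability can feed a safety argument, in the spirit of the abstract above, is by checking whether a model's behaviour is driven by clinically plausible factors when no pre-defined specification exists. A sketch using permutation importance on synthetic data; the feature names and choice of technique are illustrative assumptions, not the paper's method:

```python
# Hedged sketch: model-agnostic explanation via permutation importance,
# asking how much shuffling each feature degrades test AUC.
from sklearn.datasets import make_classification
from sklearn.inspection import permutation_importance
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=500, n_features=6, random_state=0)
features = ["heart_rate", "creatinine", "age", "sbp", "lactate", "ward_id"]

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

result = permutation_importance(model, X_te, y_te, scoring="roc_auc",
                                n_repeats=20, random_state=0)
for name, score in sorted(zip(features, result.importances_mean),
                          key=lambda p: -p[1]):
    print(f"{name:12s} {score:+.3f}")
# A safety reviewer can then judge whether the influential features match
# clinical expectations, in lieu of a formal specification to test against.
```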

A Principles-based Ethics Assurance Argument Pattern for AI and Autonomous Systems

no code implementations • 29 Mar 2022 • Zoe Porter, Ibrahim Habli, John McDermid, Marten Kaas

An assurance case is a structured argument, typically produced by safety engineers, to communicate confidence that a critical or complex system, such as an aircraft, will be acceptably safe within its intended context.

Ethics
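
Assurance cases of the kind described above are often expressed in Goal Structuring Notation (GSN), where goals are decomposed into sub-goals and supported by evidence. A minimal sketch of that structure, with illustrative claims rather than the paper's actual argument pattern:

```python
# Hedged sketch of an assurance case as a structured argument, loosely in
# GSN style: G = goal/claim, Sn = solution/evidence. Claims are illustrative.
from dataclasses import dataclass, field

@dataclass
class Claim:
    statement: str
    evidence: list[str] = field(default_factory=list)
    subclaims: list["Claim"] = field(default_factory=list)

    def render(self, depth: int = 0) -> None:
        pad = "  " * depth
        print(f"{pad}G: {self.statement}")
        for e in self.evidence:
            print(f"{pad}  Sn: {e}")
        for sub in self.subclaims:
            sub.render(depth + 1)

case = Claim(
    "The autonomous system is ethically acceptable in its intended context",
    subclaims=[
        Claim("Benefits outweigh residual risk",
              evidence=["risk-benefit analysis report"]),
        Claim("Human autonomy is respected",
              evidence=["informed-consent procedure review"]),
    ],
)
case.render()
```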

Review of the AMLAS Methodology for Application in Healthcare

no code implementations • 1 Sep 2022 • Shakir Laher, Carla Brackstone, Sara Reis, An Nguyen, Sean White, Ibrahim Habli

In recent years, the number of machine learning (ML) technologies gaining regulatory approval for healthcare has increased significantly, allowing them to be placed on the market.

Unravelling Responsibility for AI

no code implementations • 4 Aug 2023 • Zoe Porter, Joanna Al-Qaddoumi, Philippa Ryan Conmy, Phillip Morgan, John McDermid, Ibrahim Habli

As part of a conscious effort towards 'unravelling' the concept of responsibility to support practical reasoning about responsibility for AI, this paper takes the three-part formulation, 'Actor A is responsible for Occurrence O', and identifies valid combinations of subcategories of A, 'is responsible for', and O.

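The three-part formulation lends itself to a simple combinatorial reading: take subcategories for each slot and enumerate their product. The subcategory lists below are illustrative placeholders, not the paper's taxonomy:

```python
# Hedged sketch of the combination space behind "Actor A is responsible for
# Occurrence O". Subcategories are placeholders for illustration only.
from itertools import product

actors = ["human developer", "human operator", "organisation", "AI system"]
senses = ["role responsibility", "causal responsibility",
          "legal liability", "moral responsibility"]
occurrences = ["design decision", "runtime behaviour", "resulting harm"]

for a, r, o in product(actors, senses, occurrences):
    print(f"{a} holds {r} for {o}")
# The paper's contribution is identifying WHICH such combinations are valid;
# the exhaustive product above is just the raw space to be pruned.
```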

What's my role? Modelling responsibility for AI-based safety-critical systems

no code implementations • 30 Dec 2023 • Philippa Ryan, Zoe Porter, Joanna Al-Qaddoumi, John McDermid, Ibrahim Habli

Many authors have commented on the "responsibility gap", whereby it is difficult for developers and manufacturers to be held responsible for harmful behaviour of an AI-based safety-critical system (AI-SCS).
