no code implementations • 23 Jun 2024 • Ozan Vardal, Richard Hawkins, Colin Paterson, Chiara Picardi, Daniel Omeiza, Lars Kunze, Ibrahim Habli
A critical part of this is to be able to monitor when the performance of the model at runtime (as a result of changes) poses a safety risk to the system.
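A minimal sketch of what such a runtime monitor could look like (the class name, window size, and threshold below are assumptions for illustration, not the paper's method): track a rolling performance estimate over recent labelled samples and flag when it drops below a safety threshold.

```python
from collections import deque

class RollingAccuracyMonitor:
    """Illustrative runtime monitor: flags when rolling accuracy falls
    below a safety threshold (names and defaults are assumptions)."""

    def __init__(self, window: int = 200, safety_threshold: float = 0.95):
        self.results = deque(maxlen=window)   # 1 = correct, 0 = incorrect
        self.safety_threshold = safety_threshold

    def update(self, prediction, ground_truth) -> bool:
        """Record one outcome; return True while performance stays acceptable."""
        self.results.append(1 if prediction == ground_truth else 0)
        return self.accuracy() >= self.safety_threshold

    def accuracy(self) -> float:
        return sum(self.results) / len(self.results) if self.results else 1.0


monitor = RollingAccuracyMonitor(window=100, safety_threshold=0.9)
if not monitor.update(prediction=1, ground_truth=0):
    print("Runtime performance below safety threshold - trigger mitigation")
```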
no code implementations • 19 Mar 2021 • Danny Weyns, Bradley Schmerl, Masako Kishida, Alberto Leva, Marin Litoiu, Necmiye Ozay, Colin Paterson, Kenji Tei
Two established approaches to engineer adaptive systems are architecture-based adaptation that uses a Monitor-Analysis-Planning-Executing (MAPE) loop that reasons over architectural models (aka Knowledge) to make adaptation decisions, and control-based adaptation that relies on principles of control theory (CT) to realize adaptation.
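For readers unfamiliar with the pattern, a minimal MAPE-K skeleton might look as follows; the component names, the shared knowledge dictionary, and the simple scale-out policy are assumptions for illustration, not the paper's design.

```python
class StubSystem:
    """Placeholder managed system, used purely so the sketch runs."""
    def __init__(self):
        self.replicas = 1
    def measure_latency(self):
        return 250.0 / self.replicas        # fake runtime metric
    def apply(self, plan):
        self.replicas = plan["replicas"]


class MapeKLoop:
    def __init__(self, managed_system, knowledge):
        self.system = managed_system        # the system being adapted
        self.knowledge = knowledge          # shared model / runtime data ("K")

    def monitor(self):
        # Collect runtime data from the managed system into the knowledge base.
        self.knowledge["latency_ms"] = self.system.measure_latency()

    def analyse(self) -> bool:
        # Decide whether the current state violates an adaptation goal.
        return self.knowledge["latency_ms"] > self.knowledge["latency_goal_ms"]

    def plan(self):
        # Choose an adaptation action, e.g. add a replica.
        return {"action": "scale_out", "replicas": self.system.replicas + 1}

    def execute(self, plan):
        self.system.apply(plan)

    def step(self):
        self.monitor()
        if self.analyse():
            self.execute(self.plan())


loop = MapeKLoop(StubSystem(), {"latency_goal_ms": 100.0})
for _ in range(3):
    loop.step()
print(loop.system.replicas)   # replicas grow until the latency goal is met
```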
no code implementations • 2 Mar 2021 • Colin Paterson, Haoze Wu, John Grese, Radu Calinescu, Corina S. Pasareanu, Clark Barrett
We introduce DeepCert, a tool-supported method for verifying the robustness of deep neural network (DNN) image classifiers to contextually relevant perturbations such as blur, haze, and changes in image contrast.
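As a rough, test-based sketch of the general idea (not the DeepCert tool or its verification back-end), one can sweep a contextual perturbation such as haze over increasing severities and record when the classification changes; the haze model and classifier interface below are assumptions for illustration.

```python
import numpy as np

def add_haze(image: np.ndarray, alpha: float) -> np.ndarray:
    """Blend a [0, 1]-valued image toward white; alpha = haze severity."""
    return (1.0 - alpha) * image + alpha * np.ones_like(image)

def robustness_threshold(classifier, image, label, steps: int = 20):
    """Return the smallest tested haze severity at which the prediction
    changes, or None if the classification is stable throughout."""
    for alpha in np.linspace(0.0, 1.0, steps):
        if classifier(add_haze(image, alpha)) != label:
            return float(alpha)
    return None

def toy_classifier(img):
    # Toy stand-in: predicts class 1 when the image is mostly bright.
    return int(img.mean() > 0.5)

dark_image = np.full((8, 8), 0.2)
print(robustness_threshold(toy_classifier, dark_image, label=0))  # ~0.42
```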
1 code implementation • 2 Feb 2021 • Richard Hawkins, Colin Paterson, Chiara Picardi, Yan Jia, Radu Calinescu, Ibrahim Habli
Machine Learning (ML) is now used in a range of systems with results that are reported to exceed, under certain conditions, human performance.
no code implementations • 28 Nov 2019 • Colin Paterson, Radu Calinescu, Chiara Picardi
Regions of high-dimensional input spaces that are underrepresented in training datasets reduce machine-learnt classifier performance, and may lead to corner cases and unwanted bias for classifiers used in decision-making systems.
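As a simple illustration of the underlying idea (a density heuristic assumed for this sketch, not the paper's method), underrepresented regions can be flagged by unusually large k-nearest-neighbour distances between new inputs and the training data.

```python
import numpy as np

def underrepresented_mask(train: np.ndarray, test: np.ndarray,
                          k: int = 10, quantile: float = 0.95) -> np.ndarray:
    """Return a boolean mask marking test points that likely fall in
    regions underrepresented by the training data."""
    # Mean distance from each test point to its k nearest training points.
    dists = np.linalg.norm(test[:, None, :] - train[None, :, :], axis=-1)
    knn_dist = np.sort(dists, axis=1)[:, :k].mean(axis=1)

    # Calibrate the cut-off on the training data itself (skip self-distances).
    train_dists = np.linalg.norm(train[:, None, :] - train[None, :, :], axis=-1)
    train_knn = np.sort(train_dists, axis=1)[:, 1:k + 1].mean(axis=1)
    cutoff = np.quantile(train_knn, quantile)

    return knn_dist > cutoff

rng = np.random.default_rng(0)
train = rng.normal(size=(500, 2))
test = np.array([[0.0, 0.0], [6.0, 6.0]])        # dense region vs. far-out point
print(underrepresented_mask(train, test, k=10))  # expected: [False  True]
```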
no code implementations • 10 May 2019 • Rob Ashmore, Radu Calinescu, Colin Paterson
Our paper provides a comprehensive survey of the state-of-the-art in the assurance of ML, i.e. in the generation of evidence that ML is sufficiently safe for its intended use.