This paper proposes a new metric to measure the calibration error of probabilistic binary classifiers, called test-based calibration error (TCE).
Researchers in explainable artificial intelligence have developed numerous methods for helping users understand the predictions of complex supervised learning models.
We derive 6 research topics and 12 practical challenges for fraud detection from this operational model.
Conformance checking is concerned with quantifying the quality of a business process model in relation to event data that was logged during the execution of the business process.
Predictive process monitoring is a family of techniques to analyze events produced during the execution of a business process in order to predict the future state or the final outcome of running process instances.
Data of a sequential nature arise in many application domains in the form of, e.g., textual data, DNA sequences, and software execution traces.
In this paper, we investigate the performance of several sequence prediction techniques on predicting the future events of human behavior in a smart home, as well as the timestamps of those events.
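As a minimal illustration of one simple sequence prediction technique, a first-order Markov model can predict the most likely next event from directly-follows counts. This is a generic sketch, not necessarily one of the techniques compared in the paper, and the smart-home activity names are invented:

```python
from collections import Counter, defaultdict

def train_markov(traces):
    """Count activity-to-activity transitions over all traces.

    A special "<end>" token marks case completion so the model can
    also predict that a running case is about to finish.
    """
    transitions = defaultdict(Counter)
    for trace in traces:
        for current, nxt in zip(trace, trace[1:] + ["<end>"]):
            transitions[current][nxt] += 1
    return transitions

def predict_next(transitions, last_event):
    """Return the most frequent successor of the last observed event."""
    successors = transitions.get(last_event)
    if not successors:
        return None
    return successors.most_common(1)[0][0]

# Invented example traces of daily behavior in a smart home.
traces = [
    ["wake_up", "kitchen", "coffee", "leave_house"],
    ["wake_up", "bathroom", "kitchen", "coffee"],
    ["wake_up", "kitchen", "coffee", "leave_house"],
]
model = train_markov(traces)
```

Predicting timestamps could be handled analogously, e.g. by averaging the observed inter-event times per transition.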
Predictive process monitoring is concerned with the analysis of events produced during the execution of a process in order to predict the future state of its ongoing cases.
We show that the presence of such chaotic activities in an event log heavily impacts the quality of the process models that can be discovered with process discovery techniques.
Refinements of sensor-level event labels suggested by domain experts have been shown to enable the discovery of more precise and insightful process models.
However, events recorded in smart home environments are at the level of sensor triggers, at which process discovery algorithms produce overgeneralizing process models that allow for too much behavior and are difficult for human experts to interpret.
In process mining, precision measures are used to quantify how much a process model overapproximates the behavior seen in an event log.
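To make this concrete, the sketch below computes a simplified precision score over a directly-follows abstraction: for each activity, it compares the continuations the model allows against those actually observed in the log. This is an illustrative toy in the spirit of escaping-edges-style precision, not one of the established precision measures, and the `model_follows` representation is an assumption:

```python
def naive_precision(model_follows, log_traces):
    """Toy precision: how much of the model-allowed behavior is observed.

    model_follows: dict mapping an activity to the set of activities
    the model allows directly after it.
    Returns the ratio of observed to allowed continuations, averaged
    over all positions in the log; 1.0 means no overapproximation.
    """
    # Collect the directly-follows pairs actually observed in the log.
    observed = {}
    for trace in log_traces:
        for a, b in zip(trace, trace[1:]):
            observed.setdefault(a, set()).add(b)

    ratios = []
    for trace in log_traces:
        for a in trace[:-1]:
            allowed = model_follows.get(a, set())
            if allowed:
                ratios.append(len(observed.get(a, set()) & allowed) / len(allowed))
    return sum(ratios) / len(ratios) if ratios else 1.0
```

A model that allows `a` to be followed by `b`, `c`, or `d`, while the log only ever shows `a` followed by `b`, scores 1/3: two thirds of the modeled behavior is never observed.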
In this paper, we propose to first discover local process models and then use those models to lift the event log to a higher level of abstraction.
Local Process Models (LPMs) describe structured fragments of process behavior occurring in the context of less structured business processes.
First, we show that LSTMs outperform existing techniques in predicting the next event of a running case and its timestamp.
Local Process Model (LPM) discovery focuses on mining a set of process models, each of which describes the behavior represented in the event log only partially, i.e., only subsets of the possible events are taken into account to create so-called local process models.
Finding the right event labels to enable the application of process mining techniques is, however, far from trivial: simply using the triggering sensor as the label for sensor events results in uninformative, overgeneralizing models that allow for too much behavior.
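As a hedged illustration of why refined labels can help, the sketch below splits a raw sensor label by coarse time of day, so that, e.g., morning and evening kitchen visits become distinguishable activities rather than one conflated label. The heuristic, the period boundaries, and the event data are invented for illustration and are not the label refinement approach of the paper:

```python
def refine_label(sensor, hour):
    """Refine a raw sensor label with a coarse time-of-day context.

    Splitting one sensor label into several context-specific labels
    lets discovery algorithms separate behaviors that a single
    sensor-trigger label would conflate.
    """
    if 5 <= hour < 12:
        period = "morning"
    elif 12 <= hour < 18:
        period = "afternoon"
    else:
        period = "evening"
    return f"{sensor}_{period}"

# Invented (sensor, hour-of-day) events.
events = [("kitchen_motion", 7), ("kitchen_motion", 19), ("bedroom_motion", 23)]
refined = [refine_label(s, h) for s, h in events]
```

In practice such refinements must be validated, since over-refining fragments the log and hides structure instead of revealing it.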
We show that when process discovery algorithms are only able to discover an unrepresentative process model from a low-level event log, structure in the process can in some cases still be discovered by first abstracting the event log to a higher level of granularity.
We present a statistical evaluation method to determine the usefulness of a label refinement for a given event log from a process perspective.
The technique presented in this paper is able to learn behavioral patterns involving sequential composition, concurrency, choice, and loops, as in process mining.