Among the many sources of event data available today, a prominent one is user interaction data.
Predictive process monitoring is a subfield of process mining that aims to estimate case or event features for running process instances.
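As a rough illustration (not the method of any particular paper), a naive predictive baseline can estimate the remaining time of a running case from the average remaining time that completed cases had at the same prefix length; a minimal sketch:

```python
# Naive remaining-time baseline (illustrative only): average the remaining
# time that historical cases had after the same number of events.
from collections import defaultdict

def train_remaining_time(completed_cases):
    """completed_cases: one list of (activity, timestamp) pairs per case,
    timestamps as floats (e.g., seconds), events in chronological order."""
    sums, counts = defaultdict(float), defaultdict(int)
    for events in completed_cases:
        end = events[-1][1]
        for k, (_, ts) in enumerate(events, start=1):
            sums[k] += end - ts
            counts[k] += 1
    return {k: sums[k] / counts[k] for k in sums}

def predict_remaining_time(model, n_events_so_far):
    # Fall back to the longest prefix length seen during training.
    k = min(n_events_so_far, max(model))
    return model[k]

hist = [[("a", 0.0), ("b", 5.0), ("c", 9.0)], [("a", 0.0), ("c", 4.0)]]
m = train_remaining_time(hist)
print(predict_remaining_time(m, 2))  # -> 2.0
```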
In this paper, we use the recently developed Digital Twins of Organizations (DTOs) to assess the impact of (process-aware) information systems updates.
Our precision and fitness notions are an appropriate way to generalize quality measures to the object-centric setting since we are able to consider multiple case notions, their dependencies and their interactions.
The premise of this paper is that compliance with Trustworthy AI governance best practices and regulatory frameworks is an inherently fragmented process spanning diverse organizational units, external stakeholders, and systems of record, resulting in process uncertainties and compliance gaps that may expose organizations to reputational and regulatory risks.
Process mining is a scientific discipline that analyzes event data, often stored in so-called event logs.
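For concreteness, a minimal sketch of this structure in Python: events are grouped by case and ordered by time to obtain traces. The column names are illustrative assumptions, not a fixed standard.

```python
# Read a flat event table into an event log: one trace (activity sequence)
# per case. Assumes ISO-8601 timestamps so that lexicographic order equals
# chronological order.
import csv
from collections import defaultdict

def read_event_log(path):
    by_case = defaultdict(list)
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            by_case[row["case_id"]].append((row["timestamp"], row["activity"]))
    return {case: [act for _, act in sorted(events)]
            for case, events in by_case.items()}
```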
Previously, an incremental discovery approach was introduced in which a model under construction is incrementally extended with user-selected process behavior.
We propose a framework that adds an explainability level onto concept drift detection in process mining and provides insights into the cause-effect relationships behind significant changes.
The real-time prediction of business processes using historical event data is an important capability of modern business process monitoring systems.
Process comparison is a branch of process mining that isolates different behaviors of the process from each other by using process cubes.
The discipline of process mining aims to study processes in a data-driven manner by analyzing historical process executions, often employing Petri nets.
We demonstrate the feasibility of the framework by proposing a concrete approach for organizational model discovery built on it, and conduct experiments on real-life event logs to discover and evaluate organizational models.
The strong impulse to digitize processes and operations in companies and enterprises has resulted in the creation and automatic recording of an increasingly large amount of process data in information systems.
This paper proposes new approximation techniques that compute conformance checking values close to the exact values in considerably less time.
Conformance checking is concerned with quantifying the quality of a business process model in relation to event data that was logged during the execution of the business process.
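A deliberately simplified illustration: treating the model as a predicate over traces, trace-level fitness is the fraction of logged traces the model accepts. Real conformance checking typically relies on alignments or token-based replay on Petri nets; this sketch only conveys the idea of quantifying model quality against logged behavior.

```python
def trace_fitness(traces, accepts):
    """traces: list of activity tuples; accepts: predicate standing in
    for the process model (does the model allow this trace?)."""
    return sum(1 for t in traces if accepts(t)) / len(traces)

# Toy model that requires every trace to start with 'a' and end with 'd':
model = lambda t: len(t) >= 2 and t[0] == "a" and t[-1] == "d"
log = [("a", "b", "d"), ("a", "c", "d"), ("b", "d")]
print(trace_fitness(log, model))  # -> 0.666... (two of three traces fit)
```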
We show that the presence of such chaotic activities in an event log heavily impacts the quality of the process models that can be discovered with process discovery techniques.
This extended paper presents 1) a novel hierarchy and recursion extension to the process tree model, and 2) the first recursion-aware process model discovery technique that leverages hierarchical information in event logs, typically available for software systems.
Refinements of sensor level event labels suggested by domain experts have been shown to enable discovery of more precise and insightful process models.
However, events recorded in smart home environments are at the level of sensor triggers; at this granularity, process discovery algorithms produce overgeneralizing process models that allow for too much behavior and are difficult for human experts to interpret.
In process mining, precision measures are used to quantify how much a process model overapproximates the behavior seen in an event log.
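As a coarse stand-in for such measures (an illustration, not the escaping-edges precision from the literature), one can compare directly-follows relations: the smaller the fraction of the model's directly-follows pairs that actually occur in the log, the more the model overapproximates.

```python
def directly_follows(traces):
    """All pairs (a, b) where b directly follows a in some trace."""
    return {(t[i], t[i + 1]) for t in traces for i in range(len(t) - 1)}

def df_precision(log_traces, model_traces):
    """Fraction of the model's directly-follows pairs also seen in the log;
    model_traces stands in for (a finite sample of) the model's language."""
    df_model = directly_follows(model_traces)
    if not df_model:
        return 1.0
    return len(df_model & directly_follows(log_traces)) / len(df_model)
```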
The aim of process discovery, originating from the area of process mining, is to discover a process model based on business process execution data.
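Many discovery algorithms start from the directly-follows relation over the traces; a minimal sketch of that first step:

```python
# Build a directly-follows graph (DFG) with frequencies from the traces.
from collections import Counter

def discover_dfg(traces):
    """Count how often activity b directly follows activity a."""
    dfg = Counter()
    for trace in traces:
        for a, b in zip(trace, trace[1:]):
            dfg[(a, b)] += 1
    return dfg

print(discover_dfg([("a", "b", "c"), ("a", "c", "b")]))
# Counter({('a','b'): 1, ('b','c'): 1, ('a','c'): 1, ('c','b'): 1})
```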
Local Process Models (LPMs) describe structured fragments of process behavior that occur in the context of less structured business processes.
Local Process Model (LPM) discovery focuses on mining a set of process models, each of which describes the behavior represented in the event log only partially, i.e., each model takes only a subset of the possible events into account.
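The projection underlying this idea can be sketched as follows (a simplification, not the published LPM algorithm): restrict each trace to a chosen activity subset, so that a local model is judged only on the events it covers.

```python
def project(traces, activities):
    """Restrict each trace to the given subset of activities,
    dropping traces that become empty."""
    projected = [tuple(a for a in t if a in activities) for t in traces]
    return [t for t in projected if t]

print(project([("a", "x", "b"), ("x", "y")], {"a", "b"}))  # -> [('a', 'b')]
```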
We present a statistical evaluation method to determine the usefulness of a label refinement for a given event log from a process perspective.
We show that when process discovery algorithms are only able to discover an unrepresentative process model from a low-level event log, structure in the process can in some cases still be discovered by first abstracting the event log to a higher level of granularity.
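A minimal sketch of such an abstraction step, assuming a hypothetical expert-provided mapping from sensor events to activities:

```python
# Map sensor-level events to higher-level activities and collapse
# consecutive events that map to the same activity.
from itertools import groupby

SENSOR_TO_ACTIVITY = {  # illustrative mapping, e.g., from a domain expert
    "kitchen_motion": "prepare meal",
    "fridge_door": "prepare meal",
    "bathroom_motion": "hygiene",
}

def abstract_trace(sensor_trace, mapping=SENSOR_TO_ACTIVITY):
    high_level = (mapping.get(e, e) for e in sensor_trace)
    return [activity for activity, _ in groupby(high_level)]

print(abstract_trace(["kitchen_motion", "fridge_door", "bathroom_motion"]))
# -> ['prepare meal', 'hygiene']
```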
The technique presented in this paper is able to learn behavioral patterns involving sequential composition, concurrency, choice, and loops, as in process mining.
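As an illustration of how these four pattern types can be represented, a small process-tree-like structure in Python; the operator semantics follow the usual process-tree conventions, and the paper's exact pattern language is an assumption here.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Node:
    op: str            # "seq", "and" (concurrency), "xor" (choice), "loop", or "act"
    children: tuple = ()
    label: str = ""    # only used when op == "act"

# a, then a choice between b and c, then d repeated one or more times:
pattern = Node("seq", (
    Node("act", label="a"),
    Node("xor", (Node("act", label="b"), Node("act", label="c"))),
    Node("loop", (Node("act", label="d"),)),
))
```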