Modern interacting digital systems are becoming increasingly complex, making it difficult to ensure that their actual behavior aligns with design-time expectations, particularly in uncertain or dynamic environments, even when specifications are correct. This misalignment degrades system scalability and reliability and increases maintenance costs. We introduce a conceptual framework for identifying and self-explaining mismatches between expected and observed system behavior, together with an algorithm that generates such explanations, and case studies that apply the framework for explanation generation in an interacting digital systems setting.
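To make the general idea tangible, the following Python sketch illustrates one way mismatch explanations could be generated; it is not the paper's algorithm, and all names (Expectation, explain_mismatches, the example predicate and trace) are hypothetical. Expected behavior is modeled as named predicates over observed samples, and each violation is turned into an explanation naming the expectation, the time step, and the offending value.

# Minimal sketch of mismatch explanation (illustrative only; not the
# paper's algorithm). Expected behavior is given as named per-signal
# predicates; each violation yields one explanation string.

from dataclasses import dataclass
from typing import Callable

@dataclass
class Expectation:
    name: str                       # human-readable label of the expectation
    signal: str                     # which observed signal it constrains
    holds: Callable[[float], bool]  # predicate over a single sample

def explain_mismatches(expectations, trace):
    """Compare expected vs. observed behavior and generate explanations.

    trace: dict mapping signal name -> list of observed samples.
    Returns one explanation per violated expectation instance.
    """
    explanations = []
    for exp in expectations:
        for t, value in enumerate(trace.get(exp.signal, [])):
            if not exp.holds(value):
                explanations.append(
                    f"At step {t}, '{exp.signal}' = {value} violates "
                    f"expectation '{exp.name}'."
                )
    return explanations

# Hypothetical usage: a temperature signal expected to stay below 80.
expectations = [Expectation("temperature stays below 80", "temp",
                            lambda v: v < 80.0)]
trace = {"temp": [72.0, 79.5, 83.2, 76.0]}
for line in explain_mismatches(expectations, trace):
    print(line)  # -> At step 2, 'temp' = 83.2 violates expectation ...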
Abstractions of signals help to avoid cognitive and storage overload, especially in systems with many signals such as cyber-physical systems (CPS). For assessing trends and reconstructing behavior, it often suffices to have a rough understanding of how signals evolve, e.g., whether their behavior is monotonic or periodic. This work provides a configurable abstraction of signal behavior in which signals are described as a sequence of oscillation and linear patterns. We formalize templates of both behaviors in parameterized signal temporal logic (PSTL) and provide an algorithm that abstracts signals in terms of these patterns. The templates are configurable, allowing the level of abstraction to be chosen, e.g., by limiting the approximation error or the minimal oscillation frequency. For segmentation of the signal, we solve an optimization problem using a modified version of TeLEx. The evaluation demonstrates that configuring the patterns is a suitable way to set the level of abstraction. Further, experiments on control output from the ARCH wind turbine benchmark and on flow data from medical ventilation demonstrate that the abstraction method can be applied to real-world signals.
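As a rough illustration of the pattern idea, here is a minimal Python sketch; it uses fixed windows and least-squares fits rather than the paper's PSTL templates and TeLEx-based optimization, and the parameters eps and min_crossings are hypothetical stand-ins for the configurable abstraction level (approximation error and minimal oscillation frequency).

import numpy as np

def abstract_signal(x, window=50, eps=0.1, min_crossings=4):
    """Abstract a sampled signal into a sequence of coarse patterns.

    Sketch only: each fixed-size window is labeled 'linear' if a
    least-squares line approximates it within eps (max abs residual),
    'oscillation' if the residual crosses zero often enough, and
    'other' otherwise. eps and min_crossings act as the configurable
    level of abstraction.
    """
    patterns = []
    t = np.arange(window)
    for start in range(0, len(x) - window + 1, window):
        seg = np.asarray(x[start:start + window], dtype=float)
        slope, intercept = np.polyfit(t, seg, 1)   # best linear fit
        residual = seg - (slope * t + intercept)
        crossings = np.count_nonzero(np.diff(np.sign(residual)))
        if np.max(np.abs(residual)) <= eps:
            patterns.append(("linear", slope))
        elif crossings >= min_crossings:
            patterns.append(("oscillation", crossings / window))
        else:
            patterns.append(("other", None))
    return patterns

# Hypothetical usage: a ramp followed by a damped oscillation.
t = np.linspace(0, 10, 500)
signal = np.where(t < 5, 0.5 * t, 2.5 + np.sin(8 * t) * np.exp(-(t - 5)))
print(abstract_signal(signal, window=100, eps=0.05))

Tightening eps or raising min_crossings yields a coarser abstraction, mirroring how the PSTL template parameters control the level of detail.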
Neural networks (NNs) have great potential to improve the individualization of medicine, e.g., through the analysis of signals. However, they are generally not interpretable. Understanding NN decisions is crucial, especially in safety-critical domains such as medicine. This work presents a new method to provide local explanations for classifications of signals made by NNs. Our method extends the Sig-LIME explanation method from one-dimensional to multidimensional signals by introducing new perturbation techniques. We evaluate the proposed method on an NN that classifies the positive end-expiratory pressure (PEEP) applied by a ventilator. The evaluation shows that the generated explanations are plausible, stable, and concise.
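The Python sketch below shows the generic LIME-style recipe such methods build on; it is not Sig-LIME itself, and the block-masking perturbation, the toy classifier, and all parameter names are hypothetical. A multichannel signal is split into channel-by-segment blocks, random subsets of blocks are zeroed out, the black-box model is queried on each perturbation, and a proximity-weighted ridge surrogate assigns an importance to each block.

import numpy as np
from sklearn.linear_model import Ridge

def explain_multichannel(signal, predict_fn, n_segments=8, n_samples=200,
                         kernel_width=0.25, rng=None):
    """LIME-style local explanation for a multichannel signal (sketch).

    signal: array of shape (channels, length).
    predict_fn: black box returning a class score for one signal.
    Random channel x segment blocks are masked (zeroed); a weighted
    ridge surrogate maps block on/off masks to the model score, and
    its coefficients serve as per-block importances.
    """
    rng = rng or np.random.default_rng(0)
    channels, length = signal.shape
    seg_len = length // n_segments
    n_blocks = channels * n_segments

    masks = rng.integers(0, 2, size=(n_samples, n_blocks))
    masks[0] = 1  # include the unperturbed instance
    scores, weights = [], []
    for mask in masks:
        perturbed = signal.copy()
        for b in np.flatnonzero(mask == 0):
            c, s = divmod(b, n_segments)
            perturbed[c, s * seg_len:(s + 1) * seg_len] = 0.0
        scores.append(predict_fn(perturbed))
        # proximity weight: perturbations closer to the original count more
        distance = 1.0 - mask.mean()
        weights.append(np.exp(-(distance ** 2) / kernel_width ** 2))

    surrogate = Ridge(alpha=1.0)
    surrogate.fit(masks, scores, sample_weight=weights)
    return surrogate.coef_.reshape(channels, n_segments)

# Hypothetical usage with a toy "classifier" scoring the mean of channel 0.
sig = np.vstack([np.sin(np.linspace(0, 6, 160)),
                 np.random.default_rng(1).normal(size=160)])
importances = explain_multichannel(sig, lambda s: float(s[0].mean()))
print(np.round(importances, 3))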
@inproceedings{AMF:2025,
language = {USenglish},
author = {Alkhiyami, Mohammad and Martino, Gianluca and Fey, Goerschwin},
title = {Explaining Mismatches in Expected versus Perceived Behavior for Interacting Digital Systems},
booktitle = {Explanations with Constraints and Satisfiability (ExCoS)},
year = {2025}
}