2026

  1. Automated Self-Explanation of Expected versus Perceived Behavior for Interacting Digital Systems
    Mohammad Alkhiyami, Gianluca Martino, and Goerschwin Fey
    In Design, Automation and Test in Europe Conference (DATE), 2026

    Modern interacting digital systems are becoming increasingly complex, making it difficult to ensure that their actual behavior aligns with design-time expectations, particularly in uncertain or dynamic environments, even when specifications are correct. This misalignment degrades system scalability and reliability and increases maintenance costs. We introduce a conceptual framework for identifying and self-explaining mismatches between expected and observed system behavior, an algorithm that generates such explanations, and case studies that apply the framework to explanation generation in an interacting-digital-systems setting.

  2. Configurable Abstraction of Signals Using Signal Temporal Logic
    Ulrike Engeln and Sibylle Schupp
    In Software Engineering and Advanced Applications, 2026

    Abstractions of signals help to avoid cognitive and storage overload, especially in systems with many signals such as cyber-physical systems (CPS). For assessing trends and reconstructing behavior, it often suffices to have a rough understanding of how signals evolve, e.g., whether their behavior is monotonic or periodic. This work provides a configurable abstraction of signal behavior, in which signals are described by a sequence of oscillation and linear patterns. We formalize templates of both behaviors in parameterized signal temporal logic (PSTL) and provide an algorithm that abstracts signals in terms of those patterns. The templates are configurable so that they allow for choosing the level of abstraction, e.g., by limiting the approximation error or the minimal oscillation frequency. For segmentation of the signal, we solve an optimization problem using a modified version of TeLEx. The evaluation demonstrates that configuring the patterns is suitable for defining the level of abstraction. Further, experiments on control output from the ARCH wind-turbine benchmark and on flow data from medical ventilation demonstrate that the abstraction method can be applied to real-world signals.

2025

  1. Detecting Redundant Preconditions
    Nicola Thoben and Heike Wehrheim
    In 13th IEEE/ACM International Conference on Formal Methods in Software Engineering, FormaliSE@ICSE 2025, Ottawa, ON, Canada, April 27-28, 2025, 2025
  2. Explaining Mismatches in Expected versus Perceived Behavior for Interacting Digital Systems
    Mohammad Alkhiyami, Gianluca Martino, and Goerschwin Fey
    In Explanations with Constraints and Satisfiability (ExCoS), 2025
    @inproceedings{AMF:2025,
      language = {USenglish},
      author = {Alkhiyami, Mohammad and Martino, Gianluca and Fey, Goerschwin},
      title = {Explaining Mismatches in Expected versus Perceived Behavior for Interacting Digital Systems},
      booktitle = {Explanations with Constraints and Satisfiability (ExCoS)},
      year = {2025}
    }
  3. Explanation in Bio-inspired Computing: Towards Understanding of AI Systems
    Rolf Drechsler, Christina Plump, and Bernhard J. Berger
    In 1st International Conference on Artificial Intelligence for Computing, Astronomy and Renewable Energy, 2025

    Artificial intelligence methods and applications have recently seen a massive surge, partially caused by the success of neural networks in areas like image classification and LLMs for generating near-perfect natural-language texts. Unnoticed by the public, but highly important for many AI methods to function, bio-inspired optimisation techniques have also seen rising usage. However, the more complex these techniques become, the more their explainability decreases. Even developers of neural networks can seldom state why a neural network’s results are what they are. The explainability of AI methods, as well as of systems in general, is, however, essential for safety and security reasons and for gaining and maintaining trust with system users. While research in explainability has therefore gained significant traction for prominent AI methods such as neural networks, bio-inspired optimisation techniques have seen less research in this regard. The difficulty of explaining these algorithms lies in their use of populations and randomness. We present an approach to track individuals in bio-inspired optimisation techniques, aiming to improve our understanding of the quality of results from such optimisation algorithms. To that end, we introduce a data model, include this model in the standard implementations of these approaches, and provide a visualisation that allows for understanding the relational information of these individuals, yielding more insight into these optimisation techniques and providing a first step toward improved explainability.