Safety


This demo site has moved to www.tecosa.center.kth.se; please go there for further updates.



Challenges and goals

Edge computing opens an opportunity for real-time processing of large amounts of data and real-time decision making in a wide range of systems, including multi-agent and human-in-the-loop systems. Typical edge computing applications are associated with a high degree of uncertainty, not only because of the complexity of the application scenario itself, but also because machine learning algorithms (which are well suited for implementation at the edge) do not come with performance guarantees.

Applying worst-case design principles in such settings is not a viable approach. We need:

  • new techniques to analyze safety in systems that use machine learning as one of their key computational concepts
  • new safety architectures, safety monitors, and risk and reliability models to enhance safety at runtime
  • new techniques to enhance safety in human-in-the-loop systems

Furthermore, we need to deploy safety assurance models to evaluate the overall safety achieved through the three above-mentioned approaches.

Tasks and Methodologies

The project will focus on three tightly coupled objectives.

First objective

The first objective of the project focuses on robustifying machine learning algorithms. Domain-specific classification through supervised machine learning makes it possible to detect features more reliably and to reduce the likelihood of false positives and negatives. We will develop novel methods and domain-specific modeling languages that allow engineers to express probabilistic models declaratively: to state what the model means, without specifying how it will be checked or executed. We will also develop falsification schemes that leverage probabilistic model checking methodology to study the robustness of safety-critical edge-based systems and applications employing machine learning.
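To illustrate the declarative idea (this is a hypothetical Python sketch, not the project's modeling language), a probabilistic model can be stated as plain data describing what it means, and then be given to independent interpreters that decide how it is executed, here exact enumeration versus sampling:

```python
import random

# Hypothetical sketch: the model is declared as data -- a prior over states
# and noisy observation likelihoods -- with no commitment to how inference runs.
model = {
    "prior":      {"obstacle": 0.1, "clear": 0.9},
    "likelihood": {  # P(sensor reading | true state)
        "obstacle": {"detect": 0.95, "no_detect": 0.05},
        "clear":    {"detect": 0.10, "no_detect": 0.90},
    },
}

def posterior_exact(model, observation):
    """One interpreter: exact Bayesian inference by enumeration."""
    joint = {s: model["prior"][s] * model["likelihood"][s][observation]
             for s in model["prior"]}
    z = sum(joint.values())
    return {s: p / z for s, p in joint.items()}

def posterior_sampled(model, observation, n=100_000, seed=0):
    """Another interpreter for the same declaration: rejection sampling."""
    rng = random.Random(seed)
    counts = {s: 0 for s in model["prior"]}
    accepted = 0
    for _ in range(n):
        s = rng.choices(list(model["prior"]),
                        weights=list(model["prior"].values()))[0]
        o = rng.choices(list(model["likelihood"][s]),
                        weights=list(model["likelihood"][s].values()))[0]
        if o == observation:
            counts[s] += 1
            accepted += 1
    return {s: c / accepted for s, c in counts.items()}

exact = posterior_exact(model, "detect")
approx = posterior_sampled(model, "detect")
```

Because the model is just a declaration, the two interpreters are interchangeable; both estimate the same posterior over the true state given a "detect" reading.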

Furthermore, we will focus on reinforcement learning, one of the most promising approaches to decision making under uncertainty, in a safety-critical context. We intend to overcome the challenge of ensuring safe exploration in the physical world by utilizing correct-by-construction synthesis methods based on probabilistic model checking to suppress decisions that could lead to unsafe states. The research challenges include both theoretical aspects (learning methods and the semantics of modeling languages) and practical aspects (efficient compilation and runtime systems).
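The idea of suppressing unsafe decisions is often called shielding. The following is a deliberately tiny sketch (a toy line-world, not the project's method): a shield is precomputed from a known transition model, and the learner explores only among shield-approved actions, so it can never enter the unsafe state even under fully random exploration:

```python
import random

# Toy setup (assumed for illustration): states 0..4 on a line, state 4 unsafe.
UNSAFE = {4}
ACTIONS = {"left": -1, "right": +1}

def step(state, action):
    return max(0, min(4, state + ACTIONS[action]))

# Shield: allow a (state, action) pair only if its successor is provably safe.
shield = {(s, a) for s in range(5) for a in ACTIONS
          if step(s, a) not in UNSAFE}

def safe_epsilon_greedy(q, state, eps, rng):
    """Epsilon-greedy action selection restricted to shield-approved actions."""
    allowed = [a for a in ACTIONS if (state, a) in shield]
    if rng.random() < eps:
        return rng.choice(allowed)
    return max(allowed, key=lambda a: q.get((state, a), 0.0))

rng = random.Random(0)
q = {}
state = 0
for _ in range(1000):  # even pure random exploration stays safe
    action = safe_epsilon_greedy(q, state, eps=1.0, rng=rng)
    nxt = step(state, action)
    assert nxt not in UNSAFE
    q[(state, action)] = q.get((state, action), 0.0) + 0.1  # placeholder update
    state = nxt
```

In the project's setting the shield would come from probabilistic model checking of a stochastic model rather than from an exhaustive check of a known deterministic one, but the runtime interface is the same: the learner proposes, the shield filters.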

Second objective

The second objective relates to schemes by which safety-critical applications based on machine learning can be externally monitored to introduce further safety-enhancing features. We will develop safety monitors for edge-based systems and applications that can reason about certain safety properties of the system and, if critical behavior is detected, potentially throttle the system down. Such monitoring and risk-reducing approaches must be accompanied by safety architectures that control the system's modes of operation, considering the edge and its context for proper error handling, including graceful degradation. Besides devising these mechanisms, we aim to experimentally validate their fault tolerance, e.g. using systematic fault injection. This objective will leverage stochastic risk models and their analysis, as well as methods for solving Markov models.
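A minimal sketch of such an external monitor (names, thresholds, and the braking model are illustrative assumptions, not the project's design): the ML-based controller is treated as a black box, and only its commanded speed is checked against the measured free distance ahead; when the implied stopping distance exceeds the free space, the command is throttled down rather than the system being shut off, i.e. graceful degradation:

```python
# Assumed worst-case braking capability and safety margin (illustrative values).
BRAKING_DECEL = 4.0  # m/s^2
MARGIN = 0.5         # m

def max_safe_speed(distance):
    # Stopping-distance condition: v^2 / (2*a) <= distance - margin,
    # hence v <= sqrt(2 * a * (distance - margin)).
    free = max(0.0, distance - MARGIN)
    return (2.0 * BRAKING_DECEL * free) ** 0.5

def monitor(commanded_speed, distance):
    """Pass the command through, or throttle it to a provably safe value."""
    limit = max_safe_speed(distance)
    if commanded_speed <= limit:
        return commanded_speed, "nominal"
    return limit, "degraded"  # throttle down instead of shutting down

# The learned controller is opaque; the monitor checks only its outputs.
speed, mode = monitor(commanded_speed=10.0, distance=3.0)
```

The monitor itself contains no machine learning, which is the point: its safety argument rests on a simple, analyzable physical model, independent of the controller it supervises.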

Third objective

Finally, the third objective comprises investigating the relationship between safety and system usability, in particular when the safety-critical context includes interaction with humans. We are interested in the relationship between providing safety-related feedback and the human perception of the system. Safe systems are not necessarily perceived as safe, and vice versa, depending on the type, form, and structure of the perceptual feedback provided. Understanding these trade-offs is vital to good system design.

In addition, with respect to human-robot collaboration, safety can be drastically increased by real-time detection of human actions and intention recognition. However, capturing multimodal signals from the human that can feed such representations is far from trivial. We are interested in the effectiveness of different approaches and in how they can increase safety and usability in edge systems and applications.


Contacts

Project manager

Jana Tumová
Formal methods, Artificial Intelligence

Co-project manager

Iolanda Leite
Human-machine interaction, Machine learning

Co-project manager

Martin Törngren
Systems & safety engineering, Embedded control systems

Co-project manager

David Broman
Programming models, Security SW Eng., Machine learning


Partners: Atlas Copco, Ericsson, Syntell, Einride, Safety Integrity, Elekta, Synective Labs