Confiance.AI Project: deploying AI in the service of critical systems

Led by IRT SystemX, the Confiance.AI project was completed in September 2024 after four years of work. Focused on the question of trust in AI-based systems, it fostered fruitful collaborations, notably between IRT Saint Exupéry and IRT SystemX. The research addressed two key conditions for the use of AI in embedded systems: ensuring trust in these systems and enabling their deployment on embedded processors.

On the verification and validation side, the project adopted the Assurance Cases method to provide engineers with concepts and tools to formalize the reasoning that demonstrates compliance with the system’s expected properties.

Already used in several industrial domains, this method relies on the systematic decomposition of properties to make them easier to demonstrate. It is particularly relevant in the context of AI, where trust cannot rest solely on practices validated by usage and standards. Demonstrating certain lower-level properties, such as explainability and robustness, requires state-of-the-art methods, some of which were developed at IRT Saint Exupéry. The research teams implemented the method in the model-based engineering tool CAPELLA, enabling a tight and formal integration of the design, verification, and validation phases.
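The decomposition principle behind an assurance case can be illustrated as a tree of claims, where a top-level property holds only if its sub-claims are each backed by evidence. The following is a minimal sketch; the class names, claim statements, and evidence items are illustrative assumptions, not the project's actual model.

```python
from dataclasses import dataclass, field

@dataclass
class Claim:
    """One node of an assurance case: a claim backed by evidence or sub-claims."""
    statement: str
    evidence: list[str] = field(default_factory=list)    # direct evidence items
    subclaims: list["Claim"] = field(default_factory=list)

    def is_supported(self) -> bool:
        # A claim is supported if it carries direct evidence,
        # or if it decomposes into sub-claims that are all supported.
        if self.evidence:
            return True
        return bool(self.subclaims) and all(c.is_supported() for c in self.subclaims)

# Hypothetical decomposition of a top-level robustness property.
top = Claim(
    "The perception model is robust to input perturbations",
    subclaims=[
        Claim("Local robustness verified on the test corpus",
              evidence=["formal verification report"]),
        Claim("Out-of-distribution inputs are detected at runtime",
              evidence=["OOD monitor evaluation results"]),
    ],
)
print(top.is_supported())  # True: every leaf claim carries evidence
```

Each leaf claim is small enough to be demonstrated by a single state-of-the-art method, which is what makes the decomposition tractable.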

Regarding deployment on embedded processors, the work focused on synthesizing requirements related to size, weight, power consumption, response times, reliability, certification, and ultimately, cost. Identifying the hardware and software configuration that meets these constraints is a complex problem, typically addressed by combining feedback from past projects, analysis, and above all, experimental exploration.
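The configuration-selection step described above can be sketched as filtering candidate targets against a requirement envelope and ranking the survivors by cost. All names and figures below are invented for illustration; a real exploration would also weigh reliability and certification evidence, not just numeric limits.

```python
# Hypothetical candidate embedded targets; the figures are illustrative only.
candidates = [
    {"name": "SoC-A",  "power_w": 12.0, "weight_g": 90, "latency_ms": 18, "cost": 450},
    {"name": "SoC-B",  "power_w": 6.5,  "weight_g": 60, "latency_ms": 16, "cost": 220},
    {"name": "FPGA-C", "power_w": 9.0,  "weight_g": 75, "latency_ms": 12, "cost": 800},
]

# Requirement envelope synthesized from the system analysis (assumed values).
limits = {"power_w": 10.0, "weight_g": 100, "latency_ms": 20}

def feasible(config: dict) -> bool:
    # A configuration is feasible only if it meets every constraint.
    return all(config[key] <= bound for key, bound in limits.items())

# Keep configurations satisfying all constraints, then rank by cost.
shortlist = sorted(filter(feasible, candidates), key=lambda c: c["cost"])
print([c["name"] for c in shortlist])  # ['SoC-B', 'FPGA-C']
```

In practice this analytical pass only narrows the field; the final choice still rests on experimental exploration of the shortlisted configurations.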
