With AI-enhanced components being deployed everywhere, including in the very toolchains used for secure software development, the traditional security focus on software and hardware assets can no longer guarantee the "secure services, processes and products, [and] digital infrastructures" called for in the EU Strategic Plan 2021-2024.
Sec4AI4Sec aims to develop security-by-design testing and assurance techniques for AI-augmented systems and their software and AI assets. AI-augmented systems offer an opportunity to democratize security expertise and provide access to intelligent, automated secure coding and testing by enabling novel capabilities, lowering development costs and increasing software quality (AI4Sec). They also pose a risk: AI-augmented systems are vulnerable to new technical security threats specific to AI-based software, in particular where fairness or explainability matters (Sec4AI). Sec4AI4Sec addresses both facets of this challenge: "AI for better security, security for better AI."
The Sec4AI4Sec project will address these two facets of AI to achieve a deep scientific, economic and technological impact, while helping to address key societal issues. It will validate its approach on three key scenarios of the EU Digital Compass towards Digital Sovereignty: 5G core virtualization; autonomy for safety systems in aviation and security; and quality for third-party software assessment and certification.
Sec4AI4Sec has assembled a team of five leading universities (Amsterdam, Cagliari, Hamburg, Lugano, Trento), two innovative SMEs (FrontEndART, Pluribus One), three large enterprises (Airbus, SAP, Thales) and one centre for digital innovation (Cefriel). The project will generate a set of innovative techniques and open-source tools, new methodologies for the secure design and certification of AI-augmented systems, and reference benchmarks that can be used to standardize the assessment of research results in the secure software research community.