Cybersecurity in the AI Era

                                               

Dr. Anita Aghaie                                                                Dr. Fabienne Bruendl
Siemens                                                                             Siemens
                                     

Siemens AG, Foundational Technologies, Cybersecurity and Trust department, Munich, Germany

Artificial Intelligence (AI) fundamentally reshapes cybersecurity by enhancing defensive capabilities while introducing new attack vectors. This article explores the dual role of AI through a three-dimensional framework: cybersecurity for AI, AI for cybersecurity, and AI against cybersecurity. It highlights key technical challenges and calls for a proactive, interdisciplinary approach to embed secure AI practices across the lifecycle, thereby positioning cybersecurity as a strategic enabler.

Introduction

AI has rapidly become a core pillar of digital business and is driving a fundamental shift in cybersecurity approaches across industries. As AI models and agents become more capable, they introduce both defensive opportunities and novel attack surfaces. Summarizing the central theme of the 2025 RSA conference, McKinsey succinctly captured this duality:

“AI is the greatest threat – and defence – in cybersecurity today.” [1]

For stakeholders across IT/OT, the question is no longer whether or how to ‘adopt AI’, but how to adopt it securely. The cybersecurity landscape is being reshaped not only by how AI is used, but also by how it must be protected and how it can be used against cybersecurity itself. Understanding and addressing the challenges of these three interrelated dimensions is essential for securing the digital infrastructure of modern industry and telecommunication.

The weakest link might be the smartest one: Cybersecurity for AI

In Europe, Article 15 of the EU AI Act mandates high-risk AI systems to ensure accuracy, robustness, and cybersecurity [2] , which in practice requires implementing technical safeguards, resilience strategies, and auditable processes. While this applies to high-risk AI systems, these principles are essential for all AI systems, covering the entire AI lifecycle.

A three-pillar framework for securing AI systems can be derived from the principles outlined in the EU AI Act, IEEE research on data integrity [3] , and operational threat models such as the OWASP Top 10 for Large Language Models (LLMs) [4] .

The first pillar is cryptographic technologies, such as secure multi-party computation (MPC) for collaborative analytics on sensitive data and fully homomorphic encryption (FHE) for privacy-preserving inference on encrypted inputs, along with complementary approaches like trustworthy execution platforms and federated learning (FL) with secure aggregation to protect model updates.

The second pillar is secure engineering practices, guided by standards like ISO/IEC 42001 [5] , the NIST AI Risk Management Framework (AI RMF) Generative AI (GenAI) Profile (NIST AI 600-1) [6] , and the Secure Software Development Framework (SSDF) for GenAI (NIST SP 800-218A) [7] .

Operational safeguards form the third pillar. Models should be treated as executable content, using sandboxing and provenance tracking. Continuous testing should be an integral part of operations to detect jailbreaks, model drift, and data exfiltration. These safeguards also address the OWASP Top 10 for LLM applications, including prompt injection, data poisoning, and supply-chain vulnerabilities.

As European standards, e.g., CEN-CENELEC JTC 21 [8] , evolve, organizations should align with existing frameworks to turn regulatory intent into concrete, testable controls for IT/OT environments.

AI for Cybersecurity: AI joins the blue team

AI is increasingly supporting defenders throughout the cyber defence lifecycle. Offensive security teams are now leveraging AI to simulate adversarial behaviour, e.g., crafting evasive malware, generating adversarial examples to bypass detection systems, and mimicking insider threats. LLM-enabled agents assist with automated reconnaissance and log analysis, while deep-learning models enhance hardware security through anomaly detection and side-channel evaluation.

Yet, deploying AI in critical environments demands strong safeguards, as mentioned above.


Illustration: Three dimensions of cybersecurity in the AI era

AI against Cybersecurity: The rise of autonomous offense

Attackers are industrializing AI. Deepfake-enabled social engineering has already driven large-scale fraud, while LLM-specific risks, e.g., prompt-injection, tool-abuse, and model supply-chain attacks are escalating. Techniques like ‘package hallucination’ and slop squatting exploit hallucinated dependencies to trick developers into importing malicious code [9] .

AI tools are also transforming penetration testing by automating reconnaissance, vulnerability scanning, and exploit generation. A fully autonomous AI-driven pentesting tool has even reached the top spot on HackerOne’s US leaderboard [10] , demonstrating the efficiency of AI-based offensive systems.

Conclusion

As AI reshapes cybersecurity, its dual role as defence tool and attack vector demands a fundamental shift in securing digital infrastructure, starting with the AI systems themselves. Cryptographic innovation, secure engineering, and operational safeguards must become core practices, with regulations like the EU AI Act serving as a starting point. Yet technical and operational challenges remain. Privacy-preserving technologies such as FHE are still computationally expensive and difficult to scale, depending on the use case. Hallucination in generative or autonomous models poses significant operational risk, requiring built-in verification and containment strategies to prevent unintended or unsafe behaviour.

Securing AI is not just about protecting algorithms, but also about reinforcing the trustworthiness and resilience of digital systems. As adversaries increasingly use AI against cybersecurity, defenders must stay ahead through continuous innovation, rigorous engineering, and cross-disciplinary collaboration. The organizations that succeed will be those that embed secure AI practices deeply into their operations, transforming cybersecurity from a reactive necessity into a strategic enabler.

References

[1] Charlie Lewis, Ida Kristensen, Jeffrey Caso, Julian Fuchs, “AI is the greatest threat—and defense—in cybersecurity today. Here’s why”, McKinsey Blog, May 15, 2025, https://www.mckinsey.com/about-us/new-at-mckinsey-blog/ai-is-the-greatest-threat-and-defense-in-cybersecurity-today

[2] Regulation (EU) 2024/1689 (EU AI Act), Official Journal of EU, https://eur-lex.europa.eu/eli/reg/2024/1689/oj/eng

[3] Davi Ottenheimer and Bruce Schneier, “The AI agents of tomorrow need data integrity,” IEEE Spectrum, August 18, 2025, https://spectrum.ieee.org/data-integrity

[4] OWASP Top 10 for LLM Applications, https://owasp.org/www-project-top-10-for-large-language-model-applications/

[5] ISO/IEC 42001:2023 – AI Management System Standard (ISO), https://www.iso.org/standard/42001

[6] NIST AI 600-1, AI Risk Management Framework: Generative AI Profile, https://www.nist.gov/itl/ai-risk-management-framework

[7] NIST SP 800-218A, Secure Software Development Practices for Generative AI (July 2024), https://csrc.nist.gov/pubs/sp/800/218/a/ipd

[8] CEN-CENELEC JTC 21 Artificial Intelligence – committee overview, https://www.cencenelec.eu/areas-of-work/cen-cenelec-topics/artificial-intelligence/

[9] Sean Park, Trend Micro, “Slopsquatting: When AI Agents Hallucinate Malicious Packages,” June 5, 2025, https://www.trendmicro.com/vinfo/us/security/news/cybercrime-and-digital-threats/slopsquatting-when-ai-agents-hallucinate-malicious-packages

[10] Nico Waisman, “The road to Top 1: How XBOW did it,” XBOW Blog, June 24, 2025, https://xbow.com/blog/top-1-how-xbow-did-it