The article “Cybersecurity in the AI era” presents the dual role of AI through a three-dimensional framework: cybersecurity for AI, AI for cybersecurity, and AI against cybersecurity. It highlights key technical challenges and calls for a proactive, interdisciplinary approach to embed secure AI practices across the lifecycle, thereby positioning cybersecurity as a strategic enabler.
EU Cybersecurity Framework
The article “EU Cybersecurity Framework” presents how the EU is building a unified defence through initiatives such as the EU Cybersecurity Act, the Cyber Resilience Act (CRA), and the NIS2 Directive — supported by key institutions.
The day the Internet died
“A bit beyond” explores whether AI is quietly undermining its own future in the Internet era. As generative AI increasingly trains on AI-created content, researchers warn of “model collapse,” where systems lose diversity, accuracy, and coherence.
The 6G Industry Association updates its vision for 6G
This article presents the European perspective, represented by the 6G Smart Networks and Services Industry Association (6G-IA), on Europe’s proactive role in 6G Research and Development (R&D) and standardisation, addressing societal, environmental, economic and market challenges.
Editorial
Dear readers,
As Europe accelerates its digital and AI ambitions, cybersecurity stands as both an enabler and a challenge. Artificial intelligence is transforming the digital landscape at an exponential pace. It is redefining how we create, communicate, and secure information, while simultaneously expanding our vulnerability. As algorithms grow more capable, so do the threats that exploit them. Cybersecurity in the AI era is no longer a question of defence alone, but of foresight, adaptability, and trust. This new frontier calls for coordinated action: bringing technology enthusiasts, researchers, policymakers, and industry leaders together around a shared vision of technological sovereignty and trust. In the AI era, security cannot be an afterthought; it must be the cornerstone of innovation and digital governance.
The KENNEDY Perspective offers a thought-provoking reflection on a time when innovation and risk evolve hand in hand. Drawing inspiration from the classic “ambulance in the valley”, the article challenges us to rethink whether we are investing enough in prevention rather than in repairing the consequences. The author also includes Scott Adams’ Six Filters of Truth, which help us reflect on what is true and what is false; read the article and share your thoughts!
The invited article “Cybersecurity in the AI era” presents the dual role of AI through a three-dimensional framework: cybersecurity for AI, AI for cybersecurity, and AI against cybersecurity. It highlights key technical challenges and calls for a proactive, interdisciplinary approach to embed secure AI practices across the lifecycle, thereby positioning cybersecurity as a strategic enabler.
Gain insights into the evolving Digital Partnership on Cybersecurity in the Indo-Pacific region through the article “INPACE – Emerging Cybersecurity Architecture of Digital Partnership Countries.” The piece highlights how India, Japan, South Korea, and Singapore are shaping comprehensive cybersecurity frameworks to safeguard critical information infrastructure, strengthen public–private collaboration, and advance cyber resilience. Together, these nations are building a robust digital foundation — one that blends policy, regulation, and innovation to address the complex challenges of the connected world.
With Artificial Intelligence reshaping the digital world, the foundations of cybersecurity must evolve. At the core lies the secure boot process, the mechanism that ensures only trusted software runs on a device, protecting systems from tampering and unauthorized access. The Eurescom-led, EU-funded FORTRESS project is developing a hybrid secure boot architecture that combines classical and post-quantum cryptography. By reinforcing digital trust at the root, FORTRESS contributes to a new generation of AI-ready cybersecurity, where resilience begins not in reaction to threats, but in the very design of secure, intelligent systems.
As cyber threats grow in scale and sophistication, the European Union is advancing a framework to ensure that trust, resilience, and security remain at the core of its digital transformation. The article “EU Cybersecurity Framework” presents how the EU is building a unified defence through initiatives such as the EU Cybersecurity Act, the Cyber Resilience Act (CRA), and the NIS2 Directive, supported by key institutions. These efforts form the backbone of Europe’s mission to safeguard its digital future and empower citizens, businesses, and public institutions to thrive securely in the digital age.
In the article “A bit beyond”, the author explores whether AI is quietly undermining its own future in the Internet era. As generative AI increasingly trains on AI-created content, researchers warn of “model collapse,” where systems lose diversity, accuracy, and coherence. With synthetic data now saturating the web and ad-based revenue models under strain, both AI reliability and the digital economy face growing uncertainty. The article calls for authentic, human-generated content and transparent data practices to sustain trust, creativity, and truth in the AI-driven Internet.
This edition of Eurescom’s Message continues our mission to share insights and perspectives that shape the future of connectivity.
We warmly invite your feedback and ideas for upcoming issues. Write to us and let us know which topics you’d like us to explore next. Your input helps us make each edition more relevant, inspiring, and impactful.
Enjoy reading!
Pooja Mohnani
Editor-in-chief
Mobility management, cybersecurity and explainable AI for airspace communications in 6G-SKY
The essential role of information and communication systems in Smart Grids is presented in the context of the SNS JU lighthouse project SUSTAIN-6G, which integrates sustainability into the development of 6G communication technologies, aiming to align technological advancement with environmental goals by assessing the requirements of various stakeholders and domains.
AI Techniques for the 6G AI-AI
Discover how the CENTRIC project embeds sustainability in the AI-native Air-Interface and designs systems for 6G networks, revolutionizing wireless communication through user-centricity.
Editorial
Dear readers,
The evolution of Artificial Intelligence (AI) traces back decades, from its conceptualisation in the halls of academia to its use in real-world scenarios. In this ever-evolving technological landscape, the transformative influence of artificial intelligence across various sectors has a profound and disruptive impact on us. As we learn something new every day, the evolution and revolution of AI appears to be an ongoing process, marked by continuous innovation and exploration.
Eurescom brings more than three decades of experience in managing multinational collaborative R&D projects, programmes, and initiatives in the ICT sector. In the context of EU research funding frameworks, it recognises the need to align technology development with a value-based consideration and prioritization of different economic and social outcomes in the development of 6G networks. Hence, in the first article of the cover theme, “The Role of AI/ML in Key Value Indicator Analysis”, we propose value-driven AI technology development as one of the important drivers of innovative 6G technology.
This issue is an innovative collection of inside views of selected EU research activities from the plethora of projects that reflect the diverse R&D taking place at Eurescom. The article from the PAROMA-MED project, “Empowering Collaborative Intelligence: The Federated learning approach”, presents how researchers are pushing the boundaries of AI capabilities and tackling new challenges, promising advancements in health applications.
As we stand on the verge of a new era, we recognize not only the revolutionary potential of AI but also its evolutionary journey. The CENTRIC project contributes “AI Techniques for the 6G AI-AI” and develops a sustainable AI-native Air-Interface for 6G networks, utilizing advances in machine learning (ML) to enable the development and discovery of efficient waveforms, custom modulations, and transceivers for the physical layer, as well as customized lightweight communication protocols and sustainable radio resource management.
Whilst the AI and telecommunications landscape explores new dimensions, in the article “AI/ML in Telecommunications Networks” the authors delve into the multifaceted role of AI/ML in shaping the future of telecommunication networks and provide recommendations concerning the future availability of large data sets, which are necessary for training and benchmarking algorithms.
As AI continues to evolve, so does our approach to its development and deployment. The article “Innovations using AI/ML in project 6G-BRAINS” aims at seamless and efficient wireless connectivity, presenting innovative approaches to network resource management and spectrum utilization using cutting-edge AI technologies.
The edition includes a very interesting, retrospective article, the KENNEDY Perspective on “AI or not AI”. Will AI help us to succeed? How will AI help me in real life? We still juggle with whether to trust these systems, and to what extent!
This issue further covers a variety of articles on different ICT-related activities. Under “Events”, we report on the participation of Eurescom projects at MWC Barcelona 2024, the world’s largest telecoms event, where technology, community and commerce converge. In our “News in brief” section, we give an update on what’s new at Eurescom, with a short overview of newly started projects.
Finally, in the latest “A bit beyond”, we engage with the crucial AI Act, which was recently passed in the European Parliament and awaits reading in the EU Council.
Together with my editorial colleagues I believe that you will find value in this edition of Eurescom’s message, and we would appreciate your comments on the current issue as well as suggestions for future issues.
Enjoy reading our magazine!
Pooja Mohnani
Editor-in-chief
AI Act – a conscious choice

What is AI and why is it important?
AI is the ability of a machine to display human-like capabilities such as reasoning, learning, planning and creativity [1]. It enables systems to perceive their environment, solve problems and act to achieve a specific goal. These systems gather data, for example through attached sensors, process this data and respond. They are capable of optimizing and thus adapting their behaviour by learning, and can work autonomously.
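The sense-process-act-learn loop described above can be made concrete with a minimal sketch. The code below is a hypothetical toy, not a real AI system: the class, its threshold parameter and the simulated sensor are assumptions introduced purely to illustrate how a system can gather data, respond and adapt its behaviour over time.

```python
# Toy illustration of an AI-style sense-process-act loop with simple adaptation.
import random

class SimpleAgent:
    def __init__(self) -> None:
        self.threshold = 0.5  # internal parameter the agent adapts as it learns

    def sense(self) -> float:
        """Gather data from a (simulated) attached sensor."""
        return random.random()

    def decide(self, reading: float) -> bool:
        """Process the sensor data and decide whether to respond."""
        return reading > self.threshold

    def learn(self, reading: float) -> None:
        """Adapt behaviour: track the typical reading with an exponential moving average."""
        self.threshold += 0.1 * (reading - self.threshold)

agent = SimpleAgent()
for step in range(5):  # the agent runs autonomously in a loop
    reading = agent.sense()
    responded = agent.decide(reading)
    agent.learn(reading)
    print(f"step={step} reading={reading:.2f} responded={responded} "
          f"threshold={agent.threshold:.2f}")
```

Real AI systems replace the toy threshold with learned models, but the loop of perceiving, processing, acting and adapting is the same.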
Whilst some AI technologies have been around for more than 50 years, recent advances in computing power, together with the availability of enormous amounts of data and new algorithms, have led to major AI breakthroughs in recent years.
Artificial intelligence is already present, influences our everyday life and is digitally transforming our society; it has thus become an EU priority.
What is the AI Act?
To prepare Europe for an advanced digital age, the AI Act provides a comprehensive legal framework on AI, which addresses the risks of AI within Europe and sets the tone for upcoming AI regulations worldwide. The AI Act aims to provide AI developers and deployers with clear requirements and obligations regarding specific uses of AI [2].
The AI Act aims to “strengthen Europe’s position as a global hub of excellence in AI from the lab to the market, ensuring that AI in Europe respects set values and rules, and harnesses the potential of AI for industrial use.”
— European Parliament News
Why is it needed?
The AI Act is needed to make sure that AI systems used in the EU are safe, transparent, traceable, unbiased, trustworthy and environmentally friendly. AI systems should be overseen by people rather than by automation, and should thus foster inclusiveness.
The AI Act is part of a wider effort to foster the development of trustworthy AI in Europe, which also includes the Innovation Package and the Coordinated Plan on AI [3]. Together, these measures safeguard the health, safety and fundamental rights of people and provide legal certainty to businesses across Europe. Overall, these initiatives would strengthen the EU’s AI talent pool through education, training, skilling and reskilling activities.
Whilst existing legislation provides some protection, it is insufficient to address the specific challenges that AI systems raise. The proposed rules will:
- address risks created by AI applications/services;
- prohibit AI practices that pose unacceptable risks;
- determine a list of high-risk applications and set clear requirements for such applications;
- define specific obligations for deployers and providers of high-risk AI applications;
- require assessment before a given AI system is put into service or placed on the market;
- overall, establish a governance structure.
To whom does the AI Act apply?
This legal framework is applicable to both public and private actors inside and outside the EU as long as the AI system affects people located in the EU [4].
It concerns both the developers of such systems and the deployers of high-risk AI systems. Importers of AI systems must also ensure that the foreign provider has carried out the appropriate conformity assessment procedure, and that the system bears a European Conformity (CE) marking and is accompanied by the required documentation and instructions for use.
In addition, certain obligations are foreseen for providers of general-purpose AI models, including large generative AI models. Providers of free and open-source models are exempted from most of these obligations; however, the exemption does not cover obligations for providers of general-purpose AI models with systemic risks.
Research, development and prototyping activities preceding release on the market are exempted. Furthermore, the regulation does not apply to AI systems used exclusively for military, defence or national security purposes, regardless of the type of entity carrying out those activities [4].
What happens if you don’t comply?
AI systems that do not respect the requirements of the Regulation will attract penalties, including administrative fines; Member States lay down these penalties for infringements and communicate them to the Commission [4].
The Regulation sets out fine thresholds that need to be taken into account (a simple illustration of how these ceilings apply follows the list):
- Up to €35m or 7% of the total worldwide annual turnover of the preceding financial year (whichever is higher) for infringements on prohibited practices or non-compliance related to requirements on data;
- Up to €15m or 3% of the total worldwide annual turnover of the preceding financial year for non-compliance with any of the other requirements or obligations of the Regulation, including infringement of the rules on general-purpose AI models;
- Up to €7.5m or 1.5% of the total worldwide annual turnover of the preceding financial year for the supply of incorrect, incomplete or misleading information to notified bodies and national competent authorities in reply to a request.
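To see how these ceilings interact with a company's turnover, here is a minimal sketch in Python. It is purely illustrative and not part of the Regulation: the function name, the tier labels and the uniform application of the “whichever is higher” rule (stated explicitly above only for the first tier) are assumptions made for the example.

```python
# Hypothetical illustration of the fine ceilings listed above; not legal guidance.

def max_fine_eur(tier: str, annual_turnover_eur: float) -> float:
    """Return the upper bound of an administrative fine for a given infringement tier."""
    tiers = {
        "prohibited_practices":   (35_000_000, 0.07),    # up to €35m or 7% of turnover
        "other_obligations":      (15_000_000, 0.03),    # up to €15m or 3% of turnover
        "misleading_information": (7_500_000, 0.015),    # up to €7.5m or 1.5% of turnover
    }
    fixed_cap, turnover_share = tiers[tier]
    # "Whichever is higher" is applied uniformly here purely for illustration.
    return max(fixed_cap, turnover_share * annual_turnover_eur)

# Example: a company with €2 billion worldwide annual turnover engaging in a
# prohibited practice could face a fine of up to €140 million (7% of €2bn > €35m).
print(max_fine_eur("prohibited_practices", 2_000_000_000))
```

The actual penalties are laid down by Member States within these ceilings, so the sketch only illustrates the upper bounds themselves.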
To conclude, for the ICT industry the AI Act aims to foster innovation while ensuring that AI technologies are developed and deployed responsibly, ethically, and in the best interests of society. It provides assurance to users and guidance to stakeholders, contributing overall to the sustainable growth and adoption of AI technologies.
Further information
[2] https://digital-strategy.ec.europa.eu/en/policies/regulatory-framework-ai
[3] https://ec.europa.eu/commission/presscorner/detail/en/ip_24_383
[4] https://ec.europa.eu/commission/presscorner/detail/en/QANDA_21_1683
