Artificial Intelligence for 5G and Beyond – an overview



Dr. Maria Barros Weiss                                        Anastasius Gavras
Eurescom                                                                Eurescom

The advent of powerful ICT and the availability of large amounts of data have triggered an increased interest in the discipline of Artificial Intelligence (AI). The digital transformation of economy and society has brought data ecosystems to the core of many vertical industries. This has catapulted AI from a niche discipline to the forefront of recent ICT research trends, including network research.

In networks, usage and status monitoring data have been collected since the start of modern communications. The original motivations were to serve and charge the customer better and to identify malfunctions in the network. Over the years, many other motivations for network data collection were added, for example resource usage optimisation, prevention of fraud and misuse, and compliance with legal requirements, just to name a few. The requirements behind these motivations can be met by proper analysis of network data and by taking corresponding actions. However, the complexity of networks has increased dramatically, originally due to the introduction of different types of technologies and more recently due to virtualisation. At the same time, the digital transformation has increased the demands on the network, alongside its growing capacity and complexity. This rendered manual management infeasible: it became essentially impossible for humans to assess the state of the network and to intervene with control actions quickly enough to keep it operational.

Introduction of AI in networks

To cope with the increased traffic, automation processes have been introduced, which take actions based on pre-defined rules. Such actions have been structured according to the affected areas of concern, being Fault, Configuration, Accounting, Performance and Security, which form the FCAPS framework for network management defined by ISO and ITU-T.
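Rule-based automation of this kind can be sketched as a simple rule engine that maps observed metrics to pre-defined actions. The rules, metric names and actions below are purely illustrative assumptions, not taken from any standard:

```python
# Minimal sketch of rule-based network automation in the spirit of FCAPS.
# The rules, metric names and actions are invented for illustration.

RULES = [
    # (FCAPS area, condition on observed metrics, action to trigger)
    ("Fault",       lambda m: m["link_errors"] > 100,  "raise_alarm"),
    ("Performance", lambda m: m["utilisation"] > 0.9,  "reroute_traffic"),
    ("Security",    lambda m: m["failed_logins"] > 50, "block_source"),
]

def evaluate(metrics):
    """Return the actions whose pre-defined conditions match the metrics."""
    return [(area, action) for area, cond, action in RULES if cond(metrics)]

# Only the performance rule fires for this sample of metrics.
actions = evaluate({"link_errors": 3, "utilisation": 0.95, "failed_logins": 7})
```

The limitation the article goes on to describe is visible even in this toy: someone has to choose the thresholds, and their side effects in a complex network are hard to foresee.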

The increased network complexity, though, made it difficult for humans to define appropriate rules and to understand their effects. Furthermore, the observation that certain trends in network behaviour can be predicted, and actions taken in anticipation, led to the introduction of AI techniques in networks, and especially of Machine Learning (ML) as a subset of AI. The introduction of AI was facilitated by the large amount of data available to train AI models: the more data is available, the better the resulting models. The introduction of AI has to be justified economically, however, as there is a cost for transmitting, processing and storing the data, and for protecting the confidential and privacy-related information embedded in it.

Data ecosystems at the core of AI systems

Many market reports speculate about increased productivity and additional value-added services that increase revenues stimulated by AI, without being able to provide concrete figures. Speculation will be reduced once some uncertainties about AI, and certain concerns with a strong link to data, are resolved.

The fuel of AI is data. The quantity, diversity and type of data matter, and the data must be meaningful and trusted to be useful. Furthermore, a whole set of processing is needed to make the data useful and meaningful – the data ecosystem. Nowadays data ecosystems are an essential part of businesses using AI. Data ecosystems include a wide range of tasks: capturing, filtering, cleaning, and translating data into machine-processable form; analysing it, giving it a meaning, and inferring from that meaning; modelling, learning patterns and estimating behaviours; as well as applying data in processes, and storing and protecting data. Trust in the data is essential, because companies rely on the data to understand their customers and to make decisions. From the user perspective, there is another dimension where trust is also imperative: trust in the security of the systems. A number of areas still need further development for AI to be implemented and deployed efficiently in networks, and in particular to handle data. Among others, the main concerns are:

  • standardized interfaces for data and knowledge exchange
  • quality of data that is used for the algorithms training and modelling
  • sheer volume of data
  • business and personal trust in the data meaning, and data handling
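The chain of tasks listed above, from capture through cleaning to analysis, can be sketched as a minimal data pipeline. The record format, field names and quality rules are illustrative assumptions:

```python
# Minimal sketch of a data-ecosystem pipeline: capture, clean, analyse.
# Record fields and the quality rule are invented for illustration.

def capture():
    # A real network would read these from probes or performance counters.
    return [{"cell": "A", "load": 0.7}, {"cell": "B", "load": None},
            {"cell": "A", "load": 1.4}, {"cell": "C", "load": 0.2}]

def clean(records):
    # Drop records with missing or implausible values (load is a fraction),
    # reflecting the data-quality concern listed above.
    return [r for r in records
            if r["load"] is not None and 0.0 <= r["load"] <= 1.0]

def analyse(records):
    # Give the data a meaning: average load per cell.
    per_cell = {}
    for r in records:
        per_cell.setdefault(r["cell"], []).append(r["load"])
    return {cell: sum(v) / len(v) for cell, v in per_cell.items()}

summary = analyse(clean(capture()))
```

Even this toy shows why data quality matters: two of the four captured records are unusable and silently dropping them changes what the downstream model sees.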

Such concerns make data ecosystems grow in importance, side by side with artificial intelligence. They are not specific to the telecom industry but related to digitalisation, and they affect other sectors as well. This illustrates that networks can also serve AI applications. Beyond that, AI has increasingly been serving networks, as the following examples suggest.

AI in networks today

Many telecom operators are already applying AI across their operations, while others are still formulating their AI strategies. An Ericsson report published in 2019 indicates that more than 50% of telecom operators anticipate the introduction of AI techniques in 2020, while another 20% have a time horizon of three to five years [1].

Yet many telecom operators started trials and AI-based operations some time ago. Early adopters include Telefonica, AT&T, Deutsche Telekom and SK Telecom. The latter, a major Korean operator, is applying AI-based predictive analytics to improve network management and network optimisation. This is being accomplished through the evolution of its advanced next-generation operation support system to cover management automation within the fixed and mobile network domains. According to Park Jin-hyo, Head of the Network Technology R&D Center at SK Telecom, “The AI-assisted network operation technology based on big data analytics will be essential in the 5G era” [2]. Telefonica was in the news for using AI predictive analytics in its service operations centres to gain more insight into how its mobile networks are being used, to anticipate problems such as “silent churners”, and to identify new ways of improving user experience through a customer-interaction platform based on cognitive intelligence. AT&T has been investing in AI for a while now, notably in the service assurance area, in order to continuously predict failures and resolve network degradation via automation tools, and in the application of ML techniques to help prevent, detect and mitigate cyber-attacks. Deutsche Telekom reported the use of AI for cyber-defence as early as 2018 [3].

AI in future networks

Recent Horizon 2020 research projects in the 5G PPP programme, such as SELFNET, CogNet, and SliceNet have demonstrated numerous use cases in which AI/ML can help network optimisation, mitigate network performance problems, or protect the network against attacks. Newer projects in 5G PPP phase III are currently expanding the scope of application of AI/ML in networks.

The next example illustrates a very promising application of AI in future networks. Today, spectrum is allocated statically and devices are pre-configured to use a certain spectrum range. The goal is to dynamically find available frequencies for transmission that are optimised for specific use cases and to use those frequencies on demand. Beyond reactive allocation, prediction algorithms could anticipate which frequencies will be available in the future. More sophisticated possibilities are to collaboratively maximise wireless communication capacity within a specific area among a large number of communicating entities, or to minimise the collective energy expenditure for a given transmission capacity envelope.
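The reactive part of this idea can be sketched as a device that keeps a short sensing history per channel and picks the one with the lowest recent occupancy. The channel names, history length and occupancy samples are invented for the example; a real system would add the predictive and collaborative elements described above:

```python
# Toy sketch of dynamic spectrum selection: choose the channel with the
# lowest average recent occupancy. Channels and samples are invented.

from collections import deque

HISTORY_LEN = 5  # number of recent sensing samples kept per channel

class SpectrumSelector:
    def __init__(self, channels):
        self.history = {ch: deque(maxlen=HISTORY_LEN) for ch in channels}

    def sense(self, channel, occupied):
        """Record one sensing sample (1 if the channel was busy, else 0)."""
        self.history[channel].append(1 if occupied else 0)

    def best_channel(self):
        """Channel with the lowest average recent occupancy."""
        def avg(ch):
            h = self.history[ch]
            return sum(h) / len(h) if h else 0.0
        return min(self.history, key=avg)

sel = SpectrumSelector(["ch36", "ch40", "ch44"])
for busy in (1, 1, 0, 1, 1): sel.sense("ch36", busy)
for busy in (0, 0, 1, 0, 0): sel.sense("ch40", busy)
for busy in (1, 0, 1, 1, 0): sel.sense("ch44", busy)
```

Replacing the simple average with a model that predicts the next samples is exactly where the ML techniques discussed in this article come in.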

Standardisation of AI in 5G

Standardisation is moving as well. In recent years, most major telecom standardisation organisations, such as 3GPP, ETSI and ITU-T, have initiated work to study the introduction of AI techniques in telecom networks.

Notably, 3GPP formalised the use of AI in 5G networks by introducing the Network Data Analytics Function (NWDAF) in the core network. Beyond functions like network slice selection or policy-based charging, the scope of NWDAF extends to inter-domain interaction of data analytics in the 5G system, such as the interaction of Operation, Administration and Management (OAM) with the Radio Access Network (RAN).

ETSI ENI (Experiential Networked Intelligence) aims to design a reference architecture to enable the use of AI in network operation and management. The ENI engine interfaces with the existing network to enhance the AI capability of the network. Up to now, ENI has developed use cases, requirements, and a preliminary architecture and interfaces. The work of ENI has been planned to continue until 2021.

ETSI ZSM (Zero touch network and Service Management) focuses on service automation and management that leverages the principles of software networks. The goal of ZSM is to define a new, future-proof, end-to-end interoperable framework enabling agile, efficient and qualitative management and automation of emerging and future networks and services.

ITU-T FG-ML5G (Focus Group on Machine Learning for Future Networks including 5G) focuses on specifications for ML in future networks, including interfaces, network architectures, protocols, algorithms and data formats. The group collected a set of use cases, identified high-level requirements derived from these use cases, and proposed a high-level unified ML architecture for future networks that satisfies these requirements.

Trustworthiness is the main challenge

In 2019 the European high-level expert group on AI, appointed by the European Commission, published a set of recommendations for Trustworthy Artificial Intelligence. The recommendations include:

  • Transparency, which means that decisions should be comprehensibly explained to humans, who should be informed when interacting with an AI system.
  • Accountability, which means that responsibilities of the AI system outcomes are clearly assigned. Ultimately this is important for liabilities in case something goes wrong.
  • Technical robustness and safety, which means that the AI systems must be accurate, reliable and reproducible. A fail-safe fallback plan must be in place, if something goes wrong.

Because of real-world examples where AI/ML applications did not live up to expectations, we have become suspicious about how much we can trust AI/ML. The web is full of stories in which AI/ML algorithms were fooled by very simple tricks and reached totally wrong conclusions. In a network operator’s imagination, this can mean network shutdown and total loss of revenue. So, how can we increase trust in AI/ML, and how can we prevent worst-case scenarios? In some cases this translates into concepts like Human-in-the-Loop. Originally described as a model for human-machine interaction, the human is introduced as an emergency brake for an AI/ML algorithm that has run amok. Even if this is counterintuitive in view of the goal of full automation, the fact that networks are a critical infrastructure of society seems to justify this approach, at least as long as confidence in the conclusions of an AI/ML algorithm, which is perceived today as a “black box”, is not significantly increased.
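A Human-in-the-Loop gate can be sketched as a simple policy: actions proposed with low model confidence, or with high potential impact, are queued for human approval instead of being executed automatically. The thresholds, action names and impact classification below are illustrative assumptions:

```python
# Sketch of a Human-in-the-Loop gate for AI-proposed network actions.
# Threshold, action names and the impact set are invented assumptions.

CONFIDENCE_THRESHOLD = 0.9
HIGH_IMPACT = {"shutdown_node", "isolate_region"}  # never fully automated

def gate(action, confidence):
    """Decide whether an AI-proposed action may run without a human."""
    if action in HIGH_IMPACT or confidence < CONFIDENCE_THRESHOLD:
        return "queue_for_human"  # the human acts as the emergency brake
    return "auto_execute"
```

As the model’s confidence calibration improves, the threshold can be lowered and the set of human-gated actions shrunk, moving gradually towards the goal of full automation.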

Conclusion and outlook

A general consensus exists that Artificial Intelligence is the technique to enhance returns on future network investments. In order to reap the anticipated benefits and to justify the investments in AI systems in the first place, the following actions must be accelerated: (i) define standard interfaces to access relevant data, (ii) study the use of AI to enhance customer experience, (iii) trial and experiment with new customer segments and characterize opportunities, (iv) expand the use of AI for network operations, (v) facilitate early adoption of AI-enabled solutions by new use cases. Most importantly we must increase trust in AI system outcomes by introducing transparency, accountability and technical robustness.




