Title: Intelligent Networking, AI and Machine Learning for Telecommunications Operators

...

Subtitle: Great progress in industry adoption, but challenges remain

Authored by the Members of the Linux Foundation AI Taskforce

Beth Cohen (Verizon)

Lingli Deng (CMCC)

Andrei Agapi

Ranny Haiby

Sandeep Panesar


Beth Cohen: Can we use the following as drivers for AI sidebars?

...

  • Network Planning and Design: Generative AI for precise small cell placement, MIMO antennas, beamforming, and optimized backhaul connections.
  • Self-Organizing Networks (SON): Harnessing AI-based algorithms for autonomous optimization and management of network resources.
  • Shared Infrastructure: Leveraging 5G RAN infrastructure resources for training and inference, enhancing AI capabilities and network efficiency.
  • Code Generation for Network Protocols: Enabling co-pilot functionality for generating software implementations of network protocol specifications, facilitating protocol development and deployment.
  • Capacity Forecasting: Utilizing AI for accurate load prediction to optimize network capacity and avoid unnecessary upgrades or poor network performance from overloaded nodes (a minimal sketch follows this list).
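To make the capacity-forecasting driver concrete, here is a minimal sketch of per-node load prediction from historical utilization samples. It is illustrative only: the file name, column names, model choice, and the 100 Gbps capacity threshold are hypothetical assumptions, not drawn from any LFN project.

# Minimal capacity-forecasting sketch (illustrative only).
# Assumes a CSV "node_load.csv" with columns: timestamp, node_id, gbps.
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor

df = pd.read_csv("node_load.csv", parse_dates=["timestamp"])
df = df.sort_values(["node_id", "timestamp"])

# Use the previous 24 hourly samples of each node as features.
lags = [f"lag_{i}" for i in range(1, 25)]
for i in range(1, 25):
    df[f"lag_{i}"] = df.groupby("node_id")["gbps"].shift(i)
df = df.dropna()

# Hold out the most recent day for validation.
cutoff = df["timestamp"].max() - pd.Timedelta(days=1)
train, test = df[df["timestamp"] <= cutoff], df[df["timestamp"] > cutoff].copy()

model = GradientBoostingRegressor()
model.fit(train[lags], train["gbps"])
test["predicted_gbps"] = model.predict(test[lags])

# Flag nodes whose predicted load exceeds 80% of an assumed 100 Gbps capacity.
CAPACITY_GBPS = 100.0
overloaded = test[test["predicted_gbps"] > 0.8 * CAPACITY_GBPS]
print(overloaded[["timestamp", "node_id", "predicted_gbps"]])

A production forecaster would also account for seasonality, planned events, and per-node capacity data, but even a simple lag-feature model of this kind can flag nodes that are trending toward overload before upgrades become urgent.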

...

  • Network AIOps: Implementing AIOps methodologies to automate, streamline and improve overall network efficiency.
  • Predictive Maintenance: Utilizing AI to forecast equipment failures and improve maintenance efficiency (see the sketch after this list).
  • Technical Assistant/Customer Service: Real-time guidance from LLM-trained technical assistants.
  • Traffic Management: Dynamic rerouting of traffic based on AI analysis to efficiently utilize network resources and improve user experience.
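A minimal sketch of the predictive-maintenance idea is shown below: an unsupervised anomaly detector scores recent device telemetry so that the lowest-scoring devices can be queued for proactive maintenance. The telemetry file and column names are hypothetical, and a production system would need labeled failure history and far richer features.

# Minimal predictive-maintenance sketch (illustrative only).
# Assumes a CSV "device_telemetry.csv" with hypothetical per-device health counters.
import pandas as pd
from sklearn.ensemble import IsolationForest

telemetry = pd.read_csv("device_telemetry.csv", parse_dates=["timestamp"])
features = ["temperature_c", "fan_rpm", "crc_errors", "optical_power_dbm"]

# Fit an unsupervised anomaly detector on recent, mostly healthy telemetry.
detector = IsolationForest(contamination=0.01, random_state=0)
detector.fit(telemetry[features])

# Score the latest sample from each device; the lowest scores are the most
# anomalous and can be turned into proactive maintenance tickets.
latest = telemetry.sort_values("timestamp").groupby("device_id").tail(1).copy()
latest["anomaly_score"] = detector.decision_function(latest[features])
watch_list = latest.nsmallest(10, "anomaly_score")
print(watch_list[["device_id", "anomaly_score"]])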

...

Table of Contents

Executive Summary - Key Takeaways and Overview

0.25 page – Beth Cohen  will fill in when the paper is mostly complete.

Since LFN published the seminal Intelligent Networking, AI and Machine Learning White Paper in 2021, the telecom industry has seen tremendous growth in both interest in and adoption of Artificial Intelligence and Machine Learning (AI/ML) technologies.  While it is still early days, the industry is now well past the tire-kicking and lab-testing phases that were the state of the art in 2021.  Intelligent networking is coming into its own as telecoms increasingly use it for operational support, whether that means deploying intelligence into their next generation XG networks or automating network management tasks such as ticket correlation and predictive maintenance.  The LFN and Open Source have a pivotal role to play in fostering and developing intelligent networking technologies through the continued support of key projects, ranging from building a common understanding of the underlying data models to developing infrastructure models and integration blueprints. [Link to some of the AI/ML-related projects, such as Anuket Thoth, the Network Super Blueprints, and others.]

The future of Intelligent Networking and AI is in the hands of the individuals and organizations who are willing and able to contribute to new and existing projects and initiatives. If you are involved in building and operating networks, developing network technology or consuming network services, consider getting involved. Engaging with LFN projects and the broader OSS community can be an educational and rewarding way to shape the future of Intelligent Networking.

...

  • Intelligent networking is rapidly moving out of the lab and being deployed directly into production
  • Operational maintenance and service assurance are still priorities, but there is increasing interest in using AI/ML to drive network optimization and efficiency
  • More research and development is needed to establish industry-wide best practices and a shared understanding of intelligent networking to support interoperability.
  • There has been some work on developing common or shared data sources and standards, but it remains a challenge
  • LFN and the Open Source community are key contributors to furthering the development of intelligent networking now and in the future

...

Background Intro/History Beth Cohen 

0.5 page

At their hearts, telecoms are technology companies driven by the need to scale their networks to serve millions of users reliably, transparently and efficiently.  To achieve these ambitious goals, they need to optimize their networks by incorporating the latest technologies to feed the connected world's insatiable appetite for ever more bandwidth.  To do this efficiently, the networks themselves need to become more intelligent.  At the end of 2021, a bit over 2.5 years ago, LFN published its first white paper on the state of intelligent networking in the telecom industry.  Based on a survey of over 70 of its telecom community members, the findings pointed to a still nascent field made up mostly of research projects and lab experiments, with a few operational deployments related to automation and faster ticket resolution.  The survey did highlight the keen interest respondents had in intelligent networking and machine learning and their promise for the future of the telecom industry in general.

Fast forward to 2023, when, at the request of the LFN Governing Board and Strategic Planning Committee, the LFN AI Taskforce was created to coordinate and focus the efforts that were already starting to bubble up in both new project initiatives (Thoth, Nephio, Network Super Blueprints, just to name a few) and within existing projects (Anuket, ONAP).  The Taskforce was given the charter to make recommendations to the Governing Board on what direction LFN should take.  Some of the areas that the Taskforce looked into included:

  • How to create and maintain public Networking data sets for research and development of AI applications? (Ranked #1 in GB member survey)
  • What are some feasible short-term goals in the creation of AI-powered Network Operations technologies?
  • Evaluate the existing Networking AI assets coming from member company contributions
  • Analyze generic base AI models and recommend the creation of Network specific base models (Ranked high in GB member survey)
  • Recommend approaches and the potential for open source projects to contribute to the next generation of intelligent networking tools.

...

Challenges and Opportunities Beth Cohen 

1 page - Focus on Telco pain points only

...

Ironically, as generative AI and LLM adoption becomes widespread in many industries, telecom has lagged somewhat due to a number of valid factors.  As was covered in the previous white paper, the overall industry challenges remain the same: the constant pressure to increase the efficiency and capacity of operators' infrastructure in order to deliver more services to customers at lower operational cost.  The complexity of network traffic data, and the lack of a standard understanding of it, remain barriers to speeding the industry's adoption of AI/ML to optimize network service delivery.  Some of the challenges that are motivating continued research and adoption of intelligent networking in the industry include: 

Common Telecom Pain Points - Beth Cohen 

  • Operational Efficiency: The continuing need to reduce costs and errors, potentially increase margins
  • Network Automation: Right-sizing network hardware and software, optimizing location placement
  • Availability: Identifying single points of failure in systems to improve equipment maintenance efficiency
  • Capacity Planning: Avoiding unnecessary upgrades and poor network performance from overloaded nodes.

Challenges with Achieving Full Autonomy Lingli Deng Andrei Agapi 

0.5 page

  • Need for high-quality structured data: Communication networks are very different from general human-computer interaction data sets, in that a large share of the interactions between systems use structured data. However, due to the complexity of network systems and differences in vendor implementations, the degree of standardization of this structured data is currently very low, leaving it divided into "information islands" that cannot be interpreted uniformly. There is no "standard bridge" to establish correlations between the data in each "island" across systems, so it cannot be used as effective input to incubate modern data-driven AI/ML applications.
  • AI trustworthiness: Unlike traditional AI application scenarios, communication network operation and management must be strict, precise, and prudent in order to meet carrier-grade reliability requirements. Although operators have introduced automation wherever possible while building autonomous networks, organizations made up of human operators are still legally responsible for ensuring the quality of communication services. In other words, AI is still assisting people and cannot replace them. This is not only because responsibility for losses caused by an AI algorithm cannot be clearly assigned (to the developer or to the user of the algorithm), but also because deep learning models are statistical by nature, so their behavior is uncertain and their trustworthiness is difficult to guarantee.
  • Non-economic marginal cost: According to the analysis and research in the previous white paper [the reference to the previous white paper], there are a large number of potential AI application scenarios in communication networks, but independently building a customized data-driven AI/ML model for each specific scenario would be uneconomical, and hence unsustainable, in both the R&D stage and the operation stage. Building an effective business model between basic computing power providers, general AI/ML capability providers, and algorithm application consumers in specific communication network scenarios will be a prerequisite for the effective application of AI in the field of autonomous networks.
  • Contextual data sets
  • TBA

3.1 Traditional CSP Pain Points - Beth Cohen 

  • Operational Efficiency – reduce costs and errors, potentially increase margins
  • Network Automation - reduce expenditure

...


Emerging Opportunities Sandeep Panesar 

Converged Infrastructure for the Era of AI: Telecoms have been working on converged infrastructure for a while.  Voice over IP has long been an industry standard, but there is far more that can be done to drive even greater efficiencies in network and infrastructure convergence.

Converged infrastructure is needed to support the growth and sustainability of AI. This refers in particular to a single solution designed to integrate compute, storage, networking, and virtualization. Data volumes are trending beyond hyperscale, and the processing requirements for such massive data sets will be enormous, placing heavy demands on existing infrastructure. Bringing everything together to work in concert will be key to meeting the growing demand for resources. Training and inference tasks will need to be faster, which means the components will need to work together efficiently, and the network will play an important part in linking specialized hardware accelerators such as GPUs (Graphics Processing Units) and TPUs (Tensor Processing Units) to accelerate AI workloads beyond what is currently possible. Converged infrastructure solutions will make it possible to deploy AI models faster, iterate on them more efficiently, and extract insights with greater speed, paving the way for the next generation of AI.

...

Data monetization must ensure data privacy, security, and compliance with existing regulations, and must establish firm trust with those who provide their data under such regulations. This requires clear policies and security procedures to ensure safety at all times. It is a new strategic business opportunity for organizations looking to monetize anonymized data and leverage it to increase business efficiency, set product direction, and shape new go-to-market strategies.

...

Intelligent Networking and AI Projects and Research

1.5 page

  • Radio Access Network ChangJin Wang 
    • The intelligent evolution of wireless access networks is in a phase of rapid change and continuous innovation. In June 2022, 3GPP announced the freeze of Release 17 and described the functional framework of an intelligent RAN in TR 37.817, including data collection, model training, model inference, and execution modules, which together form the infrastructure of an intelligent RAN. This promotes the rapid implementation and deployment of 5G RAN intelligence and provides support for intelligent scenarios such as energy saving, load balancing, and mobility optimization.
    • AI and Machine Learning Drive 5G RAN Intelligence

...

  • Core Network Hui Deng ChangJin Wang 
    • The mobile core network is the central brain of mobile communications. It is the only, and the largest, part of the network that has gone through the transformation from legacy proprietary hardware to telco cloud native; almost 100% of today's mobile core networks are deployed on a telco cloud architecture using NFV technologies. It consists mostly of the packet core (such as the 5GC and UPF), which is responsible for packet forwarding; the IMS, which provides operators' multimedia communications such as voice, messaging, and video; and the management of the core network, including the telco cloud infrastructure and the 5G network functions. The AI evolution of these three parts is described below.
    • Network Intelligence Enables Experience Monetization and Differentiated Operations

      For a long time, operators have strived to monetize traffic on MBB (Mobile Broadband) networks. However, there are three technical gaps: user experience that cannot be assessed, no dynamic optimization, and no closed-loop operations. To bridge these gaps, there is a strong requirement for an Intelligent Personalized Experience solution that helps operators add experience privileges to service packages and better monetize differentiated experiences. In the industry, the user plane on the mobile core network usually processes and forwards one service flow per vCPU. As heavy-traffic services such as 2K or 4K HD video and live streaming increase, microbursts and elephant flows occur frequently, making it more likely that a vCPU will become overloaded and cause packet loss. To address this issue, an intelligent, AI-supported 5G core network can deliver ubiquitous 10 Gbps superior experiences.

    • Service Intelligence Expands the Profitability of Calling Services

      In 2023, New Calling was put into commercial use based on 3GPP specifications. It enhances calls with intelligence and data channel (3GPP-specified) interaction capabilities, taking users into a multi-modal communication era and helping operators reconstruct their service layout. In addition, the 3GPP architecture allows users to control digital avatars through voice during calls, delivering a more personalized calling experience. An enterprise can also customize its own avatar as an enterprise ambassador to promote its branding.

    • O&M Intelligence Achieves High Network Stability and Efficiency

      Empowered by multi-modal large models, Digital Assistant & Digital Expert (DAE) based AI technology can reduce O&M workload and improve O&M efficiency. It reshapes cloud-based O&M from "experts + tools" to intelligence-centric "DAE + manual assistance". With DAE, 80% of telecommunication operators' trouble tickets can be processed automatically, which is far more efficient than the 100% manual processing of the past. DAE also enables intent-driven O&M, avoiding manual decision-making. It used to take over five years to cultivate an expert in a single domain; a multi-modal large model can now be trained and updated within just weeks.

  • Bearer & DC Network - Cisco?
  • Cross-domain network AI platform Lingli Deng //moved to 4.3.1
  • Telecom Cloud Hui Deng 

...

Issues solved/mitigated Lingli Deng Andrei Agapi 

  • The advent of transformer models and attention mechanisms [][], and the sudden popularity of ChatGPT [], LLMs, transfer learning and foundation models in the NLP domain have all sparked vivid discussions and efforts to apply generative models in many other domains [].

    Interestingly, all of these: word embeddings [], sequence models such as LSTMs [] and GRUs [], attention mechanisms [], transformer models [] and pretrained LLMs [][] had been around long before the launch of the ChatGPT tool in late 2022. Pretrained transformers like BERT [] in particular (especially transformer-encoder models) were very popular and widely used in NLP for tasks like sentiment analysis, text classification [], and extractive question answering [], long before ChatGPT made chatbots and decoder-based generative models go viral.

  • That said, there has clearly been a spectacular explosion of academic research, commercial activity and ecosystems that have emerged since ChatGPT came out, in the area of both open [][][] and closed source [][][] LLM foundation models, related software, services and training datasets.

    Beyond typical chatbot-style applications, LLMs have been extended to generate code [][], solve Math problems (stated either formally or informally) [], pass science exams [][], or act as incipient "AGI"-style agents for different tasks, including advising on investment strategies, or setting up a small business [][]. Recent advancements to the basic LLM text generation model include instruction finetuning [], retrieval augmented generation using external vector stores [][], using external tools such as web search [], external knowledge databases or other APIs for grounding models [], code interpreters, calculators and formal reasoning tools [][]. Beyond LLMs and NLP, transformers have also been used to handle non-textual data, such as images [], sound [] and arbitrary sequence data [].

  • A natural question arises on how the power of LLMs can be harnessed for problems and applications related to Intelligent Networking, network automation and for operating and optimizing telecommunication networks in general, at any level of the network stack.

    Datasets encountered in telco-related applications have a few particularities. For one, data one might encounter ranges from fully structured (e.g. code, scripts, configuration, or time series KPIs), to semi-structured (syslogs, design templates etc), to unstructured data (design documents and specifications, Wikis, Github issues, emails, chatbot conversations).

  • Another issue is domain adaptation. Language encountered in telco datasets can be very domain specific (including CLI commands and CLI output, formatted text, network slang and abbreviations, syslogs, RFC language, network device specifications etc). Off-the-shelf performance of LLM models strongly depends on whether those LLMs have actually seen that particular type of data during training (this is true for both generative LLMs and embedding models). There exist several approaches to achieve domain adaptation and downstream task adaptation of LLM models. In general these rely on either 1) in-context learning, prompting and retrieval augmentation techniques; 2) finetuning the models; or 3) hybrid approaches. For finetuning LLMs, unlike for regular neural network models, several specialized techniques exist in the general area of PEFT (Parameter Efficient Fine Tuning), allowing one to finetune only a very small percentage of the many billions of parameters of a typical LLM (a minimal sketch follows this list). In general, the best techniques to achieve domain adaptation for an LLM will heavily depend on: 1) the kind of data we have and how much domain data we have available, 2) the downstream task, and 3) the foundation model we start from. In addition to general domain adaptation, many telcos will have the issue of multilingual datasets, where a mix of languages (typically English plus something else) will exist in the data (syslogs, wikis, tickets, chat conversations etc). While many options exist for both generative LLMs [] and text embedding models [], not many foundation models have seen enough non-English data in training, so the choice of foundation models is definitely restricted for operators working on non-English data.

  • In conclusion, while foundation models and transfer learning have been shown to work very well on general human language when pretraining is done on large corpora of human text (such as Wikipedia, or the Pile []), it remains an open question whether domain adaptation and downstream task adaptation work equally well on the kinds of domain-specific, semi-structured, mixed-modality datasets we find in telco networks. To enable this, telcos should very likely focus on standardization and data governance efforts, such as standardized and unified data collection policies and high quality structured data, as discussed earlier in this whitepaper.

  • Deploying large models such as LLMs in production, especially at scale, also raises several other issues in terms of: 1) Performance, scalability and cost of inference, especially when using large context windows (most transformers scale poorly with context size); 2) Deployment of models in the cloud, on premise, multi-cloud, or hybrid; 3) Issues pertaining to privacy and security of the data for each particular application; 4) Issues common to many other ML/AI applications, such as ML-Ops, continuous model validation and continuous re-training.

  • High-quality structured data: Large language models can be used to understand large amounts of unstructured operation and maintenance data (for example, system logs, O&M work orders, operation guides, and company documents, which are traditionally used in human-computer interaction or human-to-human collaboration scenarios), extracting effective knowledge from it to guide further automated/intelligent operation and maintenance and thereby effectively expanding the scope of application of autonomous mechanisms.
  • AI trustworthiness moved to 4.3.2
  • Non-economic marginal cost: Although equipment manufacturers can provide many domain AI solutions for individual network domains or single-point equipment, these solutions have a limited "field of view" and cannot solve problems that require a "global view", such as end-to-end service quality assurance and rapid response to faults. Operators can aggregate management and maintenance data from the various network domains by building a unified data sharing platform and, on top of it, provide a unified computing resource pool, basic AI algorithms and an inference platform (i.e. a cross-domain AI platform) that scenario-specific AI applications, both end-to-end and intra-domain, can use.
  • TBA
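To illustrate the PEFT-based domain adaptation route discussed above, the following is a minimal LoRA finetuning sketch using the Hugging Face transformers, datasets, and peft libraries. The base model name, the syslog corpus file, and all hyperparameters are placeholders for illustration only; choosing an appropriate foundation model and data pipeline is exactly the open question discussed in this section.

# Minimal LoRA finetuning sketch (illustrative only).
# Base model name and dataset path are placeholders, not recommendations.
from transformers import AutoModelForCausalLM, AutoTokenizer, Trainer, TrainingArguments
from peft import LoraConfig, get_peft_model, TaskType
from datasets import load_dataset

base_model = "gpt2"  # placeholder; an operator would pick a suitable foundation model
tokenizer = AutoTokenizer.from_pretrained(base_model)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(base_model)

# LoRA adds small trainable matrices to the attention layers, so only a tiny
# fraction of the model's parameters is updated during finetuning.
lora_config = LoraConfig(task_type=TaskType.CAUSAL_LM, r=8, lora_alpha=16, lora_dropout=0.05)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()

# Hypothetical domain corpus: one syslog line or CLI transcript per record.
dataset = load_dataset("text", data_files={"train": "telco_syslogs.txt"})["train"]

def tokenize(batch):
    tokens = tokenizer(batch["text"], truncation=True, max_length=256, padding="max_length")
    tokens["labels"] = tokens["input_ids"].copy()
    return tokens

dataset = dataset.map(tokenize, batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="lora-telco", num_train_epochs=1,
                           per_device_train_batch_size=4, learning_rate=2e-4),
    train_dataset=dataset,
)
trainer.train()
model.save_pretrained("lora-telco-adapter")  # saves only the small adapter weights

Because only the adapter weights are trained and stored, the marginal cost of producing per-domain or per-operator variants stays low relative to full finetuning, which speaks directly to the non-economic marginal cost concern raised above.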

...

Issues remaining/added/aggravated Lingli Deng 

  • AI trustworthiness
  • Non-economic marginal cost: Compared with traditional data-driven AI/ML models dedicated to specific scenarios, the R&D and operation of large language models place much higher demands on pre-training data scale, training compute cluster scale, fine-tuning engineering, energy consumption, and management and maintenance. This is bound to make them a playground for the few. Opening up the R&D, application, operation and maintenance chain of large models for the communications network industry, so that they can serve the broader community, may become a stepping stone for operators on the way to highly autonomous networks.
  • TBA

...

Network LLM: the game changer?

Jason Hunt (at least on how foundation models can be applied to network data) Andrei Agapi 

1 page

...

LF Data (Thoth) - Sandeep Panesar Beth Cohen 

  • Thoth project - Telco Data Anonymizer Project

...

AI has the potential to create value in terms of enhanced workload availability and improved performance and efficiency for NFV use cases. This work aims to build machine learning models and tools that can be used by telcos (typically by their operations teams). Each of these models aims to solve a single problem within a particular category. For example, the first category chosen is failure prediction, with the aim of creating six models: failure prediction for VMs, containers, nodes, network links, applications, and middleware services. The project also aims to define a set of data models for each of the decision-making problems, which will help both providers and consumers of the data to collaborate. 
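As a sketch of what one such model in the failure-prediction category might look like (here, for VMs), the snippet below trains a simple classifier on labeled telemetry. The CSV file, feature names, and label are hypothetical and do not reflect the actual Thoth data models.

# Minimal VM failure-prediction sketch (illustrative only).
# Assumes a labeled CSV "vm_metrics.csv"; columns are hypothetical.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

data = pd.read_csv("vm_metrics.csv")
features = ["cpu_util", "mem_util", "disk_io_wait", "net_errors", "steal_time"]
X, y = data[features], data["failed_within_1h"]  # 1 if the VM failed within the next hour

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0)

clf = RandomForestClassifier(n_estimators=200, class_weight="balanced", random_state=0)
clf.fit(X_train, y_train)

# Report precision/recall so operations teams can judge false-alarm rates.
print(classification_report(y_test, clf.predict(X_test)))

Analogous models for containers, nodes, network links, applications, and middleware services would differ mainly in the feature set and labeling, which is why shared data models across providers and consumers matter.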

...

Autonomous Network Beth Cohen 

...

LLM & GenAI Sandeep Panesar 

LLM (Large Language Models)

...

The two combined raise the question of what Gen AI should be used for and, more importantly, how its output can be distinguished from human work. Many regulatory bodies are looking at ways to identify which decisions and which content were generated by AI to solve a particular problem. The foundation of this combination is to ensure security and safety, mitigate biases, and identify which changes were proposed and acted upon by Gen AI and which were not. Gen AI requires an organizational framework so that each organization can know and ensure these factors.

...

How could Open Source Help? Ranny Haiby 

2-3 pages

When considering the role of open source software in addressing the challenges of Network AI, it is important to understand the current landscape of projects and initiatives and how they came into existence. Several such initiatives have already laid the groundwork for building Network AI solutions, or are actively working on creating them. Building on these foundations, it is possible to envision what role open source software will play in unleashing the power of AI for future generations of networks. Some of the technologies required for Network AI are unique to the networking industry, and will have to be addressed by the existing OSS projects on the landscape or by the creation of additional ones. Other pieces of technology are more generic and will come from the broad landscape of AI OSS. Here is a rough outline of the different layers of Networking AI and the source of the required technology:

...

Related Open Source Landscape

1-2 pages

  • Network communities

...

Experience with OSS in other domains shows that whenever there is an OSS technology that powers commercial products or offerings, there is a need to validate the products to make sure they are properly using the OSS technology and are ready to serve the end users in a predictable manner. Such validation/verification programs have existed for a while as part of OSS ecosystems. They are often created and maintained by the same OSS community that develops the OSS projects themselves. The Cloud Native Computing Foundation (CNCF) has a successful "Certified Kubernetes" program that helps vendors and end users ensure that Kubernetes distributions provide all the necessary APIs and functionality. A similar approach should be applied to any OSS Networking AI projects. End users should have a certain level of confidence, knowing that the OSS based AI Networking solution they use will behave as expected. 

...

Common Vision: intelligence plane for XG Ranny Haiby Muddasar Ahmed 

1 page

In the dynamic realm of communication technologies, the fusion of artificial intelligence (AI) with networks promises to redefine connectivity, ushering in an era of unprecedented intelligence, efficiency, and adaptability. As we embark on the journey towards 6G and embrace the vision outlined by the International Telecommunication Union (ITU) for IMT-2030, it becomes clear that AI will play a pivotal role in reshaping network operations.

...

In conclusion, the future of networks in the era of 6G and beyond hinges on the transformative power of AI, fueled by open-source collaboration. By embracing AI-driven intelligence, networks can enhance situational awareness, performance, and capacity management, while enabling quick reactions to undesired states. As we navigate this AI-powered future, the convergence of technological innovation and open collaboration holds the key to unlocking boundless opportunities for progress and prosperity in the telecommunications landscape.

...

Call for Action Beth Cohen 

0.25 page

The future of OSS Networking AI is in the hands of the individuals and organizations who are already contributing to projects and initiatives, and those who will join them. If you are involved in building and operating networks, developing network technology or consuming network services, consider getting involved. Engaging with OSS communities is a way to shape the future of Networking. Your contribution could be small or large, and does not necessarily involve writing code. Some of the ways to contribute include:

...