...

Lingli Deng, Andrei Agapi

  • Need for High Quality Structured Data: Communication networks differ from general human-computer data sets in that a large number of interactions between systems use structured data. However, due to the complexity of network systems and differences in vendor implementations, the degree of standardization of this structured data is currently very low, creating "information islands" that cannot be uniformly interpreted across systems. There is no "standard bridge" to establish correlations between them, so the data cannot serve as effective input for incubating modern data-driven AI/ML applications. Large language models can be used to understand large amounts of unstructured operation and maintenance data (for example, system logs, operation and maintenance work orders, operation guides, and company documents, which are traditionally used in human-computer interaction or human-to-human collaboration scenarios), extracting effective knowledge to guide further automatic/intelligent operation and maintenance and thereby expanding the scope of autonomous mechanisms.
  • AI Trustworthiness: To meet carrier-grade reliability requirements, network operation management must be strict, precise, and prudent. Although operators have introduced various automation methods while building autonomous networks, organizations staffed by real people are still responsible for ensuring the quality of communication and network services. In other words, AI is still assisting people and is not yet advanced enough to replace them. This is not only because no responsible party (the developer or the user of the algorithm) can be assigned for losses caused by an AI algorithm, but also because deep learning models are based on mathematical statistics, and the resulting behavioral uncertainty makes the credibility of their outputs difficult to determine.
  • Uneconomic Margin Costs: According to the analysis in the Intelligent Networking, AI and Machine Learning White Paper, there are a large number of potential network AI application scenarios, but independently building a customized data-driven AI/ML model for each specific scenario is uneconomical and hence unsustainable, both for research and for operations. Determining how to build an effective business model among basic computing power providers, general AI/ML capability providers, and algorithm application consumers is an essential prerequisite for effective application in the field.
  • Unsupportable Research Models: Compared with traditional data-driven dedicated AI/ML models for specific scenarios, the R&D and operation of large language models place much higher demands on pre-training data scale, training cluster computing power, fine-tuning engineering, energy consumption, and management and maintenance. Can the telecommunications industry build a shared model for research, development, application, operation, and maintenance, so that large language models become a stepping stone for operators to realize highly autonomous networks?
  • Contextual Data Sets: Another hurdle that is often overlooked is the need for networking data sets to be understood in context. This means that networks need to work with all the layers of the IT stack, including but not limited to:
    • Applications: Making sure that customer applications perform as expected with the underlying network
    • Security: More important than ever as attack vectors expand and customers expect the networks to be protected
    • Interoperability: The data sets must support transparent interoperability with other operators, cloud providers and other systems in the telecom ecosystem
    • OSS/BSS Systems: The operational and business applications that support network services
  • Community Unity and Standards: Although equipment manufacturers can provide many domain AI solutions for professional networks and single-point equipment, these solutions have a limited "field of view" and cannot solve problems that require a "global view", such as end-to-end service quality assurance and rapid fault response. Operators can aggregate management and maintenance data across network domains by building a unified data sharing platform, and on that basis provide a unified computing resource pool, basic AI algorithms, and an inference platform (i.e., a cross-domain AI platform) for scenario-specific AI applications in both end-to-end and intra-domain scenarios.

Emerging Opportunities

Sandeep Panesar 

Telecom operators have been working on converged infrastructure for a while. Voice over IP has long been an industry standard, but there is far more that can be done to drive even greater efficiencies in network and infrastructure convergence.

Converged infrastructure is needed to support the growth and sustainability of AI, particularly the need for a single solution designed to integrate compute, storage, networking, and virtualization. Data volumes are on track to grow beyond hyperscale, and with such massive data processing requirements the ability to execute is critical. The demands on existing infrastructure are already heavy, so bringing everything together to work in concert will be key to sustaining the growing demand for resources. To do that, the components will need to work together efficiently, and the network will play an important role in linking specialized hardware accelerators such as GPUs (Graphics Processing Units) and TPUs (Tensor Processing Units) to accelerate AI workloads beyond current capability. Converged infrastructure solutions will make it possible to deploy AI models faster, iterate on them more efficiently, and extract insights with greater speed. This can pave the way for the next generation of AI.

Converged services and integrated solutions that combine AI with traditional services have the potential to deliver enhanced services to end customers; more importantly, these services need to leverage AI-driven insights, automation, and personalization to optimize user experience, improve efficiency, and drive innovation across industries. Many industry use cases for this already exist, including healthcare, legal, retail, and incident tracking. The analytics delivered by a converged service provide automated insights and tools for real-time tracking, response, and remediation.

Business Innovation - New Revenue Streams: Data monetization encompasses various strategies, including selling raw data, offering data analytics services, and developing data-driven products or solutions for customers. Organizations can monetize their data by identifying valuable insights, patterns, or trends hidden within their datasets that no group of human analysts could identify quickly. These insights can then be used to create new products and services that better serve customers and organizations. This is a new strategic business opportunity for organizations looking to monetize anonymized data and leverage it for increased business efficiency, to determine product direction, and to shape new go-to-market strategies.

Data Privacy and Security: The ability to monetize data comes with a big caveat: customer data must be handled with care to ensure data privacy, security, and regulatory compliance. This requires clear policies and security procedures to ensure anonymity, safety, and privacy at all times.
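As a minimal sketch of the kind of safeguard this implies, the snippet below pseudonymizes customer identifiers with a keyed hash before records enter any analytics pipeline. It is illustrative only: the field names, record layout, and key-management approach are assumptions, not part of any operator's actual schema.

```python
import hashlib
import hmac

# A keyed HMAC (rather than a plain hash) prevents dictionary attacks on
# low-entropy fields such as phone numbers. The key must be managed securely.
SECRET_KEY = b"rotate-me-and-store-in-a-vault"  # assumption: vault-managed secret

def pseudonymize(value: str, key: bytes = SECRET_KEY) -> str:
    """Return a stable, irreversible token for a customer identifier."""
    return hmac.new(key, value.encode("utf-8"), hashlib.sha256).hexdigest()

def anonymize_record(record: dict, sensitive_fields: tuple = ("msisdn", "imsi", "email")) -> dict:
    """Replace sensitive fields with tokens; leave operational fields intact."""
    return {
        k: pseudonymize(v) if k in sensitive_fields else v
        for k, v in record.items()
    }

# Hypothetical call detail record: only the subscriber identifier is tokenized,
# so aggregate analytics on cell and traffic fields remain possible.
cdr = {"msisdn": "+15551234567", "cell_id": "gNB-042", "bytes_up": 10234}
safe = anonymize_record(cdr)
```

Because the token is deterministic for a given key, records for the same customer can still be joined for analytics without ever exposing the raw identifier.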

Network Intelligence within AI Research Context

...


  • Deploying large models such as LLMs in production, especially at scale, also raises several other issues in terms of: 1) Performance, scalability and cost of inference, especially when using large context windows (most transformers scale poorly with context size); 2) Deployment of models in the cloud, on premise, multi-cloud, or hybrid; 3) Issues pertaining to privacy and security of the data for each particular application; 4) Issues common to many other ML/AI applications, such as ML-Ops, continuous model validation and continuous re-training.



General AI in Context

Lingli Deng, Andrei Agapi, Sandeep Panesar

LLM (Large Language Models)

This breakthrough in AI research is characterized by vast model size, extensive training data, and the ability to generate human-like text. These models are trained on large datasets drawn from sources relevant to the subject of interest. LLMs have changed the way natural language processing tasks are approached, including text generation, translation, summarization, and question answering.

Generative AI (Gen AI)

Gen AI is a much broader category of artificial intelligence systems capable of autonomously generating new content, ideas, or solutions based on human text input. It includes LLMs as a foundation for content generation. As such, Gen AI can produce creative output as a human would, but in a fraction of the time. Content creation for websites, images, videos, and music are a few of the capabilities of Gen AI. The rise of Gen AI provides numerous business cases, from creating corporate logos and videos, to saleable products for end consumers and businesses, to creating visual network maps from the datasets being accessed. It may even provide optimized maps for implementation to improve networking, either autonomously or with human intervention.

The two combined raise the question of what Gen AI should be used for and, more importantly, how its output can be distinguished from human work. Many regulatory bodies are looking at solutions for identifying which decisions and content were generated to solve a particular problem. The foundation of this effort is to ensure security and safety, mitigate biases, and identify which changes were proposed and acted upon by Gen AI and which were not. Each organization needs a governance framework to know and ensure these factors.

The advent of transformer models and attention mechanisms and the sudden popularity of ChatGPT and other LLMs, transfer learning, and foundation models in the NLP domain have all sparked lively discussions and efforts to apply generative models in many other domains. Interestingly, word embeddings, sequence models such as LSTMs and GRUs, attention mechanisms, transformer models, and pretrained LLMs had all been around well before the launch of the ChatGPT tool in late 2022. Pretrained transformers like BERT in particular (especially transformer-encoder models) were widely used in NLP for tasks like sentiment analysis, text classification, and extractive question answering long before ChatGPT made chatbots and decoder-based generative models go viral. That said, there has clearly been a spectacular explosion of academic research, commercial activity, and ecosystems since ChatGPT came out, in the area of both open and closed source LLM foundation models, related software, services, and training datasets.

Beyond typical chatbot-style applications, LLMs have been extended to generate code, solve math problems (stated either formally or informally), pass science exams, or act as incipient "AGI"-style agents for different tasks, including advising on investment strategies, or setting up a small business. Recent advancements to the basic LLM text generation model include instruction finetuning, retrieval augmented generation using external vector stores, using external tools such as web search, external knowledge databases or other APIs for grounding models, code interpreters, calculators and formal reasoning tools. Beyond LLMs and NLP, transformers have also been used to handle non-textual data, such as images, sound and arbitrary sequence data.

Intelligent Networking Differences

A natural question arises as to how the power of LLMs can be harnessed for problems and applications related to Intelligent Networking, network automation, and operating and optimizing telecommunication networks in general, at any level of the network stack.

Datasets in telco-related applications have a few particularities. For one, the data one might encounter ranges from fully structured (e.g. code, scripts, configuration, or time series KPIs), to semi-structured (syslogs, design templates etc.), to unstructured data (design documents and specifications, Wikis, Github issues, emails, chatbot conversations).

Another issue is domain adaptation. Language encountered in telco datasets can be very domain specific (including CLI commands and CLI output, formatted text, network slang and abbreviations, syslogs, RFC language, network device specifications etc.). Off-the-shelf performance of LLM models strongly depends on whether those LLMs have actually seen that particular type of data during training (this is true for both generative LLMs and embedding models). There are several approaches to achieve domain adaptation and downstream task adaptation of LLM models. In general these either rely on:

  1. In-context learning, prompting, and retrieval augmentation techniques;
  2. Finetuning the models;
  3. Hybrid approaches.
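A minimal sketch of the first approach: retrieve the most relevant domain snippets and prepend them to the prompt, so an off-the-shelf LLM can answer telco-specific questions without finetuning. The toy corpus and the bag-of-words cosine similarity below are illustrative stand-ins for a real embedding model and vector store.

```python
import math
from collections import Counter

# Hypothetical mini-corpus of network log lines standing in for a vector store.
CORPUS = [
    "BGP neighbor 10.0.0.2 went down: hold timer expired",
    "Interface GigabitEthernet0/1 changed state to administratively down",
    "OSPF adjacency established on interface Vlan10",
]

def bow_cosine(a: str, b: str) -> float:
    """Cosine similarity over word counts (a crude stand-in for embeddings)."""
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[w] * vb[w] for w in va)
    na = math.sqrt(sum(c * c for c in va.values()))
    nb = math.sqrt(sum(c * c for c in vb.values()))
    return dot / (na * nb) if na and nb else 0.0

def build_prompt(question: str, k: int = 2) -> str:
    """Assemble a retrieval-augmented prompt from the top-k matching snippets."""
    ranked = sorted(CORPUS, key=lambda doc: bow_cosine(question, doc), reverse=True)
    context = "\n".join(ranked[:k])
    return f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"

prompt = build_prompt("Why did the BGP neighbor go down?")
```

In a production system the same pattern holds, but with learned embeddings, an approximate-nearest-neighbor index, and chunked domain documents in place of the toy pieces above.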

For finetuning LLMs, unlike for regular neural network models, several specialized techniques exist in the general area of PEFT (Parameter Efficient Fine Tuning), allowing one to only finetune a very small percentage of the many billions of parameters of a typical LLM. In general, the best techniques to achieve domain adaptation for an LLM will heavily depend on:

  1. The type of data and how much domain data is available;
  2. The specific downstream task;
  3. The initial foundation model.

In addition to general domain adaptation, many telcos will have the issue of multilingual datasets, where a mix of languages (typically English + something else) will exist in the data (syslogs, wikis, tickets, chat conversations etc.). While many options exist for both generative LLMs and text embedding models, not many foundation models have seen enough non-English data in training, thus options in foundation model choice are definitely restricted for operators working on non-English data.
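To illustrate why PEFT-style finetuning is attractive, the sketch below shows the core idea behind LoRA-like adapters: the large pretrained weight matrix stays frozen, and only a low-rank update is trained. The dimensions, rank, and scaling value are arbitrary example choices, not recommendations.

```python
import numpy as np

rng = np.random.default_rng(0)
d, r = 1024, 8                            # hidden size and adapter rank (example values)

W = rng.standard_normal((d, d))           # frozen pretrained weights
A = rng.standard_normal((r, d)) * 0.01    # trainable down-projection
B = np.zeros((d, r))                      # trainable up-projection (zero-init: no drift at start)
alpha = 16                                # scaling hyperparameter

def adapted_forward(x: np.ndarray) -> np.ndarray:
    """Forward pass with the low-rank adapter added to the frozen weights."""
    return x @ (W + (alpha / r) * (B @ A)).T

# Only A and B would receive gradients during finetuning.
frozen_params = W.size
trainable_params = A.size + B.size
```

Even in this toy setting the trainable parameters are under 2% of the frozen ones; at LLM scale the same structure lets one adapt multi-billion-parameter models on modest hardware.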

In conclusion, while foundation models and transfer learning have been shown to work very well on general human language when pretraining is done on large corpora of human text (such as Wikipedia or the Pile), it remains an open question whether domain adaptation and downstream task adaptation work equally well on the kinds of domain-specific, semi-structured, mixed-modality datasets found in telecom networks. To enable this, telecom operators should focus on standardization and data governance efforts, such as standardized and unified data collection policies and high quality structured data.

Intelligent Networking Projects and Research


Radio Access Network

ChangJin Wang 

The intelligent evolution of wireless access networks is in a phase of rapid change and continuous innovation. In June 2022, 3GPP announced the freeze of Release 17 and described the process framework of an intelligent RAN in TR 37.817, including data collection, model training, model inference, and execution modules, which together form the infrastructure of an intelligent RAN. This promotes the rapid implementation and deployment of 5G RAN intelligence and provides support for intelligent scenarios such as energy saving, load balancing, and mobility optimization.
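The functional loop of those modules can be caricatured in a few lines of code. The module names follow the description above, while the "model" (a mean-based load threshold) and all values are purely illustrative stand-ins for real KPI pipelines and trained models.

```python
from statistics import mean

def collect_data(cell_history: list[float]) -> list[float]:
    """Data collection module: in practice, counters and KPIs from the RAN."""
    return cell_history

def train_model(samples: list[float]) -> float:
    """Model training module: here the 'model' is just a learned load threshold."""
    return mean(samples)

def infer(model: float, current_load: float) -> str:
    """Model inference module: decide an action from the current observation."""
    return "offload" if current_load > model else "hold"

def execute(action: str) -> str:
    """Execution module: in practice, a RAN configuration change."""
    return f"applied:{action}"

history = [0.42, 0.5, 0.38, 0.45]          # invented per-cell load samples
model = train_model(collect_data(history))
action = infer(model, current_load=0.9)
result = execute(action)
```

The point of the sketch is the closed loop: observations feed training, inference turns observations into actions, and execution feeds changes back into the network that the next collection cycle observes.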

  • AI and Machine Learning Drive 5G RAN Intelligence

Artificial intelligence and machine learning technologies are playing an increasingly important role in 5G RAN intelligence. The application of these technologies enables the network to learn autonomously, self-optimize, and self-repair, thereby improving network stability, reliability, and performance. For example, by using machine learning algorithms to predict and schedule network traffic, more efficient resource allocation and load balancing can be achieved. By leveraging AI technologies for automatic network fault detection and repair, operation and maintenance costs can be greatly reduced while improving user experience. The intelligence of 5G wireless access networks also provides broad space for various vertical industry applications. For instance, in intelligent manufacturing, 5G can enable real-time communication and data transmission between devices, improving production efficiency and product quality. In smart cities, 5G can provide high-definition video surveillance, intelligent transportation management, and other services to enhance urban governance. Additionally, 5G has played a significant role in remote healthcare, online education, and other fields.
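As a toy illustration of traffic prediction feeding resource allocation, the sketch below forecasts next-interval traffic per cell with an exponentially weighted moving average and assigns capacity proportionally to the forecasts. Real deployments would use far richer models; every value here is invented.

```python
def ewma_forecast(series: list[float], alpha: float = 0.5) -> float:
    """Exponentially weighted moving average as a one-step traffic forecast."""
    forecast = series[0]
    for x in series[1:]:
        forecast = alpha * x + (1 - alpha) * forecast
    return forecast

def allocate(capacity_units: int, traffic_by_cell: dict[str, list[float]]) -> dict[str, int]:
    """Split a capacity budget across cells in proportion to forecast demand."""
    forecasts = {cell: ewma_forecast(s) for cell, s in traffic_by_cell.items()}
    total = sum(forecasts.values())
    return {cell: round(capacity_units * f / total) for cell, f in forecasts.items()}

plan = allocate(100, {
    "cell-A": [10, 12, 30],   # traffic ramping up
    "cell-B": [10, 9, 8],     # traffic declining
})
```

Because the EWMA weights recent samples more heavily, the ramping cell receives the larger share of capacity before its load actually peaks, which is the essence of prediction-driven resource allocation.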

  • Challenges Facing 5G RAN Intelligence Industrialization

However, despite the remarkable progress made in the 5G wireless access network intelligence industry, some challenges and issues remain to be addressed. For example, network security and data privacy protection are pressing issues that require effective measures to be implemented. The energy consumption issue of 5G networks also needs attention, necessitating technological innovations and energy-saving measures to reduce energy consumption. In the future, continuous efforts should be made in technological innovation, market application, and other aspects to promote the sustainable and healthy development of the 5G wireless access network intelligence industry.

Core Network

Hui Deng, ChangJin Wang

The mobile core network is the central "brain" of mobile communication. It is the largest part of the network to have completed the transformation from legacy proprietary hardware to telco cloud native: today, almost 100% of mobile core networks are deployed on a telco cloud architecture using NFV technologies. The core network consists of the packet core (such as the 5GC and UPF), which is responsible for packet forwarding; the IMS, which provides operators' multimedia communications such as voice, messaging, and video; and the management of the core network, including the telco cloud infrastructure and 5G network functions. AI is evolving in all three of these parts:

  • Network Intelligence Enables Experience Monetization and Differentiated Operations

    For a long period of time, operators have strived to monetize traffic on MBB (Mobile Broadband) networks. However, three technical gaps stand in the way: user experience cannot be assessed, there is no dynamic optimization, and operations are not closed-loop. To bridge these gaps, there is a strong requirement for an Intelligent Personalized Experience solution, aiming to help operators add experience privileges to service packages and better monetize differentiated experiences. In the industry, the user plane of the mobile core network usually processes and forwards one service flow on one vCPU. As heavy-traffic services such as 2K/4K HD video and live streaming increase, microbursts and elephant flows (extremely large network flows) occur frequently. A vCPU is therefore more likely to become overloaded, causing packet loss. To address this issue, an intelligent, AI-supported 5G core network can deliver ubiquitous 10 Gbps superior experiences.
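    The overload scenario above can be sketched as a toy flow rebalancer. This is a minimal illustration only: the capacity figure, the 80% threshold, the hash-pinning scheme, and all names are assumptions for the sketch, not the actual 5GC user-plane implementation.

    ```python
    import zlib
    from collections import defaultdict

    VCPU_CAPACITY_MBPS = 10_000.0   # assumed per-vCPU forwarding budget
    OVERLOAD_THRESHOLD = 0.8        # rebalance above 80% utilization

    def assign_vcpu(flow_id: str, n_vcpus: int) -> int:
        """Static pinning: one service flow is processed by one vCPU."""
        return zlib.crc32(flow_id.encode()) % n_vcpus

    def rebalance(flow_rates_mbps: dict[str, float], n_vcpus: int) -> dict[str, int]:
        """Move the heaviest ("elephant") flows off overloaded vCPUs onto
        the least-loaded vCPU. Returns a flow_id -> vCPU index placement."""
        placement = {f: assign_vcpu(f, n_vcpus) for f in flow_rates_mbps}
        load = defaultdict(float)
        for f, cpu in placement.items():
            load[cpu] += flow_rates_mbps[f]
        limit = VCPU_CAPACITY_MBPS * OVERLOAD_THRESHOLD
        for cpu in range(n_vcpus):
            while load[cpu] > limit:
                local = [f for f, c in placement.items() if c == cpu]
                victim = max(local, key=lambda f: flow_rates_mbps[f])
                target = min(range(n_vcpus), key=lambda c: load[c])
                if target == cpu or flow_rates_mbps[victim] <= 0:
                    break  # nowhere better to move the elephant flow
                placement[victim] = target
                load[cpu] -= flow_rates_mbps[victim]
                load[target] += flow_rates_mbps[victim]
        return placement
    ```

    A production user plane would instead act on live telemetry and steer flows in the data path; the sketch only shows why detecting elephant flows, rather than counting flows, is what prevents vCPU overload.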

  • Service Intelligence Expands the Profitability of Calling Services

    In 2023, New Calling was put into commercial use based on 3GPP specifications. It enhances calls with intelligence and with interaction capabilities based on the 3GPP-specified data channel, taking users into a multi-modal communication era and helping operators reconstruct their service layout. In addition, the 3GPP architecture allows users to control digital avatars through voice during calls, delivering a more personalized calling experience. An enterprise can also customize its own avatar as an enterprise ambassador to promote its branding.

  • O&M Intelligence Achieves High Network Stability and Efficiency

    Empowered by the multi-modal large model, Digital Assistant & Digital Expert (DAE)-based AI technology reduces O&M workload and improves O&M efficiency. It reshapes cloud-based O&M from "experts + tools" to intelligence-centric "DAE + manual assistance". With DAE, 80% of telecommunication operators' trouble tickets can be processed automatically, which is far more efficient than the 100% manual processing used previously. DAE also enables intent-driven O&M, avoiding manual decision-making. It used to take over five years to cultivate an expert in a single domain; the multi-modal large model can now be trained and updated within just weeks.
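    The "DAE + manual assistance" split described above can be sketched as a simple triage loop: tickets the model is confident about are processed automatically, the rest are escalated to a human expert. All names, fault signatures, and thresholds here are illustrative assumptions; a real DAE would use the multi-modal model rather than keyword matching.

    ```python
    AUTO_CONFIDENCE = 0.9   # assumed cut-off for fully automated handling

    KNOWN_FAULTS = {        # hypothetical fault signature -> automated runbook
        "link down": "restart_port",
        "high cpu": "scale_out_vnf",
        "cert expired": "rotate_certificate",
    }

    def classify(ticket_text: str) -> tuple[str | None, float]:
        """Toy classifier standing in for the multi-modal large model."""
        text = ticket_text.lower()
        for signature, runbook in KNOWN_FAULTS.items():
            if signature in text:
                return runbook, 0.95   # matched a known fault signature
        return None, 0.0

    def triage(tickets: list[str]) -> dict[str, list[str]]:
        """Route each ticket to automatic processing or manual assistance."""
        routed: dict[str, list[str]] = {"auto": [], "manual": []}
        for ticket in tickets:
            runbook, confidence = classify(ticket)
            if runbook is not None and confidence >= AUTO_CONFIDENCE:
                routed["auto"].append(ticket)
            else:
                routed["manual"].append(ticket)
        return routed
    ```

    The design point is the confidence gate: raising AUTO_CONFIDENCE trades automation rate (the 80% figure cited above) for a lower risk of the assistant acting on a misclassified ticket.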

Network LLM: The game changer?

Jason Hunt (at least on how foundation models can be applied to network data) Andrei Agapi 

1 page

Thoth project - Telco Data Anonymizer Project

Sandeep Panesar Beth Cohen 

The Thoth project, a sub-project under the Anuket infrastructure project, has recently focused on a major challenge to the adoption of intelligent networks: the lack of a common data set, or even an agreed common understanding of the data set that is needed. AI has the potential to create value in terms of enhanced workload availability and improved performance and efficiency for NFV use cases. This work aims to build machine-learning models and tools that can be used by telcos (typically by their operations teams). Each model aims to solve a single problem within a particular category. For example, the first category chosen is failure prediction, with six models planned: failure prediction for VMs, containers, nodes, network links, applications, and middleware services. The project also aims to define a set of data models for each of these decision-making problems, helping both providers and consumers of the data to collaborate.

LLM & GenAI

Sandeep Panesar 

LLM (Large Language Models)

This breakthrough in AI research is characterized by vast model size, extensive training data, and the ability to generate human-like text. These models are trained on large datasets drawn from sources relevant to the subject of interest. LLMs have changed the way natural language processing tasks are approached, including text generation, translation, summarization, and question answering.

Generative AI (Gen AI)

Gen AI is a much broader category of artificial intelligence systems capable of autonomously generating new content, ideas, or solutions from human text-based input. This includes LLMs as a resource for content generation. Gen AI can therefore produce creative output much as a human would, but in a fraction of the time. Content creation for websites, images, videos, and music are a few of its capabilities. The rise of Gen AI provides numerous business use cases, from creating corporate logos, corporate videos, and saleable products for end consumers and businesses, to creating visual network maps from the datasets being accessed. It can even provide optimized maps for implementation to improve networking, either autonomously or with human intervention.


The two combined open the question of what Gen AI should be used for and, more importantly, how its output can be distinguished from human work. Many regulatory bodies are looking at ways to identify which decisions and which content were generated to solve a particular problem. The foundation of this combination is to ensure security and safety, mitigate biases, and identify which changes were proposed and acted upon by Gen AI and which were not. Gen AI requires an organizational framework for each organization to know and ensure these factors.

How could Open Source Help?

...

In conclusion, the future of networks in the era of 6G and beyond hinges on the transformative power of AI, fueled by open-source collaboration. By embracing AI-driven intelligence, networks can enhance situational awareness, performance, and capacity management, while enabling quick reactions to undesired states. As we navigate this AI-powered future, the convergence of technological innovation and open collaboration holds the key to unlocking boundless opportunities for progress and prosperity in the telecommunications landscape.

