...

ONAP is enhanced with numerous features from release to release. Each release is named after a city. A list of past and current releases may be found here:

https://wiki.onap.org/display/DW/Long+Term+Roadmap

...



3.3 OpenDaylight

Editor: Abhijit Kumbhare

Introduction

OpenDaylight (ODL) is a modular open platform for customizing and automating networks of any size and scale. The OpenDaylight Project arose out of the SDN movement, with a clear focus on network programmability. It was designed from the outset as a foundation for commercial solutions that address a variety of use cases in existing network environments.

OpenDaylight Architecture

Model-Driven

The core of the OpenDaylight platform is the Model-Driven Service Abstraction Layer (MD-SAL). In OpenDaylight, underlying network devices and network applications are all represented as objects, or models, whose interactions are processed within the SAL.

[Figure: the OpenDaylight Model-Driven Service Abstraction Layer (MD-SAL)]

The SAL is a data exchange and adaptation mechanism between YANG models representing network devices and applications. The YANG models provide generalized descriptions of a device or application’s capabilities without requiring either to know the specific implementation details of the other. Within the SAL, models are simply defined by their respective roles in a given interaction. A “producer” model implements an API and provides the API’s data; a “consumer” model uses the API and consumes the API’s data. While ‘northbound’ and ‘southbound’ provide a network engineer’s view of the SAL, ‘consumer’ and ‘producer’ are more accurate descriptions of interactions within the SAL. For example, a protocol plugin and its associated model can either be a producer of information about the underlying network, or a consumer of application instructions it receives via the SAL.

The SAL matches producers and consumers from its data stores and exchanges information. A consumer can find a provider that it’s interested in. A producer can generate notifications; a consumer can receive notifications and issue RPCs to get data from providers. A producer can insert data into SAL’s storage; a consumer can read data from SAL’s storage. A producer implements an API and provides the API’s data; a consumer uses the API and consumes the API’s data.
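This producer/consumer interaction can also be exercised from outside the controller through OpenDaylight's RESTCONF northbound interface, which reads and writes the same MD-SAL data stores. The following is a minimal sketch, assuming a locally running controller with RESTCONF enabled on port 8181, default admin credentials, the network-topology model installed, and the legacy /restconf paths (newer releases expose /rests/data instead); the topology and node names are placeholders.

```python
# Minimal sketch: reading and writing the MD-SAL data stores over RESTCONF.
# Assumptions: OpenDaylight running locally with RESTCONF on port 8181,
# default admin/admin credentials, network-topology model installed, and the
# legacy draft-bierman /restconf paths (newer releases use /rests/data).
import requests

BASE = "http://localhost:8181/restconf"
AUTH = ("admin", "admin")
HEADERS = {"Content-Type": "application/json", "Accept": "application/json"}

# Consumer role: read the operational data store (state reported by producers
# such as southbound protocol plugins).
resp = requests.get(f"{BASE}/operational/network-topology:network-topology",
                    auth=AUTH, headers=HEADERS)
print(resp.status_code, resp.json() if resp.ok else resp.text)

# Producer role: write intended state into the config data store; plugins that
# consume this subtree act on the change. Topology and node IDs are placeholders.
node = {"node": [{"node-id": "example-node-1"}]}
resp = requests.put(
    f"{BASE}/config/network-topology:network-topology/topology/example-topo/node/example-node-1",
    json=node, auth=AUTH, headers=HEADERS)
print(resp.status_code)
```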

Modular and Multiprotocol

The ODL platform is designed to allow downstream users and solution providers maximum flexibility in building a controller to fit their needs. The modular design of the ODL platform allows anyone in the ODL ecosystem to leverage services created by others; to write and incorporate their own; and to share their work with others. ODL includes support for the broadest set of protocols in any SDN platform – OpenFlow, OVSDB, NETCONF, BGP and many more – that improve programmability of modern networks and solve a range of user needs.

Southbound protocols and control plane services, anchored by the MD-SAL, can be individually selected or written, and packaged together according to the requirements of a given use case. A controller package is built around four key components (odlparent, controller, MD-SAL and yangtools). To this, the solution developer adds a relevant group of southbound protocol plugins, most or all of the standard control plane functions, and some select number of embedded and external controller applications, managed by policy.

Each of these components is isolated as a Karaf feature, to ensure that new work doesn’t interfere with mature, tested code. OpenDaylight uses OSGi and Maven to build a package that manages these Karaf features and their interactions.

This modular framework allows developers and users to:

  • Only install the protocols and services they need
  • Combine multiple services and protocols to solve more complex problems as needs arise
  • Incrementally and collaboratively evolve the capabilities of the open source platform
  • Quickly develop custom, value-added features for highly specialized use cases, leveraging a common platform shared across the industry.

Bottom Line about the Architecture

The modularity and flexibility of OpenDaylight allow end users to select whichever features matter to them and to create controllers that meet their individual needs.

Use Cases

As described above, the OpenDaylight platform (ODL) provides a flexible common platform underpinning a wide variety of applications and use cases. Some of the most common use cases are mentioned here.

ONAP

The SDN Controller (SDNC) and the Application Controller (APPC) components of ONAP are based on OpenDaylight.

Leveraging the common code base provided by the Common Controller Software Development Kit (CCSDK), ONAP provides two application-level configuration and lifecycle management controller modules, ONAP SDN-C and ONAP App-C. These controllers manage the state of a single resource (network or application). Both provide similar services (application-level configuration using NETCONF, Chef, Ansible, RESTCONF, etc.) and lifecycle management functions (e.g. stop, resume, health check). ONAP SDN-C has been used mainly for Layer 1-3 network elements, while ONAP App-C is used for Layer 4-7 network functions. Both components are extended from the OpenDaylight controller framework.

The ONAP SDN-C leverages the model-driven architecture of OpenDaylight. As illustrated below, the ONAP SDN-C builds on the OpenDaylight framework composed of API handlers, operational and configuration trees, and network adapters for network device configuration. Within this framework, the newly introduced ONAP Service Logic Interpreter (SLI) provides an extensible scripting language to express service logic through the Directed Graph builder, based on Node-RED. The service logic describes how network service parameters (e.g. L3VPN) received from the northbound API are mapped onto network device configuration parameters consumed by the external SDN controllers attached to the ONAP SDN-C.

[Figure: ONAP SDN-C built on the OpenDaylight framework]
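To make the SLI flow more concrete, the sketch below invokes a directed graph through the SLI-API RESTCONF RPC that SDN-C exposes. This is illustrative only: the host, port and credentials are placeholders, the graph module and RPC names are hypothetical, and the exact payload fields can vary between ONAP releases.

```python
# Illustrative sketch: triggering a (hypothetical) directed graph via the
# SDN-C SLI-API RESTCONF RPC. Host, credentials and graph names are placeholders.
import requests

SDNC = "http://localhost:8282/restconf/operations/SLI-API:execute-graph"
AUTH = ("admin", "admin")  # placeholder credentials

payload = {
    "input": {
        "module-name": "example-l3vpn",   # hypothetical directed-graph module
        "rpc-name": "activate-service",   # hypothetical directed-graph entry point
        "mode": "sync",
        "sli-parameter": [
            {"parameter-name": "service-id", "string-value": "vpn-001"}
        ]
    }
}

resp = requests.post(SDNC, json=payload, auth=AUTH,
                     headers={"Content-Type": "application/json"})
print(resp.status_code, resp.text)
```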

External SDN controller

An external SDN controller interfaces with the southbound interface of the ONAP SDN-C and is used to manage the Layer 1-3 network devices in each network domain. Over this interface, the network configuration parameters extracted from the service logic are passed to the external SDN controllers. OpenDaylight, acting as an external SDN controller, supports parameters for L3VPN, L2VPN, PCEP, NETCONF and more. The external OpenDaylight controller deploys the given configurations onto the network devices.

Network Virtualization for Cloud and NFV 

The OpenDaylight NetVirt application can be used to provide network virtualization (overlay connectivity) inside and between data centers for the Cloud SDN use case:

  • VxLAN within the data center
  • L3 VPN across data centers

The components used to provide network virtualization are shown in the diagram below:

[Figure: OpenDaylight NetVirt components for network virtualization]

Network Abstraction

OpenDaylight can expose a Network Services API to northbound applications for network automation in a multi-vendor network.

[Figure: network abstraction through the OpenDaylight Network Services API]

These are just a few of the common use cases; the platform can be, and continues to be, tailored to many others.

3.4 OpenSwitch

Editor: Mike Lazar

Overview

OpenSwitch (OPX), an open source network operating system (NOS) and ecosystem, is an early adopter of emerging concepts and technologies (hardware and software disaggregation, use of open source, SDN, NFV and DevOps) that disrupt how networks and networking equipment are built and operated. Designed around a standard Debian Linux distribution with an unmodified Linux kernel, OpenSwitch provides a programmable high-level abstraction of network components such as switching ASICs (network processors) and optical transceivers. Architected as a scalable, cloud-ready, agile solution, the open source OpenSwitch software implements a flexible infrastructure that enables both network operators and vendors to rapidly on-board open source networking OS applications. OpenSwitch provides a YANG-based programmatic interface that can be accessed using Python, providing an environment well suited for DevOps.

OpenSwitch Features

OpenSwitch (or OPX) provides an abstraction of hardware network devices in a Linux OS environment. It has been designed from its inception to support the newest technologies and concepts in the networking industry:

  • In OPX, software is disaggregated from hardware, and software components are disaggregated as well.
    • OpenSwitch can be deployed on diverse networking hardware; only the low-level software layers, the Switch Abstraction Interface (SAI) and the System Device Interface (SDI), are hardware specific and may need to be adapted. A minimum requirement is for the hardware to be built around a standard ASIC, with Layer 2 switching, Layer 3 routing, ACL and QoS functionality.

  • Makes extensive use of standard open source software, for instance:
    • ONIE installer
    • Linux Debian distribution with an unmodified Linux kernel 
    • Switch Abstraction Interface (SAI) defined in Open Compute Project for interfacing with the networking ASIC.
  • Integrates Linux native APIs with networking ASIC functionality. In OpenSwitch, networking features are also accessible through the standard Linux APIs (“netlink”), so standard open source network packages (such as FRR) can be installed and supported in binary format.
  • OPX supports containers and NFV. The Docker container environment (Docker CE), or any other Linux container environment, can be installed on any OpenSwitch device; in this environment, users can deploy their own containerized virtual network functions (VNFs).
  • Supports programmability, automation and DevOps:
    • A robust and flexible programmatic interface – namely the Control Plane Services (CPS). The API is defined using YANG models and is accessible through Python (and C/C++). 
    • The availability of a programmatic interface (CPS API/YANG models) allows integration with external orchestrators and SDN controllers
  • Provides a rich set of networking features including full access to the networking ASIC ACL and QoS functionality using CPS API/YANG models.

OpenSwitch provides support for:

  • L2 protocols: LLDP, LACP (link aggregation interfaces), 802.1q (VLAN interfaces), STP and bridge interfaces
  • L3 protocols (e.g. BGP)
  • ACL and QoS network functions (through CPS / YANG API's)
  • Instrumentation: sFlow, telemetry
  • Orchestration and management

Programmability and Automation 

OpenSwitch supports a rich ecosystem for automated deployment, for instance:

  • Ansible – various modules are already defined for OpenSwitch
  • Zero-touch provisioning (ZTP) allows provisioning of OpenSwitch ONIE-enabled devices automatically, without manual intervention
  • Puppet

North-Bound Programmatic Interfaces

The OpenSwitch CPS programmatic interface is defined using YANG models and, in combination with Python, provides support for programming network functionality, automation and DevOps. While the CPS API is the native OpenSwitch API, a REST API can be added as well by mapping REST requests to CPS.
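As a rough sketch of what CPS programming looks like in practice, the fragment below queries interface objects through the Python bindings. It assumes an OPX device exposing the cps and cps_utils Python modules; the model key path and attribute names follow typical OpenSwitch examples and may differ between OPX releases.

```python
# Rough sketch: querying interface objects through the OPX CPS Python API.
# Assumes the cps / cps_utils bindings shipped with OpenSwitch; the model key
# path and attribute names are illustrative and may differ between releases.
import cps
import cps_utils

# Build a CPS object keyed on the interfaces model (illustrative key path).
obj = cps_utils.CPSObject('dell-base-if-cmn/if/interfaces/interface')
obj.add_attr('if/interfaces/interface/name', 'e101-001-0')  # one front-panel port

result = []
if cps.get([obj.get()], result):
    for entry in result:
        cps_utils.print_obj(entry)   # dump the returned attributes
else:
    print("CPS get failed")
```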

In addition, a set of OpenSwitch specific commands are available and can be invoked from a Linux shell (e.g. display the current software version, hardware inventory etc.).

OpenSwitch Architecture

The figure below illustrates the main areas of the OpenSwitch architecture:

[Figure: main areas of the OpenSwitch architecture]

OPX Base

The key components of OPX Base are:

NAS – Network Adaptation Service

  • Manages the high level abstraction of the switching ASIC
  • NAS manages the middle-ware that associates physical ports to Linux interfaces, and adapts Linux native API calls (e.g. netlink) to the switching ASIC

PAS- Platform Adaptation Service

  • A higher-level abstraction and aggregation of the functionality provided by the SDI component

CPS – Control Plane Service 

  • Object centric framework
  • Mediates between application software components and the platform
  • Provides a pub/sub model and set/get/delete/create
  • Provides the framework for defining YANG modeled APIs - with Python and C/C++ bindings. In OPX, YANG models are used with an efficient CPS binary encoding.

SAI – Switch Abstraction Interface

  • SAI API is an open interface that abstracts vendor-specific switching ASIC behavior

SDI – System Device Interface

  • An API that provides a low level abstraction of platform specific hardware devices (e.g. fans, power supplies, sensors…)

OPX Applications

A variety of open source or vendor specific applications have been tested and can be deployed with OpenSwitch:

  • FRR - BGP
  • AAA: TACACS+, RADIUS
  • Telemetry: Broadview, Packet Trakker 
  • Inocybe OpenDaylight integration
  • NetSNMP
  • Puppet
  • Chef

It should be noted that these applications are not pre-installed with OpenSwitch. In a "disaggregated" model, users select and install applications based on the requirements of a given network deployment.

In general, since OpenSwitch is based on a Debian Linux distribution with an unmodified kernel, any Debian binary application can be installed and executed on OpenSwitch devices.

Hardware Simulation

OPX software supports hardware virtualization (simulation). Software simulation of basic hardware functionality is provided through simulation-specific SAI and SDI components, so the higher-layer software functionality can be developed and tested on generic PC/server hardware. OPX hardware simulation can be executed under VirtualBox, GNS3/QEMU, etc.

3.5 OPNFV and CNTT

Editor: Rabi Abdel

Introduction

The Common NFVI Telco Taskforce (CNTT) is an LFN-sponsored taskforce (with GSMA co-sponsorship) that aims to minimise the number of Network Function Virtualisation Infrastructure (NFVI) configurations in use (a result of the fragmentation of technologies and solutions provided by open source projects) and to define a small set of standardised infrastructure profiles for various NFV workloads as well as IT workloads. (More information about CNTT can be found on the CNTT GitHub.)

CNTT comprises:

  • Reference Model: explains how the infrastructure is exposed to workloads in a standard way.
  • Reference Architecture(s): OpenStack-based and Kubernetes-based (at the time of writing), to deliver a conformant infrastructure based on those technologies.
  • Reference Implementation(s): OpenStack-based and Kubernetes-based (at the time of writing), to be used as the basis of any testing and validation activities.
  • Reference Certification(s): an extensive set of testing harnesses used to verify the conformance of any vendor-implemented infrastructure to the CNTT specifications.


Figure X: The scope of CNTT.

Open Platform for NFV (OPNFV) is a project and community that facilitates a common NFVI, continuous integration (CI) with upstream projects, stand-alone testing toolsets, and a compliance and verification program for industry-wide testing and integration to accelerate the transformation of enterprise and service provider networks. Participation is open to anyone, whether you are an employee of a member company or just passionate about network transformation.

The main OPNFV activities and assets include:


  • Test tools
    • Functest and X-testing: The Functest project provides a comprehensive testing methodology, test suites and test cases to test and verify OPNFV platform functionality covering the VIM and NFVI components. The project uses a "top-down" approach that starts with chosen ETSI NFV use case(s) and open source VNFs for the functional testing. The approach taken is to

      • break down the use-case into simple operations and functions required.
      • specify necessary network topologies
      • develop traffic profiles
      • develop necessary test traffic
      • ideally use open source VNFs, though proprietary VNFs may also be used as needed.

      This project will develop test suites that cover detailed functional test cases, test methodologies and platform configurations, which will be documented and maintained in a repository for use by other OPNFV testing projects and the community in general. Developing test suites will also help lay the foundation for a test automation framework that can in future be used by the continuous integration (CI) project (Octopus). Certain VNF deployment use cases could be automatically tested as an optional step of the CI process. The project targets testing of the OPNFV platform in a hosted test-bed environment (i.e. using the OPNFV test labs worldwide).

      The X-testing aspect integrates many test projects into a single, lightweight automation framework that leverages the existing test-api and testdb for publishing results (a minimal test-case sketch is shown after this list).
    • VSPerf: Although originally named to emphasize data plane benchmarking and performance testing of vSwitch and NFV infrastructure, VSPerf has expanded its scope to multiple types of networking technologies (kernel bypass and cloud native) and allows deployment in multiple scenarios (such as containers). VSPerf can utilize several different traffic generators and receivers for testing, including several popular hardware- and software-based systems. The VSPerf tool has many modes of operation, including a "traffic-generator-only" mode, in which any virtual network manager sets up the path to be tested and VSPerf automates the traffic generation and results reporting. VSPerf is compliant with ETSI NFV TST009 and IETF RFC 2544.
    • StorPerf: A key challenge to measuring disk performance is to know when it is performing at a consistent and repeatable level of performance. Initial writes to a volume can perform poorly due to block allocation, and reads can appear instantaneous when reading empty blocks. The Storage Network Industry Association (SNIA) has developed methods which enable manufacturers to set, and customers to compare, the performance specifications of Solid State Storage devices. StorPerf applies this methodology to virtual and physical storage services to provide a high level of confidence in the performance metrics in the shortest reasonable time.
    • NFVbench: a lightweight end-to-end dataplane benchmarking framework project. It includes traffic generator(s) and measures a number of packet performance related metrics.
    • Yardstick
  • Lab as a service
    • Lab as a Service (LaaS) is a “bare-metal cloud” hosting resource for the LFN community. We host compute and network resources that are installed and configured on demand for developers through an online web portal. The highly configurable nature of LaaS means that users can reserve a Pharos-compliant or CNTT-compliant POD. Resources are booked and scheduled in blocks of time, ensuring individual projects and users do not monopolize resources. By providing a lab environment to developers, we enable more testing, faster development, and better collaboration between LFN projects.
  • CI/CD for continuous deployment and testing of NFVI stacks
  • OPNFV Lab Infrastructure
    • OPNFV leverages globally distributed community labs provided by member organizations. These labs are used by both developers of OPNFV projects as well as the extensive CI/CD tooling to continuously deploy and test OPNFV reference stacks. In order to ensure a consistent environment across different labs, OPNFV community labs follow a lab specification (Pharos spec) defining a high-level hardware configuration and network topology. In the context of CNTT reference implementations, any updates will be added to the Pharos specification in future releases.

  • Feature projects working towards closing feature gaps in upstream open source communities providing the components for building full NFVI stacks
  • Deployment tools
    • Airship
    • Fuel / MCP
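Returning to the Functest and X-testing entries above: test projects plug into X-testing by subclassing its TestCase driver and reporting a result that the framework publishes. The following is a minimal sketch, assuming the xtesting Python package; the check performed is a placeholder.

```python
# Minimal sketch of an X-testing test case, assuming the xtesting package.
# The actual check performed here is a placeholder.
import time

from xtesting.core import testcase


class HealthCheck(testcase.TestCase):
    """Illustrative test case publishing a pass/fail result to X-testing."""

    def run(self, **kwargs):
        self.start_time = time.time()
        ok = True  # placeholder for a real check against the system under test
        self.result = 100 if ok else 0   # success rate reported to the framework
        self.stop_time = time.time()
        return self.EX_OK if ok else self.EX_RUN_ERROR
```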

...

  • Aggregate data like logs, metrics and network telemetry
  • Scale up to consume millions of messages per second
  • Efficiently distribute data with publish and subscribe model
  • Process bulk data in batches, or streaming data in real-time
  • Manage lifecycle of applications that process and analyze data
  • Let you explore data using interactive notebooks
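As an illustration of the publish/subscribe ingest path, the sketch below sends a single record to a Kafka topic of the kind PNDA consumes. It assumes the kafka-python client; the broker address and topic name are placeholders, and production PNDA pipelines typically expect Avro-encoded events conforming to the PNDA schema.

```python
# Illustrative producer pushing one record onto a Kafka topic for ingestion.
# Assumes the kafka-python package; broker address and topic are placeholders,
# and real PNDA pipelines normally expect Avro-encoded events.
import json
import time

from kafka import KafkaProducer

producer = KafkaProducer(
    bootstrap_servers="kafka.example.net:9092",           # placeholder broker
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

event = {"src": "example-probe",
         "timestamp": int(time.time() * 1000),
         "payload": "interface eth0 up"}
producer.send("telemetry.raw", value=event)               # placeholder topic
producer.flush()
```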

PNDA Architecture

[Figure: PNDA architecture]


PNDA Operational View

...