...


Notes to section editors:

  • Describe goals, technologies, key features, etc.
  • This chapter does not replace your project-specific technical documentation.
  • You may include a reference to your community wiki space.
  • Try to avoid references to specific time-dependent aspects of your project that might render this whitepaper obsolete quickly.
  • Please keep the length of your section to 1-2 pages.

3.1 FD.io

Editor: John DeNisco

                               (DRAFT)

FD.io (Fast Data – Input/Output) is a collection of several projects that support flexible, programmable packet processing services on a generic hardware platform. FD.io offers a landing site with multiple projects fostering innovation in software-based packet processing towards the creation of high-throughput, low-latency and resource-efficient I/O services suitable for many architectures (x86, ARM, and PowerPC) and deployment environments (bare metal, VM, container).

...

ONAP is enhanced with numerous features from release to release. Each release is named after a city.

...

A list of past and current releases may be found here:

https://wiki.onap.org/display/DW/Long+Term+Roadmap

Recent releases include:

  • 3.0.2 – 15 April 2019
  • 3.0.1 – 31 January 2019
  • 3.0.0 – 30 November 2018

...

3.3 OpenDaylight

Editor: Abhijit Kumbhare

Introduction

OpenDaylight (ODL) is a modular open platform for customizing and automating networks of any size and scale. The OpenDaylight Project arose out of the SDN movement, with a clear focus on network programmability. It was designed from the outset as a foundation for commercial solutions that address a variety of use cases in existing network environments.

...

As described above, the OpenDaylight platform (ODL) provides a flexible common platform underpinning a wide variety of applications and use cases. Some of the most common use cases are mentioned here.

ONAP

The SDN Controller (SDNC) and the Application Controller (APPC) components of ONAP are based on OpenDaylight.

Leveraging the common code base provided by the Common Controller Software Development Kit (CCSDK), ONAP provides two application-level configuration and lifecycle management controller modules called ONAP SDN-C and ONAP App-C. These controllers manage the state of a single resource (network or application). Both provide similar services (application-level configuration using NETCONF, Chef, Ansible, RESTCONF, etc.) and lifecycle management functions (e.g. stop, resume, health check, etc.). The ONAP SDN-C has been used mainly for Layer 1-3 network elements, while the ONAP App-C is used for Layer 4-7 network functions. Both the ONAP SDN-C and the ONAP App-C are extensions of the OpenDaylight controller framework.
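
To make this concrete, the following is a minimal sketch, assuming Python with the requests library, of how a client might invoke an application-level health check against such a controller through a RESTCONF RPC. The host name, credentials, request path and payload fields here are illustrative assumptions for this whitepaper, not the authoritative APPC API definition.

```python
import requests

APPC_URL = "https://appc.example.com:8443"  # hypothetical controller address

def health_check(vnf_id: str) -> dict:
    """Invoke a health-check LCM action on a VNF (illustrative payload)."""
    payload = {
        "input": {
            "common-header": {
                "api-ver": "2.00",
                "originator-id": "demo-client",  # caller identity
                "request-id": "req-0001",        # correlation id
            },
            "action": "HealthCheck",
            "action-identifiers": {"vnf-id": vnf_id},
        }
    }
    resp = requests.post(
        f"{APPC_URL}/restconf/operations/appc-provider-lcm:health-check",
        json=payload,
        auth=("admin", "admin"),  # demo credentials only
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()
```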

The ONAP SDN-C leverages the model-driven architecture in OpenDaylight. As illustrated, the ONAP SDN-C builds on the OpenDaylight framework composed of API handlers, operational and configuration trees, and network adapters for network device configuration. Within this framework, the newly introduced ONAP Service Logic Interpreter (SLI) provides an extensible scripting language to express service logic through a directed-graph builder based on Node-RED. The service logic describes how network service parameters (e.g. for an L3VPN) given through the northbound API are mapped onto the network device configuration parameters consumed by the external SDN controllers attached to the ONAP SDN-C.

External SDN controller

An external SDN controller interfaces with the southbound interface of the ONAP SDN-C and is used to manage the Layer 1-3 network devices in each network domain. Over this interface, the network configuration parameters extracted from the service logic are passed to the external SDN controllers. OpenDaylight, used as an external SDN controller, supports parameters for L3VPN, L2VPN, PCEP, NETCONF and more. The external OpenDaylight controller then deploys the given configurations into the network devices.
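
As a sketch of this last southbound step, the snippet below, assuming Python with the ncclient library, stages and commits a YANG-modeled configuration on a device over NETCONF, one of the protocols listed above. The device address, credentials and interface configuration are placeholders rather than any particular controller's schema.

```python
from ncclient import manager  # pip install ncclient

# Placeholder config using the standard ietf-interfaces YANG model.
CONFIG = """
<config xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
  <interfaces xmlns="urn:ietf:params:xml:ns:yang:ietf-interfaces">
    <interface>
      <name>ge-0/0/1</name>
      <description>L3VPN attachment circuit (example)</description>
    </interface>
  </interfaces>
</config>
"""

with manager.connect(
    host="192.0.2.1",      # example address (TEST-NET-1)
    port=830,              # standard NETCONF-over-SSH port
    username="admin",
    password="admin",
    hostkey_verify=False,  # demo only; verify host keys in production
) as m:
    m.edit_config(target="candidate", config=CONFIG)  # stage the change
    m.commit()                                        # activate it
```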

Network Virtualization for Cloud and NFV 

...

Figure X: The scope of CNTT.


Open Platform for NFV (OPNFV) is a project and community that facilitates a common NFVI, continuous integration (CI) with upstream projects, stand-alone testing toolsets, and a compliance and verification program for industry-wide testing and integration to accelerate the transformation of enterprise and service provider networks. Participation is open to anyone, whether you are an employee of a member company or just passionate about network transformation.

Proposed list of keywords for distributed completion of the OPNFV overview:

Question to all contributors: Shall we draw content from this existing OPNFV overview: https://docs.opnfv.org/en/stable-iruya/release/overview.html

  • Test tools
    • Functest and Xtesting: The Functest project provides a comprehensive testing methodology, test suites and test cases to test and verify OPNFV platform functionality covering the VIM and NFVI components. This project uses a "top-down" approach that starts with chosen ETSI NFV use case(s) and open source VNFs for the functional testing. The approach taken will be to:

      • break down the use case into the simple operations and functions required
      • specify the necessary network topologies
      • develop traffic profiles
      • develop the necessary test traffic
      • ideally use open source VNFs, though proprietary VNFs may also be used as needed

      This project will develop test suites that cover detailed functional test cases, test methodologies and platform configurations, which will be documented and maintained in a repository for use by other OPNFV testing projects and the community in general. Developing test suites will also help lay the foundation for a test automation framework that in future can be used by the continuous integration (CI) project (Octopus). Certain VNF deployment use cases could be automatically tested as an optional step of the CI process. The project targets testing of the OPNFV platform in a hosted test-bed environment (i.e. using the OPNFV test labs worldwide).

      The Xtesting aspect intends to integrate many test projects into a single, lightweight automation framework that leverages the existing test-api and testdb for publishing results.
    • VSPerf: Although originally named to emphasize data plane benchmarking and performance testing of vSwitch and NFV infrastructure, VSPerf has expanded its scope to multiple types of networking technologies (kernel bypass and cloud-native) and allows deployment in multiple scenarios (such as containers). VSPerf can utilize several different traffic generators and receivers for testing, including several popular hardware- and software-based systems. The VSPerf tool has many modes of operation, including a "traffic-generator-only" mode, in which any virtual network manager sets up the path to be tested and VSPerf automates the traffic generation and results reporting. VSPerf is compliant with ETSI NFV TST009 and IETF RFC 2544 (a sketch of the zero-loss throughput search these specify appears after this list).
    • StorPerf: A key challenge to measuring disk performance is to know when it is performing at a consistent and repeatable level of performance. Initial writes to a volume can perform poorly due to block allocation, and reads can appear instantaneous when reading empty blocks. The Storage Network Industry Association (SNIA) has developed methods which enable manufacturers to set, and customers to compare, the performance specifications of Solid State Storage devices. StorPerf applies this methodology to virtual and physical storage services to provide a high level of confidence in the performance metrics in the shortest reasonable time.
    • NFVBench: NFVBench is a lightweight end-to-end data plane benchmarking framework project. It includes traffic generator(s) and measures a number of packet performance related metrics.
    • Yardstick
  • Lab as a service
    • Lab as a Service (LaaS) is a "bare-metal cloud" hosting resource for the LFN community. We host compute and network resources that are installed and configured on demand for developers through an online web portal. The highly configurable nature of LaaS means that users can reserve a Pharos-compliant or CNTT-compliant POD. Resources are scheduled in blocks of time, ensuring individual projects and users do not monopolize resources. By providing a lab environment to developers, we enable more testing, faster development, and better collaboration between LFN projects.
  • CI/CD for continuous deployment and testing of NFVI stacks
  • OPNFV Lab Infrastructure
    • OPNFV leverages globally distributed community labs provided by member organizations. These labs are used by both developers of OPNFV projects as well as the extensive CI/CD tooling to continuously deploy and test OPNFV reference stacks. In order to ensure a consistent environment across different labs, OPNFV community labs follow a lab specification (Pharos spec) defining a high-level hardware configuration and network topology. In the context of CNTT reference implementations, any updates will be added to the Pharos specification in future releases.

  • Feature projects working towards closing feature gaps in upstream open source communities providing the components for building full NFVI stacks
  • Deployment tools
    • Airship
    • Fuel / MCP
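
Since several of the test tools above (VSPerf, NFVBench) automate RFC 2544-style measurements, the following minimal sketch shows the core of such a test: a binary search for the highest offered load that incurs zero packet loss. It is plain Python, and the send_at_rate traffic-generator hook is a hypothetical stand-in for a real generator such as those VSPerf drives.

```python
from typing import Callable

def rfc2544_throughput(
    send_at_rate: Callable[[float], float],
    line_rate: float,
    resolution: float = 0.001,
) -> float:
    """Binary-search the highest zero-loss rate, as a fraction of line rate.

    send_at_rate(rate) runs one trial at `rate` frames/s and returns the
    observed loss ratio; it is a hypothetical traffic-generator hook.
    """
    lo, hi = 0.0, 1.0  # search bounds as fractions of line rate
    best = 0.0
    while hi - lo > resolution:
        mid = (lo + hi) / 2
        if send_at_rate(mid * line_rate) == 0.0:
            best = mid   # zero loss: remember and try a higher rate
            lo = mid
        else:
            hi = mid     # loss observed: back off
    return best * line_rate

# Fake generator dropping frames above 60% of 10GbE 64-byte line rate:
if __name__ == "__main__":
    fake = lambda r: 0.0 if r <= 0.6 * 14_880_952 else 0.05
    print(f"{rfc2544_throughput(fake, 14_880_952):,.0f} frames/s")
```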

...

  • Aggregate data like logs, metrics and network telemetry
  • Scale up to consume millions of messages per second
  • Efficiently distribute data with publish and subscribe model
  • Process bulk data in batches, or streaming data in real-time
  • Manage lifecycle of applications that process and analyze data
  • Let you explore data using interactive notebooks
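
Since PNDA distributes data through a publish/subscribe bus (Apache Kafka in PNDA's architecture), the sketch below shows a producer publishing one telemetry record. It assumes Python with the kafka-python client; the broker address and the metrics.raw topic name are illustrative, not PNDA's actual topic layout.

```python
import json
from kafka import KafkaProducer  # pip install kafka-python

# Hypothetical broker and topic; adjust to the actual cluster.
producer = KafkaProducer(
    bootstrap_servers="kafka.example.com:9092",
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

# Publish one record; any number of decoupled consumers subscribed
# to the topic will receive it.
producer.send("metrics.raw", {"host": "node-1", "cpu_pct": 42.0})
producer.flush()  # block until the broker acknowledges the record
```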

PNDA Architecture



PNDA Operational View

The PNDA dashboard provides an overview of the health of the PNDA components and all applications running on the PNDA platform. The health report includes active data path testing that verifies successful ingress, storage, query and batch consumption of live data.



...


3.7 Tungsten Fabric

Editor: Prabhjot Singh Sethi

...

  • Highly scalable, multi-tenant networking
  • Multi-tenant IP address management
  • DHCP, ARP proxies to avoid flooding into networks
  • Efficient edge replication for broadcast and multicast traffic
  • Local, per-tenant DNS resolution
  • Distributed firewall with access control lists
  • Application-based security policies based on tags
  • Distributed load balancing across hosts
  • Network address translation (1:1 floating IPs and distributed SNAT)
  • Service chaining with virtual network functions
  • Dual stack IPv4 and IPv6
  • BGP peering with gateway routers
  • BGP as a Service (BGPaaS) for distribution of routes between privately managed customer networks and service provider networks
  • Integration with VMware orchestration stack


3.8 SNAS

Editor: <TBD>

Streaming Network Analytics System (project SNAS) is a framework to collect, track and access tens of millions of routing objects (routers, peers, prefixes) in real time.

SNAS extracts data from BGP routers using a BGP Monitoring Protocol (BMP) interface. The data is parsed and made available to consumers through a Kafka message bus. Consumer applications can in turn perform further analytics and visualization of the topology data.
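
The consumer side can be sketched the same way, assuming Python with the kafka-python client. The broker address is a placeholder, and the topic name follows the OpenBMP parsed-message convention that SNAS builds on, so treat both as assumptions.

```python
from kafka import KafkaConsumer  # pip install kafka-python

consumer = KafkaConsumer(
    "openbmp.parsed.unicast_prefix",            # assumed topic name
    bootstrap_servers="snas.example.com:9092",  # placeholder broker
    auto_offset_reset="latest",                 # only new routing updates
)

for msg in consumer:
    # Each message carries one parsed BMP record (e.g. a prefix
    # announcement or withdrawal) ready for analytics/visualization.
    print(msg.topic, msg.offset, msg.value[:80])
```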
