Below is a comparison between BMRA and the content of RA2 (Chapters 3 and 4). For now it focuses on the infrastructure (host, Kubernetes) and the compute and networking services available in the infrastructure.
Documentation for BMRA is available here
Minimum kernel version: 3.10.0-1062.9.1 (as per documentation)
A minimum kernel version is required for compatibility with Kubeadm, and is therefore covered by the BMRA deployment of Kubernetes
K8s version support
BMRA is released in three versions, each mapped to a specific version of K8s, following the upstream (n-2) version support policy (e.g. if the latest release is 1.18, then 1.18, 1.17, and 1.16 are supported).
Control plane services
Kube-* services on master
Configurable through BMRA. Default configuration deploys on master node(s)
Configurable through BMRA. Default configuration has 3 master nodes with etcd enabled on all
Availability zones through K8s (node labels) are not supported. If availability zones are managed outside of K8s, BMRA supports spreading etcd across them (see the other etcd requirements above)
CPU, Device, Topology
CPUManager and Device Plugin enabled by default, Topology Manager (with 'best-effort' policy) is included as default in BMRA configuration
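As a rough sketch, the defaults described above correspond to kubelet settings along these lines (field names are from the upstream KubeletConfiguration API; the reserved CPU list is an assumption for illustration, not a BMRA default):

```yaml
# KubeletConfiguration fragment (kubelet.config.k8s.io/v1beta1) -- a sketch of
# the defaults described above, not the literal BMRA-generated configuration.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cpuManagerPolicy: static            # CPUManager enabled (exclusive cores for Guaranteed pods)
reservedSystemCPUs: "0,1"           # assumed example: cores held back for system/kube daemons
topologyManagerPolicy: best-effort  # align CPU/device NUMA placement when possible
featureGates:
  TopologyManager: true             # Topology Manager is still feature-gated in these K8s versions
```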
Conformant with CRI and OCI
CPU Manager and CPU Pooler
The BMRA solution uses CMK (Intel CPU Manager for Kubernetes), which provides functionality similar to CPU Pooler
For the most part, both offer similar functionality; several of the remaining features depend on additional CNIs.
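Both CMK and CPU Pooler ultimately hand exclusive cores to containers; with the upstream CPUManager, the trigger is a Guaranteed-QoS pod with integer CPU requests. A minimal illustration (pod, container, and image names are placeholders):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pinned-workload        # placeholder name
spec:
  containers:
  - name: app
    image: busybox             # placeholder image
    command: ["sleep", "infinity"]
    resources:
      requests:
        cpu: "2"               # integer CPU count with requests == limits ...
        memory: "256Mi"
      limits:
        cpu: "2"               # ... yields Guaranteed QoS, so the static
        memory: "256Mi"        # CPUManager assigns two exclusive cores
```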
(SHOULD) Centrally administrated and configured (req.inf.ntw.03)
According to the RA2 documentation, this is unsupported in Multus and only partially supported in DANM. More information about the requirement is needed to determine how it can be fulfilled.
SR-IOV Device Plugin and CNI
BMRA supports installing and configuring the SR-IOV device plugin and CNI for use with containers. All details can be added to the BMRA configuration files, which ensures the VFs and networks are available when the deployment finishes.
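Once the SR-IOV device plugin and CNI are deployed, workloads typically attach to a VF through a NetworkAttachmentDefinition along these lines (the resource pool name, network name, and IPAM range below are assumptions for illustration, not BMRA defaults):

```yaml
apiVersion: "k8s.cni.cncf.io/v1"
kind: NetworkAttachmentDefinition
metadata:
  name: sriov-net1                                                    # placeholder name
  annotations:
    k8s.v1.cni.cncf.io/resourceName: intel.com/intel_sriov_netdevice  # assumed device-plugin pool
spec:
  config: '{
      "type": "sriov",
      "cniVersion": "0.3.1",
      "name": "sriov-net1",
      "ipam": {
        "type": "host-local",
        "subnet": "10.56.217.0/24",
        "routes": [{ "dst": "0.0.0.0/0" }],
        "gateway": "10.56.217.1"
      }
    }'
```

Pods then request the attachment via the `k8s.v1.cni.cncf.io/networks: sriov-net1` annotation, and the scheduler only places them on nodes advertising free VFs in the named resource pool.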
There are ongoing discussions related to the use of CNI Multiplexers. More info can be found in 2020-06-18 - [CNTT - RA2] weekly meeting notes and recording.
Application Package Manager
Available in RA2
Utilizing Kubernetes APIs
Currently Helm v2 is used. This is expected to change to Helm v3 at some point, which removes the need for the server-side component (Tiller)
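The practical difference shows up in the CLI; a sketch (chart and release names are illustrative):

```shell
# Helm v2: a server-side component (Tiller) must first be installed in the cluster
helm init                                      # deploys Tiller to kube-system
helm install stable/nginx-ingress              # release name auto-generated

# Helm v3: no Tiller; release state is stored as Secrets in the release namespace
helm install my-ingress stable/nginx-ingress   # release name is now required
```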