
Current state - We need to evaluate whether the current BMRA specs and requirements are suitable for the near term given that Python 2 is EOL, and whether an update is planned on the BMRA roadmap. In the meantime, package the playbooks in containers and evolve the BM engine to deploy them, which resolves the dependency issues.


BMRA v1.4.1 Installation

The following is based on configuration and installation outside of the OPNFV Intel Lab.

The OS used for the Jump, Master, and Worker nodes is CentOS 7.8.2003 (kernel 3.10.0-957.12.2.el7.x86_64).

Prepare worker node

Prior to installing BMRA, log on to the worker node and check the hardware configuration. This information is used when configuring BMRA later.

CPU:

  • $ lscpu 
  • Note down the number of cores and the enumeration method used for cores/threads.
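
For example, the relevant fields can be pulled directly out of lscpu (illustrative output from a hypothetical 2-socket node; values will differ):

$ lscpu | grep -E 'Thread|Core|Socket|NUMA node[0-9]'
Thread(s) per core:    2
Core(s) per socket:    22
Socket(s):             2
NUMA node0 CPU(s):     0-21,44-65
NUMA node1 CPU(s):     22-43,66-87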

Interfaces / PFs:

  • $ ip a
    • Note down interfaces to be used with SR-IOV
  • $ lspci | grep Eth 
    • Check what PFs are available
  • $ lspci -n -s <bus:device.function> 
    • Get the vendor and device IDs, e.g. 8086:1572
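
For example, on a hypothetical node with an Intel X710 NIC (bus addresses and output are illustrative):

$ lspci | grep Eth
18:00.0 Ethernet controller: Intel Corporation Ethernet Controller X710 for 10GbE SFP+ (rev 02)
18:00.1 Ethernet controller: Intel Corporation Ethernet Controller X710 for 10GbE SFP+ (rev 02)
$ lspci -n -s 18:00.0
18:00.0 0200: 8086:1572 (rev 02)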

Prepare BMRA

Start by installing tools needed for running the BMRA playbook:

$ yum update
$ yum install git epel-release python36
$ yum install python-pip
$ pip install ansible==2.7.16 jmespath
$ pip install jinja2 --upgrade
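
A quick sanity check of the tool versions before proceeding (output is illustrative):

$ ansible --version
ansible 2.7.16
$ python -c "import jinja2, jmespath; print(jinja2.__version__)"
2.11.2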


Prepare the BMRA software:

$ git clone https://github.com/intel/container-experience-kits.git
$ cd container-experience-kits/
$ git checkout <tag>
- v1.4.1 (If using Kubernetes 1.16)
- v1.3.1 (If using Kubernetes 1.15)
- v1.2.1 (If using Kubernetes 1.14)
$ git submodule update --init
$ cp examples/inventory.ini .
$ cp -r examples/group_vars examples/host_vars .

Update inventory.ini to match the hardware and size of the deployment. A minimal setup can look as follows:

[all]
master1 ansible_host=<master_ip> ip=<master_ip>
node1 ansible_host=<worker_ip> ip=<worker_ip>

[kube-master]
master1

[etcd]
master1

[kube-node]
node1

[k8s-cluster:children]
kube-master
kube-node

[calico-rr]

Update host_vars/node1.yml (and any additional files depending on inventory.ini):

  • sriov_enabled - Change to true if VFs should be created during installation
  • sriov_nics - Update with the interface names (PFs) from the node. The number of VFs and the driver can be changed too
  • vpp_enabled & ovs_dpdk_enabled - Disable one or both depending on the need for a vSwitch. Enabling both might cause issues
  • force_nic_drivers_update - Set to false if the SSH connection to the machine uses an interface with the i40e or i40evf driver (otherwise the connection to the node is likely to be broken)
  • install_ddp_packages - Set to false if DDP (Dynamic Device Personalization) should not be installed
  • isolcpus - Change according to the HW and the need for isolated cores/threads. Also relevant for the CMK configuration (see below)
  • sst_bf_configuration_enabled - Consider setting this to false unless the platform and HW support it
  • Additional changes can be made as needed; a minimal sketch of the file follows this list
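
A minimal sketch of host_vars/node1.yml based on the settings above. The interface names, VF count, and core ranges are assumptions for this example, and the exact key layout may differ between releases, so copy the structure from the file shipped in examples/host_vars:

sriov_enabled: true
sriov_nics:                          # PF names noted earlier with 'ip a'
  - enp24s0f0
  - enp24s0f1
sriov_numvfs: 4                      # assumed key; number of VFs to create per PF
vpp_enabled: false
ovs_dpdk_enabled: true
force_nic_drivers_update: false
install_ddp_packages: false
isolcpus: "4-21,48-65"               # adjust to the cores/threads noted with lscpu
sst_bf_configuration_enabled: false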

Update group_vars/all.yml according to hardware and capabilities (a sketch follows this list):

  • cmk_hosts_list - Update according to inventory.ini, e.g. only node1 for the above example
  • cmk_shared_num_cores  & cmk_exclusive_num_cores  - Update according to available cores/threads and the number of isolated cores
    • Make sure sufficient cores/threads are isolated to support the shared and exclusive pools (see isolcpus above)
  • sriovdp_config_data  - Update according to the sriov_nics host_vars config (see above)
    • Updates might be needed depending on the NICs. See the details on GitHub
  • qat_dp_enabled - Change to false if the HW doesn't support QAT, either through the chipset or a PCI add-on
  • gpu_dp_enabled  - Change to false if not supported on HW
  • tas_enabled  - Can be set to false if not needed
  • tas_create_policy - Set to false if TAS is not enabled 
  • cluster_name  - Can be changed to something more specific than "cluster.local"
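
A sketch of the corresponding group_vars/all.yml changes for the single-node example above. Values are illustrative; the shipped file contains many more options and comments describing the expected formats:

cmk_hosts_list: node1
cmk_shared_num_cores: 2
cmk_exclusive_num_cores: 4
# sriovdp_config_data: keep the default or align it with the sriov_nics host_vars settings (see GitHub)
qat_dp_enabled: false
gpu_dp_enabled: false
tas_enabled: false
tas_create_policy: false
cluster_name: cluster.local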

Install BMRA 

Once the necessary files have been updated, BMRA can be installed:

  • Consider creating a tmux  or screen session to prevent installation issues due to disconnects
  • Run the installer: ansible-playbook -i inventory.ini playbooks/cluster.yml 
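
For example, from the container-experience-kits directory on the jump node:

$ tmux new -s bmra
$ ansible-playbook -i inventory.ini playbooks/cluster.yml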

Post BMRA Install

Once installation is complete, decide whether to access the cluster from the master node or from the jump/installer node.

  • Using Master: SSH to the master node and check the status using kubectl get nodes 
  • Using Install/Jump: Install kubectl, fetch the kubeconfig, and set the KUBECONFIG environment variable (see the sketch below)
  • Test kubectl using kubectl get nodes
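
A sketch for the install/jump node path, assuming kubectl is already installed there and the kubeconfig sits in the default kubeadm location on the master (paths and user are assumptions, adjust as needed):

$ mkdir -p ~/.kube
$ scp root@<master_ip>:/etc/kubernetes/admin.conf ~/.kube/config
$ export KUBECONFIG=~/.kube/config
$ kubectl get nodes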