
1. Deployment Guide

This section walks you through the process of deploying TeraFlowSDN on top of a machine running the MicroK8s Kubernetes platform. The guide covers configuring the machine, installing and configuring MicroK8s, deploying the TeraFlowSDN controller, and reporting the status of the deployment.

1.1. Configure your Machine

In this section, we describe how to configure a machine (physical or virtual) to be used as the deployment, execution, and development environment for the ETSI TeraFlowSDN controller. Choose your preferred environment below and follow the instructions provided.

NOTE: If you already have a remote physical server fitting the requirements specified in this section, feel free to use it instead of deploying a local VM. Check 1.1.1. Physical Server for further details.

The tested environments are a physical server and virtual machines deployed on Oracle VirtualBox, VMWare Fusion, OpenStack, and Vagrant; each is described in the subsections below.

1.1.1. Physical Server

This section describes how to configure a physical server for running the ETSI TeraFlowSDN (TFS) controller.

Server Specifications

Minimum Server Specifications for development and basic deployment

  • CPU: 4 cores
  • RAM: 8 GB
  • Disk: 60 GB
  • 1 GbE NIC

Recommended Server Specifications for development and basic deployment

  • CPU: 6 cores
  • RAM: 12 GB
  • Disk: 80 GB
  • 1 GbE NIC

Server Specifications for best development and deployment experience

  • CPU: 8 cores
  • RAM: 32 GB
  • Disk: 120 GB
  • 1 GbE NIC

NOTE: the specifications listed above are provided as a reference. Actual performance also depends on the CPU clock frequency, the RAM technology, the disk technology and speed, etc.

For development purposes, it is recommended to run the VSCode IDE (or the IDE of your choice) on a more powerful server, for instance, one matching the recommended server specifications for development and basic deployment.

Given that TeraFlowSDN follows a micro-services architecture, for deployment it might be better to use a cluster of servers with many slower cores than a single server with a few highly performant cores.

Clusterized Deployment

You might consider creating a cluster of machines, each featuring at least the minimum server specifications. That solution provides scalability for the future.

Networking

No explicit indications are given in terms of networking besides that servers need access to the Internet for downloading dependencies, binaries, and packages while building and deploying the TeraFlowSDN components.

Besides that, the network requirements are essentially the same as those required for running a classical Kubernetes environment. To facilitate the deployment, we extensively use MicroK8s, thus the network requirements are, essentially, those demanded by MicroK8s, especially if you consider creating a Kubernetes cluster.

As a reference, the other deployment solutions based on VMs assume the VM is connected to a virtual network configured with the IP range 10.0.2.0/24, with the gateway at IP 10.0.2.1. The VM is assigned the IP address 10.0.2.10.

The minimum required ports to be accessible are:

  • 22/SSH : for management purposes
  • 80/HTTP : for the TeraFlowSDN WebUI and Grafana dashboard
  • 8081/HTTPS : for the CockroachDB WebUI
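
If your firewall is ufw, a minimal sketch to open these ports (an assumption; adapt it to the firewall you actually use):

sudo ufw allow 22/tcp    # SSH management access
sudo ufw allow 80/tcp    # TeraFlowSDN WebUI and Grafana dashboard
sudo ufw allow 8081/tcp  # CockroachDB WebUI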

Other ports might be required if you consider deploying add-ons such as Kubernetes observability, etc. The details on these ports are left aside given they might vary depending on the Kubernetes environment you use.

Operating System

The recommended Operating System for deploying TeraFlowSDN is Ubuntu Server 24.04 LTS, Ubuntu Server 22.04 LTS, or Ubuntu Server 20.04 LTS. Other versions might work, but we have not tested them. We strongly recommend using Long Term Support (LTS) versions as they provide better stability.

Below we provide some installation guidelines:

  • Installation Language: English
  • Autodetect your keyboard
  • If asked, select "Ubuntu Server" (do not select "Ubuntu Server (minimized)").
  • Configure static network specifications (adapt them based on your particular setup):
    Interface  IPv4 Method  Subnet       Address    Gateway   Name servers     Search domains
    enp0s3     Manual       10.0.2.0/24  10.0.2.10  10.0.2.1  8.8.8.8,8.8.4.4  -
  • Leave proxy and mirror addresses as they are
  • Let the installer self-upgrade (if asked).
  • Use an entire disk for the installation
    • Disable setup of the disk as LVM group
    • Double-check that NO swap space is allocated in the partition table. Kubernetes does not work properly with swap (a verification sketch is provided after this list).
  • Configure your user and system names:
    • User name: TeraFlowSDN
    • Server's name: tfs-vm
    • Username: tfs
    • Password: tfs123
  • Install Open SSH Server
    • Import SSH keys, if any.
  • Featured Server Snaps
    • Do not install featured server snaps. It will be done manually later to illustrate how to uninstall and reinstall them in case of trouble.
  • Let the system install and upgrade the packages.
    • This operation might take some minutes depending on how old the ISO image you use is and on your Internet connection speed.
  • Restart the VM when the installation is completed.
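
After the first boot, you can verify that no swap is active, as required by Kubernetes; a quick check:

sudo swapon --show  # empty output means no active swap devices
free -h             # the Swap row should report 0B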

Upgrade the Ubuntu distribution

sudo apt-get update -y
sudo apt-get dist-upgrade -y
  • If asked to restart services, restart the default ones proposed.
  • Restart the VM when the installation is completed.

1.1.2. Oracle VirtualBox

This section describes how to configure a VM for running the ETSI TeraFlowSDN (TFS) controller using Oracle VirtualBox. It has been tested with VirtualBox up to version 6.1.40 r154048.

Create a NAT Network in VirtualBox

In "Oracle VM VirtualBox Manager", Menu "File > Preferences... > Network", create a NAT network with the following specifications:

Name         CIDR         DHCP      IPv6
TFS-NAT-Net  10.0.2.0/24  Disabled  Disabled

Within the newly created "TFS-NAT-Net" NAT network, configure the following IPv4 forwarding rules:

Name  Protocol  Host IP    Host Port  Guest IP   Guest Port
SSH   TCP       127.0.0.1  2200       10.0.2.10  22
HTTP  TCP       127.0.0.1  8080       10.0.2.10  80

Note: IP address 10.0.2.10 is the one that will be assigned to the VM.
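
If you prefer the command line over the GUI, a sketch of equivalent VBoxManage commands (assuming the VirtualBox 6.1 CLI syntax):

# Create the NAT network with DHCP disabled
VBoxManage natnetwork add --netname TFS-NAT-Net --network "10.0.2.0/24" --enable --dhcp off
# Add the IPv4 port-forwarding rules
VBoxManage natnetwork modify --netname TFS-NAT-Net --port-forward-4 "SSH:tcp:[127.0.0.1]:2200:[10.0.2.10]:22"
VBoxManage natnetwork modify --netname TFS-NAT-Net --port-forward-4 "HTTP:tcp:[127.0.0.1]:8080:[10.0.2.10]:80"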

Create VM in VirtualBox:

  • Name: TFS-VM
  • Type/Version: Linux / Ubuntu (64-bit)
  • CPU (*): 4 vCPUs @ 100% execution capacity
  • RAM: 8 GB
  • Disk: 60 GB, Virtual Disk Image (VDI), Dynamically allocated
  • Optical Drive ISO Image: "ubuntu-22.04.X-live-server-amd64.iso"
    • Download the latest Long Term Support (LTS) version of the Ubuntu Server image from Ubuntu 22.04 LTS, e.g., "ubuntu-22.04.X-live-server-amd64.iso".
    • Note: use Ubuntu Server image instead of Ubuntu Desktop to create a lightweight VM.
  • Network Adapter 1 (*): enabled, attached to NAT Network "TFS-NAT-Net"
  • Minor adjustments (*):
    • Audio: disabled
    • Boot order: disable "Floppy"

Note: settings marked with (*) are to be edited after the VM is created.
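
The starred settings can also be applied from the command line once the VM exists; a hedged sketch, again assuming the VirtualBox 6.1 CLI:

VBoxManage modifyvm "TFS-VM" --cpus 4 --cpuexecutioncap 100
VBoxManage modifyvm "TFS-VM" --nic1 natnetwork --nat-network1 TFS-NAT-Net
VBoxManage modifyvm "TFS-VM" --audio none
VBoxManage modifyvm "TFS-VM" --boot1 dvd --boot2 disk --boot3 none --boot4 none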

Install Ubuntu 22.04 LTS Operating System

In "Oracle VM VirtualBox Manager", start the VM in normal mode, and follow the installation procedure. Below we provide some installation guidelines:

  • Installation Language: English
  • Autodetect your keyboard
  • If asked, select "Ubuntu Server" (do not select "Ubuntu Server (minimized)").
  • Configure static network specifications:
    Interface  IPv4 Method  Subnet       Address    Gateway   Name servers     Search domains
    enp0s3     Manual       10.0.2.0/24  10.0.2.10  10.0.2.1  8.8.8.8,8.8.4.4  -
  • Leave proxy and mirror addresses as they are
  • Let the installer self-upgrade (if asked).
  • Use an entire disk for the installation
    • Disable setup of the disk as LVM group
    • Double-check that NO swap space is allocated in the partition table. Kubernetes does not work properly with swap.
  • Configure your user and system names:
    • User name: TeraFlowSDN
    • Server's name: tfs-vm
    • Username: tfs
    • Password: tfs123
  • Install Open SSH Server
    • Import SSH keys, if any.
  • Featured Server Snaps
    • Do not install featured server snaps. It will be done manually later to illustrate how to uninstall and reinstall them in case of trouble.
  • Let the system install and upgrade the packages.
    • This operation might take some minutes depending on how old the ISO image you use is and on your Internet connection speed.
  • Restart the VM when the installation is completed.

Upgrade the Ubuntu distribution

sudo apt-get update -y
sudo apt-get dist-upgrade -y
  • If asked to restart services, restart the default ones proposed.
  • Restart the VM when the installation is completed.

Install VirtualBox Guest Additions

On the VirtualBox Manager, open the VM main screen. If you are running the VM in headless mode, right-click the VM in the VirtualBox Manager window and click "Show". If a dialog informing about how to leave the interface of the VM is shown, confirm by pressing the "Switch" button. The interface of the VM should appear.

Click menu "Device > Insert Guest Additions CD image..."

On the VM terminal, type:

sudo apt-get install -y linux-headers-$(uname -r) build-essential dkms
  # This command might take some minutes depending on your VM specs and your Internet access speed.
sudo mount /dev/cdrom /mnt/
cd /mnt/
sudo ./VBoxLinuxAdditions.run
  # This command might take some minutes depending on your VM specs.
sudo reboot
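
After the reboot, you can check that the Guest Additions kernel module was loaded; a quick verification:

lsmod | grep -i vboxguest  # should list the vboxguest module if the installation succeeded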

1.1.3. VMWare Fusion

This section describes how to configure a VM for running the ETSI TeraFlowSDN (TFS) controller using VMWare Fusion. It has been tested with VMWare Fusion versions 12 and 13.

Create VM in VMWare Fusion:

In "VMWare Fusion" manager, create a new network from the "Settings/Network" menu.

  • Unlock to make changes
  • Press the + icon and create a new network
  • Change the name to TFS-NAT-Net
  • Check "Allow virtual machines on this network to connect to external network (NAT)"
  • Do not check "Enable IPv6"
  • Add port forwarding for HTTP and SSH
  • Uncheck "Provide address on this network via DHCP"

Create a new VM using an Ubuntu 22.04.1 ISO:

  • Display Name: TeraFlowSDN
  • Username: tfs
  • Password: tfs123

On the next screen, press "Customize Settings", save the VM, and in "Settings" change:

  • Change to use 4 CPUs
  • Change to access 8 GB of RAM
  • Change disk to size 60 GB
  • Change the network interface to use the previously created TFS-NAT-Net

Run the VM to start the installation.

Install Ubuntu 22.04.1 LTS Operating System

The installation will be automatic, without any configuration required.

  • Configure the guest IP, gateway and DNS:

    Using the Network Settings for the wired connection, set the IP to 10.0.2.10, the mask to 255.255.255.0, the gateway to 10.0.2.2 and the DNS to 10.0.2.2.

  • Disable and remove swap file:

    sudo swapoff -a
    sudo rm /swapfile

    Then you can remove or comment out the /swapfile entry in /etc/fstab (see the sketch after this list).

  • Install Open SSH Server

    • Import SSH keys, if any.
  • Restart the VM when the installation is completed.
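
To comment out the /swapfile entry without opening an editor, a one-liner sketch (it keeps a backup copy of the original file):

sudo sed -i.bak '/\/swapfile/ s/^/#/' /etc/fstab  # comments the swapfile line; original saved as /etc/fstab.bak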

Upgrade the Ubuntu distribution

sudo apt-get update -y
sudo apt-get dist-upgrade -y

1.1.4. OpenStack

This section describes how to configure a VM for running the ETSI TeraFlowSDN (TFS) controller using OpenStack. It has been tested with OpenStack Kolla up to the Yoga version.

Create a Security Group in OpenStack

In OpenStack, go to Project - Network - Security Groups - Create Security Group and create a security group with name TFS.

Add the following rules:

Direction  Ether Type  IP Protocol  Port Range  Remote IP Prefix
Ingress    IPv4        TCP          22 (SSH)    0.0.0.0/0
Ingress    IPv4        TCP          2200        0.0.0.0/0
Ingress    IPv4        TCP          8080        0.0.0.0/0
Ingress    IPv4        TCP          80          0.0.0.0/0
Egress     IPv4        Any          Any         0.0.0.0/0
Egress     IPv6        Any          Any         ::/0

Note: The IP address will be assigned depending on the network you have configured inside OpenStack. This IP will have to be modified in the TeraFlow configuration files, which by default use IP 10.0.2.10.
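
The same security group can also be created from the CLI; a sketch using the openstack client (new security groups already include default egress rules):

openstack security group create TFS
openstack security group rule create --ingress --protocol tcp --dst-port 22 --remote-ip 0.0.0.0/0 TFS
openstack security group rule create --ingress --protocol tcp --dst-port 2200 --remote-ip 0.0.0.0/0 TFS
openstack security group rule create --ingress --protocol tcp --dst-port 8080 --remote-ip 0.0.0.0/0 TFS
openstack security group rule create --ingress --protocol tcp --dst-port 80 --remote-ip 0.0.0.0/0 TFS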

Create a flavor

From dashboard (Horizon)

Go to Admin - Compute - Flavors and press Create Flavor

  • Name: TFS
  • VCPUs: 4
  • RAM (MB): 8192
  • Root Disk (GB): 60

From CLI

openstack flavor create TFS --id auto --ram 8192 --disk 60 --vcpus 4

Create an instance in OpenStack:

  • Instance name: TFS-VM
  • Origin: Ubuntu-22.04 cloud image (https://cloud-images.ubuntu.com/jammy/current/jammy-server-cloudimg-amd64.img)
  • Create new volume: No
  • Flavor: TFS
  • Networks: extnet
  • Security Groups: TFS
  • Configuration: Include the following cloud-config
#cloud-config
# Modifies the password for the VM instance
user: ubuntu
password: <your-password>
chpasswd: { expire: False }
ssh_pwauth: True
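
The instance can also be created from the CLI; a sketch assuming the cloud image was uploaded as ubuntu-22.04 and the cloud-config above was saved as tfs-cloud-config.yaml (both names are illustrative):

openstack server create \
    --flavor TFS \
    --image ubuntu-22.04 \
    --network extnet \
    --security-group TFS \
    --user-data tfs-cloud-config.yaml \
    TFS-VM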

Upgrade the Ubuntu distribution

sudo apt-get update -y
sudo apt-get dist-upgrade -y
  • If asked to restart services, restart the default ones proposed.
  • Restart the VM when the installation is completed.

1.1.5. Vagrant Box

This section describes how to create a Vagrant Box, using the base virtual machine configured in Oracle VirtualBox.

Virtual Machine specifications

Most of the specifications can be as specified in the Oracle VirtualBox section; however, there are a few particularities of Vagrant that must be accommodated, such as:

  • Virtual Hard Disk
    • Size: 60GB (at least)
    • Type: VMDK

Also, before initiating the VM and installing the OS, we'll need to:

  • Disable Floppy in the 'Boot Order'
  • Disable audio
  • Disable USB
  • Ensure Network Adapter 1 is set to NAT

Network configurations

On Network Adapter 1, the following port-forwarding rule must be set.

Name  Protocol  Host IP  Host Port  Guest IP  Guest Port
SSH   TCP       -        2222       -         22

Installing the OS

For a Vagrant Box, it is generally suggested to use the server version of the ISO, as the box is intended to be used via SSH, and any web GUI is expected to be forwarded to the host.

Make sure the disk is not configured as an LVM group!

Vagrant user

By default, Vagrant expects the box's OS to contain a user named vagrant whose password is also vagrant.

SSH

Vagrant uses SSH to connect to the boxes, so installing it now will save the hassle of doing it later.

Featured Server Snaps

Do not install featured server snaps. It will be done manually later to illustrate how to uninstall and reinstall them in case of trouble.

Updates

Let the system install and upgrade the packages. This operation might take some minutes depending on how old the ISO image you use is and on your Internet connection speed.

Upgrade the Ubuntu distribution

sudo apt-get update -y
sudo apt-get dist-upgrade -y
  • If asked to restart services, restart the default ones proposed.
  • Restart the VM when the installation is completed.

Install VirtualBox Guest Additions

On VirtualBox Manager, open the VM main screen. If you are running the VM in headless mode, right-click over the VM in the VirtualBox Manager window, and click "Show". If a dialog informing about how to leave the interface of the VM is shown, confirm by pressing the "Switch" button. The interface of the VM should appear.

Click the menu "Device > Insert Guest Additions CD image..."

On the VM terminal, type:

sudo apt-get install -y linux-headers-$(uname -r) build-essential dkms
  # This command might take some minutes depending on your VM specs and your Internet access speed.
sudo mount /dev/cdrom /mnt/
cd /mnt/
sudo ./VBoxLinuxAdditions.run
  # This command might take some minutes depending on your VM specs.
sudo reboot

ETSI TFS Installation

After this, proceed to 1.2. Install MicroK8s; after that, return to this wiki to finish the Vagrant Box creation.

Box configuration and creation

Make sure the ETSI TFS controller is correctly configured. You will not be able to change it afterwards!

It is advisable to do the next configurations from the host's terminal, via an SSH connection.

ssh -p 2222 vagrant@127.0.0.1

Set root password

Set the root password to vagrant.

sudo passwd root

Set the superuser

Set up the vagrant user so that it is able to use sudo without being prompted for a password. Anything in the /etc/sudoers.d/* directory is included in the sudoers privileges when created by the root user. Create a new sudo file:

sudo visudo -f /etc/sudoers.d/vagrant

and add the following lines

# add vagrant user
vagrant ALL=(ALL) NOPASSWD:ALL

You can now test that it works by running a simple command.

sudo pwd

Issuing this command should result in an immediate response without a request for a password.

Install the Vagrant key

Vagrant uses a default set of SSH keys for you to directly connect to boxes via the CLI command vagrant ssh, after which it creates a new set of SSH keys for your new box. Because of this, we need to load the default key to be able to access the box after it is created.

chmod 0700 /home/vagrant/.ssh
wget --no-check-certificate https://raw.github.com/mitchellh/vagrant/master/keys/vagrant.pub -O /home/vagrant/.ssh/authorized_keys
chmod 0600 /home/vagrant/.ssh/authorized_keys
chown -R vagrant /home/vagrant/.ssh

Configure the OpenSSH Server

Edit the /etc/ssh/sshd_config file:

sudo vim /etc/ssh/sshd_config

And uncomment the following line:

AuthorizedKeysFile %h/.ssh/authorized_keys
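
Before restarting the service, you can optionally validate the configuration syntax:

sudo sshd -t  # prints nothing when the configuration is valid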

Then restart SSH.

sudo service ssh restart

Package the box

Before you package the box, if you intend to make your box public, it is best to clean your bash history with:

history -c

Exit the SSH connection, and at your host machine, package the VM:

vagrant package --base teraflowsdncontroller --output teraflowsdncontroller.box

Test run the box

Add the base box to your local Vagrant box list:

vagrant box add --name teraflowsdncontroller ./teraflowsdncontroller.box

Now you should try to run it; for that, you'll need to create a Vagrantfile. For a simple run, this is the minimal required code for this box:

# -*- mode: ruby -*-
# vi: set ft=ruby :

Vagrant.configure("2") do |config|
  config.vm.box = "teraflowsdncontroller"
  config.vm.box_version = "1.1.0"
  config.vm.network :forwarded_port, host: 8080, guest: 80
end

Now you'll be able to spin up the virtual machine by issuing the command:

vagrant up

And connect to the machine using:

vagrant ssh

Pre-configured boxes

If you do not wish to create your own Vagrant Box, you can use one of the existing ones created by TFS contributors:

  • davidjosearaujo/teraflowsdncontroller
  • ...

To use them, you simply have to create a Vagrantfile and run vagrant up controller in the same directory. The following example Vagrantfile already allows you to do just that, with the bonus of exposing the multiple management GUIs to your localhost.

Vagrant.configure("2") do |config|

  config.vm.define "controller" do |controller|
    controller.vm.box = "davidjosearaujo/teraflowsdncontroller"
    controller.vm.network "forwarded_port", guest: 80, host: 8080     # WebUI
    controller.vm.network "forwarded_port", guest: 8084, host: 50750  # Linkerd Viz Dashboard
    controller.vm.network "forwarded_port", guest: 8081, host: 8081   # CockroachDB Dashboard
    controller.vm.network "forwarded_port", guest: 8222, host: 8222   # NATS Dashboard
    controller.vm.network "forwarded_port", guest: 9000, host: 9000   # QuestDB Dashboard
    controller.vm.network "forwarded_port", guest: 9090, host: 9090   # Prometheus Dashboard

    # Setup Linkerd Viz reverse proxy
    ## Copy config file
    controller.vm.provision "file" do |f|
      f.source = "./reverse-proxy-linkerdviz.sh"
      f.destination = "./reverse-proxy-linkerdviz.sh"
    end
    ## Execute configuration file
    controller.vm.provision "shell" do |s|
      s.inline = "chmod +x ./reverse-proxy-linkerdviz.sh && ./reverse-proxy-linkerdviz.sh"
    end

    # Update controller source code to the desired branch
    if ENV['BRANCH'] != nil
      controller.vm.provision "shell" do |s|
        s.inline = "cd ./tfs-ctrl && git pull && git switch " + ENV['BRANCH']
      end
    end

  end
end

This Vagrantfile also allows for optional repository updates on startup by running the command with the environment variable BRANCH set:

BRANCH=develop vagrant up controller

Linkerd DNS rebinding bypass

Because of Linkerd's security measures against DNS rebinding, a reverse proxy that modifies the request's Host header field is needed to expose the GUI to the host. The previous Vagrantfile already deploys such a configuration; all you need to do is create the reverse-proxy-linkerdviz.sh file in the same directory. The content of this file is displayed below.

# Install NGINX
sudo apt update && sudo apt install nginx -y

# NGINX reverse proxy configuration
echo 'server {
    listen 8084;

    location / {
        proxy_pass http://127.0.0.1:50750;
        proxy_set_header Host localhost;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}' > /home/vagrant/expose-linkerd

# Create symlink of the NGINX configuration file
sudo ln -s /home/vagrant/expose-linkerd /etc/nginx/sites-enabled/

# Commit the reverse proxy configurations
sudo systemctl restart nginx

# Enable start on login
echo "linkerd viz dashboard &" >> .profile

# Start dashboard
linkerd viz dashboard &

echo "Linkerd Viz dashboard running!"

1.2. Install MicroK8s

This section describes how to deploy the MicroK8s Kubernetes platform and configure it to be used with the ETSI TeraFlowSDN controller. In addition, Docker is installed to build the Docker images for the ETSI TeraFlowSDN controller.

The steps described in this section might take some minutes depending on your internet connection speed and the resources assigned to your VM, or the specifications of your physical server.

To facilitate the work, these steps are easier to execute through an SSH connection, for instance, using tools like PuTTY or MobaXterm.

Upgrade the Ubuntu distribution

Skip this step if you already did it during the creation of the VM.

sudo apt-get update -y
sudo apt-get dist-upgrade -y

Install prerequisites

sudo apt-get install -y ca-certificates curl gnupg lsb-release snapd jq

Install Docker CE

Install Docker CE and Docker BuildX plugin

sudo apt-get install -y docker.io docker-buildx

NOTE: Starting from Docker v23, the build subsystem has been updated and the docker build command entered a deprecation process in favor of the new docker buildx build command. The docker-buildx package provides the new docker buildx build command.

Add key "insecure-registries" with the private repository to the daemon configuration. It is done in two commands since sometimes read from and write to same file might cause trouble.

if [ -s /etc/docker/daemon.json ]; then cat /etc/docker/daemon.json; else echo '{}'; fi \
    | jq 'if has("insecure-registries") then . else .+ {"insecure-registries": []} end' -- \
    | jq '."insecure-registries" |= (.+ ["localhost:32000"] | unique)' -- \
    | tee tmp.daemon.json
sudo mv tmp.daemon.json /etc/docker/daemon.json
sudo chown root:root /etc/docker/daemon.json
sudo chmod 600 /etc/docker/daemon.json

Restart the Docker daemon

sudo systemctl restart docker
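
To confirm the daemon picked up the setting, a quick check:

sudo docker info | grep -A 2 "Insecure Registries"  # should list localhost:32000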

Install MicroK8s

Important: By default, Kubernetes uses CIDR 10.1.0.0/16 for pods and CIDR 10.152.183.0/24 for services. If they conflict with your internal network CIDR, you might need to change the Kubernetes CIDRs at deployment time. To do so, check the official MicroK8s documentation and ask for support if needed.

# Install MicroK8s
sudo snap install microk8s --classic --channel=1.29/stable

# Create alias for command "microk8s.kubectl" to be usable as "kubectl"
sudo snap alias microk8s.kubectl kubectl

It is important to make sure that ufw will not interfere with the internal pod-to-pod and pod-to-Internet traffic. To do so, first check the status; if ufw is active, use the following commands to enable the communication.

# Verify status of ufw firewall
sudo ufw status

# If ufw is active, install following rules to enable access pod-to-pod and pod-to-internet
sudo ufw allow in on cni0 && sudo ufw allow out on cni0
sudo ufw default allow routed

NOTE: MicroK8s can be used to compose a Highly Available Kubernetes cluster enabling you to construct an environment combining the CPU, RAM and storage resources of multiple machines. If you are interested in this procedure, review the official instructions in How to build a highly available Kubernetes cluster with MicroK8s, in particular, the step Create a MicroK8s multi-node cluster.

Add user to the docker and microk8s groups

It is important that your user has the permission to run docker and microk8s in the terminal. To allow this, you need to add your user to the docker and microk8s groups with the following commands:

sudo usermod -a -G docker $USER
sudo usermod -a -G microk8s $USER
sudo chown -f -R $USER $HOME/.kube
sudo reboot

If you have trouble executing the following commands, it might be because the .kube folder was not automatically provisioned in your home folder; in that case, follow the steps below:

mkdir -p $HOME/.kube
sudo chown -f -R $USER $HOME/.kube
microk8s config > $HOME/.kube/config
sudo reboot
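
After the reboot, you can confirm the group memberships and that both tools work without sudo; a quick check:

groups             # docker and microk8s should appear in the list
docker ps          # should succeed without sudo
kubectl get nodes  # should list your node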

Check status of Kubernetes and addons

To retrieve the status of Kubernetes once, run the following command:

microk8s.status --wait-ready

To retrieve the status of Kubernetes periodically (e.g., every 1 second), run the following command:

watch -n 1 microk8s.status --wait-ready

Check all resources in Kubernetes

To retrieve the status of the Kubernetes resources once, run the following command:

kubectl get all --all-namespaces

To retrieve the status of the Kubernetes resources periodically (e.g., every 1 second), run the following command:

watch -n 1 kubectl get all --all-namespaces

Enable addons

First, we need to enable the community plugins (maintained by third parties):

microk8s.enable community

The addons to be enabled are:

  • dns: enables resolving the pods and services by name
  • helm3: required to install NATS
  • hostpath-storage: enables providing storage for the pods (required by registry)
  • ingress: deploys an ingress controller to expose the microservices outside Kubernetes
  • registry: deploys a private registry for the TFS controller images
  • linkerd: deploys the linkerd service mesh used for load balancing among replicas
  • prometheus: set of tools that enable TFS observability through per-component instrumentation
  • metrics-server: deploys the Kubernetes metrics server for API access to service metrics

microk8s.enable dns helm3 hostpath-storage ingress registry prometheus metrics-server linkerd

Important: Enabling some of the addons might take a few minutes. Do not proceed with the next steps until the addons are ready. Otherwise, the deployment might fail. To confirm everything is up and running:

  1. Periodically check the status of Kubernetes until you see the addons [dns, ha-cluster, helm3, hostpath-storage, ingress, linkerd, metrics-server, prometheus, registry, storage] in the enabled block.
  2. Periodically check the Kubernetes resources until all pods are Ready and Running.
  3. If it takes too long for the pods to be ready, we observed that rebooting the machine may help.
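
Instead of watching manually, you can block until all pods are ready; a sketch (adjust the timeout to your hardware):

kubectl wait --for=condition=Ready pods --all --all-namespaces --timeout=600s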

Then, create aliases to make the commands easier to access:

sudo snap alias microk8s.helm3 helm3
sudo snap alias microk8s.linkerd linkerd

To validate that linkerd is working correctly, run:

linkerd check

To validate that the metrics-server is working correctly, run:

kubectl top pods --all-namespaces

and you should see a screen similar to the top command in Linux, showing the columns namespace, pod name, CPU (cores), and MEMORY (bytes).

In case pods are not starting, check the information in the pod logs. For example, linkerd is sensitive to a proper /etc/resolv.conf syntax.

kubectl logs <podname> --namespace <namespace>

If the command shows an error message, restarting the machine might also help.

Stop, Restart, and Redeploy

Find below some additional commands you might need while you work with MicroK8s:

microk8s.stop  # stop MicroK8s cluster (for instance, before power off your computer)
microk8s.start # start MicroK8s cluster
microk8s.reset # reset infrastructure to a clean state

If the commands above do not work to recover the MicroK8s cluster, you can redeploy it.

If you want to keep MicroK8s configuration, use:

sudo snap remove microk8s

If you need to completely drop MicroK8s and its configuration, use:

sudo snap remove microk8s --purge
sudo apt-get remove --purge docker.io docker-buildx

IMPORTANT: After uninstalling MicroK8s, it is convenient to reboot the computer (the VM if you work on a VM, or the physical computer if you use a physical computer). Otherwise, some system configurations, especially those related to port forwarding and firewall rules, are not correctly cleaned.

After the reboot, redeploy as it is described in this section.

1.3. Deploy TeraFlowSDN

This section describes how to deploy TeraFlowSDN controller on top of MicroK8s using the environment configured in the previous sections.

Install prerequisites

sudo apt-get install -y git curl jq

Clone the Git repository of the TeraFlowSDN controller

Clone from ETSI-hosted GitLab code repository:

mkdir ~/tfs-ctrl
git clone https://labs.etsi.org/rep/tfs/controller.git ~/tfs-ctrl

Important: The original H2020-TeraFlow project hosted on GitLab.com has been archived and will not receive further contributions/updates. Please clone from the ETSI-hosted GitLab code repository.

Checkout the appropriate Git branch

TeraFlowSDN controller versions can be found in the appropriate release tags and/or branches as described in Home > Versions.

By default the branch master is checked out and points to the latest stable version of the TeraFlowSDN controller, while branch develop contains the latest developments and contributions under test and validation.

To switch to the appropriate branch, run the following command, replacing develop with the name of the branch you want to deploy:

cd ~/tfs-ctrl
git checkout develop

Prepare a deployment script with the deployment settings

Create a new deployment script, e.g., my_deploy.sh, adding the appropriate settings as follows. This section provides just an overview of the available settings. An example my_deploy.sh script is provided in the root folder of the project for your convenience, with a full description of all the settings.

Note: The example my_deploy.sh script provides reasonable settings for deploying a functional and complete enough TeraFlowSDN controller, and a brief description of their meaning. To see extended descriptions, check scripts in the deploy folder.

cd ~/tfs-ctrl
tee my_deploy.sh >/dev/null << EOF
# ----- TeraFlowSDN ------------------------------------------------------------
export TFS_REGISTRY_IMAGES="http://localhost:32000/tfs/"
export TFS_COMPONENTS="context device pathcomp service nbi webui"
export TFS_IMAGE_TAG="dev"
export TFS_K8S_NAMESPACE="tfs"
export TFS_EXTRA_MANIFESTS="manifests/nginx_ingress_http.yaml"
export TFS_GRAFANA_PASSWORD="admin123+"
export TFS_SKIP_BUILD=""

# ----- CockroachDB ------------------------------------------------------------
export CRDB_NAMESPACE="crdb"
export CRDB_EXT_PORT_SQL="26257"
export CRDB_EXT_PORT_HTTP="8081"
export CRDB_USERNAME="tfs"
export CRDB_PASSWORD="tfs123"
export CRDB_DEPLOY_MODE="single"
export CRDB_DROP_DATABASE_IF_EXISTS="YES"
export CRDB_REDEPLOY=""

# ----- NATS -------------------------------------------------------------------
export NATS_NAMESPACE="nats"
export NATS_EXT_PORT_CLIENT="4222"
export NATS_EXT_PORT_HTTP="8222"
export NATS_REDEPLOY=""

# ----- QuestDB ----------------------------------------------------------------
export QDB_NAMESPACE="qdb"
export QDB_EXT_PORT_SQL="8812"
export QDB_EXT_PORT_ILP="9009"
export QDB_EXT_PORT_HTTP="9000"
export QDB_USERNAME="admin"
export QDB_PASSWORD="quest"
export QDB_TABLE_MONITORING_KPIS="tfs_monitoring_kpis"
export QDB_TABLE_SLICE_GROUPS="tfs_slice_groups"
export QDB_DROP_TABLES_IF_EXIST="YES"
export QDB_REDEPLOY=""

EOF

The settings are organized in 4 sections:

  • Section TeraFlowSDN:
    • TFS_REGISTRY_IMAGES enables specifying the private Docker registry to be used; by default, we assume the Docker repository enabled in MicroK8s.
    • TFS_COMPONENTS specifies the components whose Docker images will be rebuilt, uploaded to the private Docker registry, and deployed in Kubernetes.
    • TFS_IMAGE_TAG defines the tag to be used for Docker images being rebuilt and uploaded to the private Docker registry.
    • TFS_K8S_NAMESPACE specifies the name of the Kubernetes namespace to be used for deploying the TFS components.
    • TFS_EXTRA_MANIFESTS enables providing additional manifests to be applied to the Kubernetes environment during the deployment. A typical use case is to deploy ingress controllers, service monitors for Prometheus, etc.
    • TFS_GRAFANA_PASSWORD lets you specify the password you want to use for the admin user of the Grafana instance being deployed and linked to the Monitoring component.
    • TFS_SKIP_BUILD, if set to YES, prevents rebuilding the Docker images. That means the deploy script will redeploy existing Docker images without rebuilding/updating them.
  • Section CockroachDB: configures the deployment of the backend CockroachDB database.
  • Section NATS: configures the deployment of the backend NATS message broker.
  • Section QuestDB: configures the deployment of the backend QuestDB timeseries database.

Confirm that MicroK8s is running

Run the following command:

microk8s status

If it reports that microk8s is not running, run the following command to start it:

microk8s start

Confirm everything is up and running:

  1. Periodically check the status of Kubernetes until you see the addons [dns, ha-cluster, helm3, hostpath-storage, ingress, registry, storage] in the enabled block.
  2. Periodically check the Kubernetes resources until all pods are Ready and Running.

Deploy TFS controller

First, source the deployment settings defined in the previous section. This way, you do not need to specify the environment variables in each and every command you execute to operate the TFS controller. Remember to re-source the file if you open new terminal sessions. Then, run the following command to deploy the TeraFlowSDN controller on top of the MicroK8s Kubernetes platform.

cd ~/tfs-ctrl
source my_deploy.sh
./deploy/all.sh

The script performs the following steps:

  • Executes script ./deploy/crdb.sh to automate deployment of CockroachDB database used by Context component.
    • The script automatically checks if CockroachDB is already deployed.
    • If there are settings instructing it to drop the database and/or redeploy CockroachDB, it does the appropriate actions to honor them as defined in the previous section.
  • Executes script ./deploy/nats.sh to automate deployment of NATS message broker used by Context component.
    • The script automatically checks if NATS is already deployed.
    • If there are settings instructing it to redeploy the message broker, it does the appropriate actions to honor them as defined in the previous section.
  • Executes script ./deploy/qdb.sh to automate deployment of QuestDB timeseries database used by Monitoring component.
    • The script automatically checks if QuestDB is already deployed.
    • If there are settings instructing it to redeploy the timeseries database, it does the appropriate actions to honor them as defined in the previous section.
  • Executes script ./deploy/tfs.sh to automate deployment of TeraFlowSDN.
    • Creates the namespace defined in TFS_K8S_NAMESPACE
    • Creates secrets for CockroachDB, NATS, and QuestDB to be used by Context and Monitoring components.
    • Builds the Docker images for the components defined in TFS_COMPONENTS
    • Tags the Docker images with the value of TFS_IMAGE_TAG
    • Pushes the Docker images to the repository defined in TFS_REGISTRY_IMAGES
    • Deploys the components defined in TFS_COMPONENTS
    • Creates the file tfs_runtime_env_vars.sh with the environment variables for the components defined in TFS_COMPONENTS, defining their local host addresses and port numbers.
    • Applies extra manifests defined in TFS_EXTRA_MANIFESTS, such as:
      • Creating an ingress controller listening on port 80 for HTTP connections to enable external access to the TeraFlowSDN WebUI, Grafana Dashboards, and Compute NBI interfaces.
      • Deploying service monitors to enable monitoring the performance of the components, device drivers, and service handlers.
    • Initializes and configures the Grafana dashboards (if the Monitoring component is deployed)
  • Reports a summary of the deployment

1.4. WebUI and Grafana Dashboards

This section describes how to get access to the TeraFlowSDN controller WebUI and the monitoring Grafana dashboards.

Access the TeraFlowSDN WebUI

If you followed the installation steps based on MicroK8s, an ingress controller was installed that listens on TCP port 80.

Besides, the ingress controller defines the following reverse proxy paths (on your local machine):

  • http://127.0.0.1/webui: points to the WebUI of TeraFlowSDN.
  • http://127.0.0.1/grafana: points to the Grafana dashboards. This endpoint brings access to the monitoring dashboards of TeraFlowSDN. The credentials for the admin user are those defined in the my_deploy.sh script, in the TFS_GRAFANA_PASSWORD variable.
  • http://127.0.0.1/restconf: points to the Compute component NBI based on RestCONF. This endpoint enables connecting external software, such as ETSI OpenSourceMANO NFV Orchestrator, to TeraFlowSDN.

Note: In the creation of the VM, a forward from host TCP port 8080 to the VM's TCP port 80 is configured, so the WebUIs and REST APIs of TeraFlowSDN should be exposed on the endpoint 127.0.0.1:8080 of your local machine instead of 127.0.0.1:80.
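
A quick reachability check from your local machine; a sketch (use the second form if you deployed on a VM with the port forwarding described above):

curl -sSI http://127.0.0.1/webui       # physical server: expect an HTTP success response
curl -sSI http://127.0.0.1:8080/webui  # VM with host port 8080 forwarded to guest port 80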

1.5. Show Deployment and Logs

This section presents some helper scripts to inspect the status of the deployment and the logs of the components. These scripts are particularly helpful for troubleshooting during execution of experiments, development, and debugging.

Report the deployment of the TFS controller

The summary report given at the end of the Deploy TFS controller procedure can be generated manually at any time by running the following command. You can avoid sourcing my_deploy.sh if it has already been done.

cd ~/tfs-ctrl
source my_deploy.sh
./deploy/show.sh

Use this script to validate that all the pods, deployments, replica sets, ingress controller, etc. are ready and have the appropriate state, e.g., running for Pods, and the services are deployed and have appropriate IP addresses and port numbers.

Report the log of a specific TFS controller component

A number of scripts are pre-created in the scripts folder to facilitate the inspection of the component logs. For instance, to dump the log of the Context component, run the following command. You can avoid sourcing my_deploy.sh if it has already been done.

source my_deploy.sh
./scripts/show_logs_context.sh
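
These scripts are thin wrappers around kubectl. A manual equivalent sketch, assuming the namespace tfs set in TFS_K8S_NAMESPACE:

kubectl get pods --namespace tfs        # find the pod name of the component
kubectl logs <podname> --namespace tfs  # dump its log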