Deploying Virtalis Reach 2022.3 on a Kubernetes cluster

Overview

This document covers deploying a complete Virtalis Reach 2022.3 system into a Kubernetes cluster. The target audience is system administrators; the content is highly technical, consisting primarily of shell commands to be executed in the cluster administration shell.

The commands perform the actions required to deploy Virtalis Reach; however, you should read and understand what these commands do, and be aware that your cluster or deployment may have a specific configuration. Virtalis Reach is a configurable platform consisting of many connected microservices, allowing the deployment to be configured and adapted for different use-cases and environments.

If you are unsure of the usage or impact of a particular system command then seek advice. Improper use of server infrastructure can have serious consequences.

Prerequisites

Virtalis Reach requires:

Kubernetes cluster (either on premises or in the cloud):

  • At least version v1.22.7
  • 8 cores
  • At least 64GB of memory available to a single node (128GB total recommended)
  • 625GB of storage (see the storage section for more information)
  • Nginx as the cluster ingress controller
  • Access to the internet during the software deployment and update
  • A network policy compatible network plugin

Virtalis Reach does not require:

  • A GPU in the server
  • A connection to the internet following the software deployment

The following administration tools are required, along with their recommended tested versions:

  • kubectl v1.22.7 - this package allows us to communicate with a Kubernetes cluster on the command line
  • helm 3 v3.9.0 - this package is used to help us install large Kubernetes charts consisting of numerous resources
  • oras v0.8.1 - this package is used to download an archive from our internal registry containing some configuration files which will be used to deploy Virtalis Reach
  • azure cli stable - this package is used to authenticate with our internal registry hosted on Azure
  • jq v1.6 - this package is used to parse and traverse JSON on the command line
  • yq v4.6.1 - this package is used to parse and traverse YAML on the command line

These tools do not need to be installed on the Virtalis Reach server, only on the machine that will communicate with the Kubernetes cluster for the duration of the installation.
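As a quick sanity check, the presence of these tools on the administration machine can be verified with a short loop. This is illustrative only; it checks presence on the PATH, not the recommended versions:

```shell
# Check that each required client tool is on the PATH.
# This only verifies presence, not the recommended versions listed above.
for tool in kubectl helm oras az jq yq; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "$tool: found"
  else
    echo "$tool: MISSING"
  fi
done
```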

On recent versions of Ubuntu, the Azure CLI installed via Snap is called azure-cli, not az, which refers to an older version in the Ubuntu repositories. Alias azure-cli to az if needed.

Document Style

In this document, variables enclosed in angled brackets <VARIABLE> should be replaced with the appropriate values. For example:


docker login <my_id> <my_password> becomes docker login admin admin

In this document, commands to execute in a shell are shown as code and each block of code is designed to be a single command that can be copied and pasted.

These are commands to be entered in a shell in your cluster's administration console:


This is another block of code \
that uses "\" to escape newlines \
and can be copied and pasted straight into your console

Some steps have been included in a single bash script which can be inspected before being run.
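The backslash continuation style can be demonstrated with a trivial command; the three lines below form a single command when pasted together:

```shell
# A single echo command continued across three lines with "\".
echo "this is" \
"a single command" \
"split across lines"
# → this is a single command split across lines
```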

Pre-installation

Set Up the Deployment Shell

Make a directory to store temporary installation files:


sudo mkdir -p /home/root/Reach && \
cd /home/root/Reach && \
sudo chown $(whoami) /home/root/Reach

Export the following variables:


export REACH_VERSION=2022.3.0
export ACR_REGISTRY_NAME=virtaliscustomer
export SKIP_MIGRATIONS=1
export TLS_SECRET_NAME=reach-tls-cert
export REACH_NAMESPACE=reach

Substitute the variable values and export them:


export REACH_DOMAIN=<the domain Virtalis Reach will be hosted on>
export ACR_USERNAME=<service principal id>
export ACR_PASSWORD=<service principal password>

Substitute and export the following variables, wrap the values in single quotes to prevent bash substitution:


export reach_licence__key=<reach licence xml snippet>
export reach_licence__signature=<reach licence signature>

Example:


export reach_licence__key='<REACH><expires>123</expires></REACH>'
export reach_licence__signature='o8k0niq63bPYOMS53NjgOTUqA0xfaBjfP5uB1uma'
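As a minimal sanity check before continuing (illustrative, not part of the official installer), you can confirm that the key looks like an XML snippet and that the signature is non-empty; the example values from above are reused here:

```shell
# Uses the example licence values from above; substitute your real licence.
export reach_licence__key='<REACH><expires>123</expires></REACH>'
export reach_licence__signature='o8k0niq63bPYOMS53NjgOTUqA0xfaBjfP5uB1uma'

# The key should be an XML snippet and the signature non-empty.
case "$reach_licence__key" in
  '<'*'>') echo "licence key looks like XML" ;;
  *)       echo "warning: licence key does not look like XML" ;;
esac
[ -n "$reach_licence__signature" ] && echo "licence signature is set"
```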

Export the environment variables if Virtalis Reach TLS will be configured to use LetsEncrypt:


export KEYCLOAK_ANNOTATIONS="--set ingress.annotations\.cert-manager\.io/cluster-issuer=letsencrypt-prod"
export INGRESS_ANNOTATIONS="--set ingress.annotations\.cert-manager\.io/cluster-issuer=letsencrypt-prod"

Optional configuration variables:


export MANAGED_TAG=<custom image tag for Virtalis Reach services>
export MQ_EXPOSE_INGRESS=<when set to 1, expose rabbitmq on the ingress>
 
export LOW_SPEC=<set to true to set memory requests to a \
low amount for low spec machines, used for development>
 
export USE_NEO4J_MEMREC=<set to true to use neo4j memrec, \
use in conjunction with LOW_SPEC>

Checking the Nginx Ingress Controller


kubectl get pods -n ingress-nginx

This should return at least 1 running pod.

ingress-nginx   nginx-ingress-controller-…   1/1   Running

If Nginx is not installed, then please contact Virtalis to see if we can support a different ingress controller. Virtalis Reach is currently only compatible with Nginx.

If there is no Ingress controller currently installed on the cluster, and you are confident you should install Nginx, then you can execute these commands to install it:


helm repo add bitnami https://charts.bitnami.com/bitnami
 
helm repo update
 
helm install ingress \
--create-namespace \
-n ingress-nginx \
bitnami/nginx-ingress-controller --version 9.1.10

Storage

Kubernetes supports a wide variety of volume plugins which allow you to provision storage dynamically as well as with constraints depending on your requirements.

List of supported volume plugins

All PersistentVolumes used by Virtalis Reach reserve 625GB of storage space in total. This is a provisional amount which will likely change depending on your workload.

Default

By default, Virtalis Reach is deployed with the local volume plugin, which creates volumes on the worker nodes. This is not the recommended way to deploy Virtalis Reach and is only appropriate for test-level deployments, as all databases are tied to the single disk of the node they’re deployed on, which hinders the performance of the system.

To assist in dynamic local volume provisioning, we use the Local Path Provisioner service developed by Rancher:


kubectl apply -f \
https://raw.githubusercontent.com/rancher/\
local-path-provisioner/master/deploy/local-path-storage.yaml

Custom

You can customize how storage for Virtalis Reach is provisioned by specifying which storage class you want to use. The storage class must be created by a Kubernetes administrator beforehand, or, in some environments, a default class may be suitable. For example, an Azure Kubernetes Service instance comes with a default storage class on the cluster which can be used to request storage from Azure.

Express

If you only want to modify the storage class and leave all other parameters, such as size, at their defaults, export these variables:


export REACH_SC=<name of storage class>
export REACH_SC_ARGS=" --set persistence\.storageClass="${REACH_SC}" --set core\.persistentVolume.storageClass\="${REACH_SC}" --set master.persistence\.storageClass="${REACH_SC}" "

Custom parameters

A list of different databases in use by Virtalis Reach and how to customize their storage is shown below.

The default values can be found in /home/root/Reach/k8s/misc//values-common.yaml and /home/root/Reach/k8s/misc//values-prod.yaml

Minio

Please refer to the persistence: section found in the values.yaml file in the Bitnami Minio helm chart repository for a list of available parameters to customize such as size, access modes and so on.


Neo4j

Please refer to the core: persistentVolume: section found in the values.yaml file in the Neo4j helm chart repository for a list of available parameters to customize such as size, access modes and so on.

https://github.com/neo4j-contrib/neo4j-helm/blob/4.2.6-1/values.yaml

Alternatively, the Neo4j helm chart configuration documentation can also be found here https://neo4j.com/labs/neo4j-helm/1.0.0/configreference/

Mysql

Please refer to the master: persistence: section found in the values.yaml file in the Bitnami Mysql helm chart repository for a list of available parameters to customize such as size, access modes and so on.

https://github.com/bitnami/charts/blob/eeda6fcba43e1e98f37174479eb994badd2f5241/bitnami/mysql/values.yaml

Miscellaneous

Please refer to the persistence: section found in the values.yaml file in the Bitnami Rabbitmq helm chart repository for a list of available parameters to customize such as size, access modes and so on.


Deploying Virtalis Reach

Create a namespace:


kubectl create namespace "${REACH_NAMESPACE}"

Add namespace labels used by NetworkPolicies:


kubectl label namespace ingress-nginx reach-ingress=true; \
kubectl label namespace kube-system reach-egress=true

The ingress-nginx namespace in the first command will have to be modified if your Nginx ingress is deployed to a different namespace.

Configure Virtalis Reach TLS

Manually create a TLS secret from a TLS key and cert or use the LetsEncrypt integration with cert-manager.

Manually Creating a TLS Cert Secret


kubectl create secret tls -n "${REACH_NAMESPACE}" \
"${TLS_SECRET_NAME}" --key="tls.key" --cert="tls.crt"

LetsEncrypt with Cert-manager

Requirements:

  • The machine hosting Virtalis Reach can be reached via a public IP address (used to validate the ownership of your domain)
  • A domain that you own (cannot be used for domains ending with .local)
  • Inbound connections on port 80 are allowed
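Before requesting a certificate, it may be worth confirming that your domain resolves at all. A minimal, illustrative check is shown below, using localhost as a stand-in for your real REACH_DOMAIN (LetsEncrypt additionally requires the name to resolve to the public IP of the machine hosting Virtalis Reach):

```shell
# Substitute your real domain for this placeholder before running.
REACH_DOMAIN=localhost

# getent exits non-zero when the name does not resolve.
if getent hosts "$REACH_DOMAIN" >/dev/null; then
  echo "$REACH_DOMAIN resolves"
else
  echo "DNS lookup failed for $REACH_DOMAIN"
fi
```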

Create a namespace for cert-manager:


kubectl create namespace cert-manager

Install the recommended version of cert-manager:


kubectl apply -f https://github.com/jetstack/\
cert-manager/releases/download/v1.7.1/cert-manager.yaml

Create a new file:


nano prod_issuer.yaml

Paste in the following and replace variables wherever appropriate:


apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
  namespace: cert-manager
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    # Email address used for ACME registration
    email: <YOUR_EMAIL_ADDRESS>
    privateKeySecretRef:
      name: reach-tls-cert
    solvers:
    - http01:
        ingress:
          class: nginx

Source: https://www.digitalocean.com/community/tutorials/how-to-set-up-an-nginx-ingress-with-cert-manager-on-digitalocean-kubernetes

If you wish, you can follow the DigitalOcean guide above and deploy an example service to test cert-manager before using it with Virtalis Reach.

Download Installation Files

Log in with Oras:


oras login "${ACR_REGISTRY_NAME}".azurecr.io \
--username "${ACR_USERNAME}" \
-p "${ACR_PASSWORD}"

Pull the Kubernetes deployment file archive from the Virtalis registry and unzip it:


oras pull "${ACR_REGISTRY_NAME}"\
.azurecr.io/misc/k8s:$REACH_VERSION &&
tar -zxvf k8s.tar.gz

Make the installation scripts executable:


cd k8s && sudo chmod +x *.sh && sudo chmod +x misc/keycloak/migration/*.sh

Create and Deploy Secrets

Randomised secrets are used to securely interconnect the Virtalis Reach microservices.

The script below uses the pwgen package to generate a random string of 30 alphanumeric characters. Before proceeding, make sure pwgen is installed on your machine.


./create-secrets.sh
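For illustration, a secret of the kind described above (a random 30-character alphanumeric string) can be produced with pwgen, or equivalently with tr over /dev/urandom. This sketch is not part of create-secrets.sh itself:

```shell
# 30-character alphanumeric secret via pwgen (secure mode, one string):
#   pwgen -s 30 1
# Equivalent without pwgen, for illustration only:
tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 30; echo
```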

Deploy Virtalis Reach and Database Services


./deploy.sh

Wait until all pods are showing up as Ready:


watch -n2 kubectl get pods -n $REACH_NAMESPACE

You will now be able to access the Virtalis Reach frontend client by opening the domain Virtalis Reach was installed on in a web browser.

Install the Automated Backup System

Optionally install the automated backup system by referring to the Virtalis Reach Automated Backup System document or activate your own backup solution.

Retrieving the Keycloak Admin Password

Run the following command:


kubectl get secret --namespace ${REACH_NAMESPACE} \
keycloak -o jsonpath="{.data.admin_password}" \
| base64 --decode; echo

Refer to Virtalis Reach User Management for more information on how to administer the system inside Keycloak.

Post Deployment Clean-up


unset REACH_DOMAIN && \
unset TLS_SECRET_NAME && \
unset REACH_NAMESPACE && \
unset ACR_USERNAME && \
unset ACR_PASSWORD && \
unset ACR_REGISTRY_NAME && \
unset REACH_SC && \
unset REACH_SC_ARGS && \
unset reach_licence__key && \
unset reach_licence__signature

Clear bash history:


history -c

This will clean up any secrets exported in the system.
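A quick, illustrative way to confirm that no Reach-related variables remain in the current shell:

```shell
# grep exits non-zero when nothing matches, so a clean environment
# takes the else branch.
if env | grep -qE '^(REACH_|ACR_|TLS_SECRET_NAME|reach_licence__)'; then
  echo "warning: Reach-related variables are still set"
else
  echo "environment clean"
fi
```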

Test Network Policies

Virtalis Reach utilizes NetworkPolicies, which restrict communication between the internal services at the network level.

Note: NetworkPolicies require a supported Kubernetes network plugin like Cilium.

To test these policies, run a temporary pod:


kubectl run -it --rm test-pod \
-n ${REACH_NAMESPACE} --image=debian

Install the curl package:


apt update && apt install curl

Run a request to test the connection to one of the backend APIs; this should return a timeout error:


curl http://artifact-access-api-service:5000

Exit the pod, which will delete it:


exit

Additionally, you can test the egress by checking that any outbound connections made to a public address are denied.

Get the name of the edge-server pod:


kubectl get pods -n ${REACH_NAMESPACE} | grep edge-server

Exec inside the running pod using the pod name from above:


kubectl exec -it <pod_name> -n ${REACH_NAMESPACE} -- /bin/bash

Running a command like apt update, which makes an outbound request, should time out:


apt update

Exit the pod:


exit

December 13, 2022 15:26