II. Installation Instructions
1. Prerequisites
1.1. Client machine
Please install the following software on the machine that will be used to deploy VeridiumID:
Helm 3. Please check your installed version by running
helm version
1.2. Cluster
a) To ensure sufficient capacity and performance, the recommended minimum configuration for the cluster is to have three worker nodes with a combined total of at least 24 CPUs and 48GB of RAM. Each worker node needs to provide a minimum of 100GB local storage, in addition to an external volume capacity (EBS or similar) of 500GB that is necessary for data persistence. Please note that the resources allocated to the master nodes should be separate from the ones mentioned above.
b) About Operators: To deploy Zookeeper, Elasticsearch and Cassandra in the namespace, an operator will be installed for each of them, with its watchNamespace restricted to the namespace created above. Please make sure that no external operator is already watching the namespace where the application will be installed.
c) Please create a StorageClass with "reclaimPolicy: Retain" + ReadWriteOnce (RWO) access mode + Encrypted (optional); this will be used for the Cassandra, Elasticsearch and Zookeeper deployments. Choose a name for this StorageClass; it is referenced in the configuration files below as "<STORAGECLASS_PERSISTENCE>".
d) Please create another StorageClass with "reclaimPolicy: Retain" + ReadWriteMany (RWX) access mode + Encrypted (optional); this will be used for the Zookeeper and Elasticsearch backups. It is referenced in the configuration files below as "<STORAGECLASS_BACKUPS>".
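For illustration, a StorageClass manifest for the persistence class could look like the sketch below. This is only a sketch: the provisioner shown (ebs.csi.aws.com) and its parameters are assumptions for an AWS/EBS setup, so replace them with the values appropriate for your cloud or storage backend.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: storage-encrypted            # the name you will substitute for <STORAGECLASS_PERSISTENCE>
provisioner: ebs.csi.aws.com         # assumption: AWS EBS CSI driver; use your own provisioner
reclaimPolicy: Retain                # keeps the volumes after the PVCs are deleted
volumeBindingMode: WaitForFirstConsumer
parameters:
  encrypted: "true"                  # optional encryption
The backup StorageClass ("<STORAGECLASS_BACKUPS>") follows the same pattern, but must be backed by a provisioner that supports ReadWriteMany (the example value efs-sc used later suggests a shared filesystem such as EFS).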
2. Download configuration files
Please download veridiumid-saas-3.6.0.zip then unpack it by executing:
wget --user <NEXUS_USER> --password <NEXUS_PWD> https://veridium-repo.veridium-dev.com/repository/helm-releases/veridiumid-containers/3.6.0/veridiumid-saas-3.6.0.zip
unzip -o veridiumid-saas-3.6.0.zip -d 3.6.0
cd 3.6.0
Next customize the namespace and Cassandra datacenter name in configuration files:
sed -i 's|<NAMESPACE>|YOUR_NAMESPACE|g' values/*.yaml
# example: sed -i 's|<NAMESPACE>|dev1|g' values/*.yaml
sed -i 's|<DATACENTER_NAME>|YOUR_DATACENTER_NAME|g' values/*.yaml
# example: sed -i 's|<DATACENTER_NAME>|dc1|g' values/*.yaml
sed -i 's|<ENV_NO>|YOUR_ENV_NUMBER|g' values/*.yaml
# example: sed -i 's|<ENV_NO>|1|g' values/*.yaml
sed -i 's|<STORAGECLASS_PERSISTENCE>|YOUR_STORAGE_CLASS_ENCRYPTED|g' values/*.yaml
# example: sed -i 's|<STORAGECLASS_PERSISTENCE>|storage-encrypted|g' values/*.yaml
sed -i 's|<STORAGECLASS_BACKUPS>|STORAGE_CLASS_USED_FOR_BACKUPS|g' values/*.yaml
# example: sed -i 's|<STORAGECLASS_BACKUPS>|efs-sc|g' values/*.yaml
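As a quick sanity check (assuming every placeholder follows the <UPPER_CASE> pattern used above), you can list any placeholders that are still present:
grep -rn '<[A-Z_]*>' values/*.yaml
# any matches indicate placeholders still to be replaced; the registry URL is handled separately in step 3.1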
3. Before installation
3.1. Create a Secret named "vid-credentials-ecr" with the username/password used to pull images from your private Docker registry
In "default" namespace you need to create a "Secret" with exact name "vid-credentials-ecr".
oc create secret docker-registry vid-credentials-ecr --docker-server=<DOCKER_REGISTRY_SERVER> --docker-username=<DOCKER_USER> --docker-password=<DOCKER_PASSWORD> --docker-email=<DOCKER_EMAIL> -n default
# example: oc create secret docker-registry vid-credentials-ecr --docker-server=registryhub.myregistry.com --docker-username=dockeruser --docker-password=dockerpass --docker-email=docker@myenv.com -n default
Replace our ECR registry (018397616607.dkr.ecr.eu-central-1.amazonaws.com) with your own registry in all configuration files:
sed -i 's|018397616607.dkr.ecr.eu-central-1.amazonaws.com|YOUR_REGISTRY_URL|g' values/*.yaml
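To confirm that no references to the original registry remain:
grep -rn '018397616607' values/*.yaml
# no output means the registry URL has been replaced everywhere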
3.2. Install Custom Resource Definitions for ECK operator
oc create -f values/veridiumid-crds/eck-operator-crds.yaml
# validate:
oc get crd | grep elastic
# output will be:
agents.agent.k8s.elastic.co XXXX-XXXX-XXTXX:XX:XXZ
apmservers.apm.k8s.elastic.co XXXX-XXXX-XXTXX:XX:XXZ
beats.beat.k8s.elastic.co XXXX-XXXX-XXTXX:XX:XXZ
elasticmapsservers.maps.k8s.elastic.co XXXX-XXXX-XXTXX:XX:XXZ
elasticsearchautoscalers.autoscaling.k8s.elastic.co XXXX-XXXX-XXTXX:XX:XXZ
elasticsearches.elasticsearch.k8s.elastic.co XXXX-XXXX-XXTXX:XX:XXZ
enterprisesearches.enterprisesearch.k8s.elastic.co XXXX-XXXX-XXTXX:XX:XXZ
kibanas.kibana.k8s.elastic.co XXXX-XXXX-XXTXX:XX:XXZ
stackconfigpolicies.stackconfigpolicy.k8s.elastic.co XXXX-XXXX-XXTXX:XX:XXZ
3.3. Install Custom Resource Definitions for the Zookeeper Operator
oc create -f values/veridiumid-crds/zookeeper.yaml
# validate:
oc get crd | grep zookeeper
# output will be:
zookeeperclusters.zookeeper.pravega.io XXXX-XXXX-XXTXX:XX:XXZ
3.4. Install Custom Resource Definitions for K8ssandra Operator
oc create -f values/veridiumid-crds/k8ssandra-operator.yaml
oc get crd | grep k8ssandra
# output will be:
cassandrabackups.medusa.k8ssandra.io XXXX-XXXX-XXTXX:XX:XXZ
cassandrarestores.medusa.k8ssandra.io XXXX-XXXX-XXTXX:XX:XXZ
cassandratasks.control.k8ssandra.io XXXX-XXXX-XXTXX:XX:XXZ
clientconfigs.config.k8ssandra.io XXXX-XXXX-XXTXX:XX:XXZ
k8ssandraclusters.k8ssandra.io XXXX-XXXX-XXTXX:XX:XXZ
medusabackupjobs.medusa.k8ssandra.io XXXX-XXXX-XXTXX:XX:XXZ
medusabackups.medusa.k8ssandra.io XXXX-XXXX-XXTXX:XX:XXZ
medusabackupschedules.medusa.k8ssandra.io XXXX-XXXX-XXTXX:XX:XXZ
medusarestorejobs.medusa.k8ssandra.io XXXX-XXXX-XXTXX:XX:XXZ
medusatasks.medusa.k8ssandra.io XXXX-XXXX-XXTXX:XX:XXZ
reapers.reaper.k8ssandra.io XXXX-XXXX-XXTXX:XX:XXZ
replicatedsecrets.replication.k8ssandra.io XXXX-XXXX-XXTXX:XX:XXZ
stargates.stargate.k8ssandra.io XXXX-XXXX-XXTXX:XX:XXZ
3.5. Install CertManager - this is a requirement of k8ssandra-operator
If you already have CertManager installed in your Cluster, you can skip this step.
helm upgrade --install -f values/cert-manager-values.yaml cert-manager helm/cert-manager-v1.12.3.tgz --set installCRDs=true
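To confirm that cert-manager started correctly (the command assumes the chart above was installed into your current namespace; add the appropriate -n flag if you deployed it elsewhere):
oc get pod | grep cert-manager
# wait until all cert-manager pods show status "Running" and Ready "1/1"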
4. Namespace where you want to install VeridiumID
4.1. Create a namespace
# create the namespace
oc apply -f values/namespace.yaml
4.2. Create a Secret named "vid-credentials-ecr" with the username/password used to pull images from your private Docker registry
oc create secret docker-registry vid-credentials-ecr --docker-server=<DOCKER_REGISTRY_SERVER> --docker-username=<DOCKER_USER> --docker-password=<DOCKER_PASSWORD> --docker-email=<DOCKER_EMAIL> -n <NAMESPACE>
4.3. Create a PersistentVolumeClaim that will be used for backing up Zookeeper/ElasticSearch
oc -n <NAMESPACE> apply -f values/pvc-backups.yaml
5. Deploy persistence charts
5.1. Cassandra
Edit the values/k8ssandra-values.yaml file and set the following values:
Property | Description | Recommended value |
---|---|---|
reaper.autoScheduling.enabled | Enable or disable automatic scheduling of repair/compaction by Reaper | Default: true |
medusa.enable | If true, Medusa container will be launched. | false |
medusa.storageProperties.podStorage.storageClassName | The type of storage to be provisioned for storing Medusa data | <STORAGECLASS_PERSISTENCE> |
medusa.storageProperties.podStorage.size | The size of the disks where Medusa data will be stored | 100Gi |
medusa.storageProperties.maxBackupCount | Maximum number of backups to keep (used by the purge process). | 2 |
medusa.storageProperties.maxBackupAge | Maximum backup age that the purge process should observe. | 0 |
medusa_backup.enable | Enable or disable Cassandra backups using Medusa | Default: true |
medusa_backup.MedusaBackupSchedule.cronSchedule | Schedule when Medusa Backup will run | Default: 0 0 * * * |
medusa_backup.MedusaBackupSchedulePurge.schedule | Schedule when purge job will run | Default: 0 1 * * * |
cassandra.clusterName | Cassandra ClusterName | cassandra-<ENV_NO> |
cassandra.datacenters[].name | Name of the datacenter | dc1 |
cassandra.datacenters[].size | Number of Cassandra nodes in cluster | 3 |
cassandra.storageConfig.cassandraDataVolumeClaimSpec.storageClassName | The type of storage to be provisioned for storing Cassandra data | <STORAGECLASS_PERSISTENCE> |
cassandra.storageConfig.cassandraDataVolumeClaimSpec.resources.requests.storage | The size of the disks where Cassandra data will be stored | 40Gi |
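For orientation, an override following the property paths in the table might look like the fragment below. This is a sketch only; the structure shipped in values/k8ssandra-values.yaml is authoritative, so edit that file rather than copying this verbatim.
cassandra:
  clusterName: cassandra-1                       # cassandra-<ENV_NO>
  datacenters:
    - name: dc1                                  # <DATACENTER_NAME>
      size: 3
  storageConfig:
    cassandraDataVolumeClaimSpec:
      storageClassName: storage-encrypted        # <STORAGECLASS_PERSISTENCE>
      resources:
        requests:
          storage: 40Gi
medusa:
  enable: false
medusa_backup:
  enable: true
Once the values are set, continue with the commands below.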
# install the operator
helm upgrade --install --no-hooks -n <NAMESPACE> -f values/k8ssandra-operator-values.yaml k8ssandra-operator-<ENV_NO> helm/k8ssandra-operator-0.39.2.tgz
# confirm operator is running: wait until status is "Running" and Ready is "1/1"
oc -n <NAMESPACE> get pod | grep k8ssandra-operator
# create the actual cluster
helm upgrade --install -n <NAMESPACE> -f values/k8ssandra-values.yaml k8ssandra-<ENV_NO> helm/vid-k8ssandra-0.5.12.tgz
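Cluster creation can take several minutes; you can follow the Cassandra pods with:
oc -n <NAMESPACE> get pod | grep cassandra-<ENV_NO>
# wait until all Cassandra pods report status "Running" and all their containers are Ready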
In case you need to restart the Cassandra pods, use the following command:
kubectl rollout restart statefulset cassandra-<ENV_NO>-dc1-default-sts
5.2. Elasticsearch
The default settings for Elasticsearch are displayed below. If required, they can be modified by editing the values/elasticsearch-values.yaml file.
Property | Description | Recommended value |
---|---|---|
elasticsearch.storageClassName | The type of storage to be provisioned for storing Elasticsearch data | <STORAGECLASS_PERSISTENCE> |
elasticsearch.diskSize | The size of the disks where Elasticsearch data will be stored | 20Gi |
elasticsearch.replicaCount | Number of Elasticsearch nodes in cluster | 3 |
Next, execute the following:
# install the operator
helm upgrade --install -n <NAMESPACE> -f values/eck-operator-values.yaml eck-operator-<ENV_NO> helm/eck-operator-2.1.0.tgz
# confirm operator is running: wait until status is "Running" and Ready is "1/1"
oc -n <NAMESPACE> get pod | grep eck-operator
# create the cluster
helm upgrade --install -n <NAMESPACE> -f values/elasticsearch-values.yaml elasticsearch-<ENV_NO> helm/elasticsearch-0.2.2.tgz
# check elasticsearch
oc -n <NAMESPACE> get pod | grep elasticsearch
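If you need the elastic user's password for a manual check, it is stored in the secret referenced later as elasticsearchUser and can be read with the command below (a sketch, assuming the secret name follows the pattern elasticsearch-<ENV_NO>-es-elastic-user):
oc -n <NAMESPACE> get secret elasticsearch-<ENV_NO>-es-elastic-user -o jsonpath='{.data.elastic}' | base64 -d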
5.3. Zookeeper
The default settings for Zookeeper are displayed below. If required, they can be modified by editing the values/zookeeper-values.yaml file.
Property | Description | Recommended value |
---|---|---|
persistence.storageClassName | The type of storage to be provisioned for storing Zookeeper data | <STORAGECLASS_PERSISTENCE> |
persistence.volumeSize | The size of the disks where Zookeeper data will be stored | 1Gi |
persistence.reclaimPolicy | The reclaim Policy of the disks where Zookeeper data will be stored | Retain |
replicas | Number of Zookeeper nodes in cluster | 3 |
# install the operator
helm upgrade --install -n <NAMESPACE> -f values/zookeeper-operator-values.yaml zookeeper-operator-<ENV_NO> helm/zookeeper-operator-0.2.15.tgz
# create the actual cluster
helm upgrade --install -n <NAMESPACE> --timeout 30m -f values/zookeeper-values.yaml zookeeper-<ENV_NO> helm/zookeeper-0.2.15.tgz
# validate:
oc -n <NAMESPACE> get sts/zookeeper-<ENV_NO>
6. Configure the Veridium environment by running the vid-installer chart
6.1 Set configuration variables
The installation can be configured by editing the values/vid-installer-values.yaml file and setting values according to your environment and specific requirements.
Here is a summary of variables available:
Variable | Description |
---|---|
imagePullSecrets.name | Secret containing credentials used for pulling images from private docker registry |
cassandraHost | Comma-separated list of hostname(s) or IP(s) of Cassandra. |
zookeeperHost | Comma-separated list of hostname(s) or IP(s) of Zookeeper. Should be: "<NAMESPACE>-zookeeper-client" |
elasticsearchHost | Comma-separated list of hostname(s) or IP(s) of Elasticsearch. Should be: "<NAMESPACE>-elasticsearch-es-http" |
elasticsearchUser | Secret name where Elasticsearch keeps the username/password. Should be: "elasticsearch-<ENV_NO>-es-elastic-user" |
cassandraDatacenter | Name of datacenter as defined in the Cassandra cluster (default "dc1") |
cassandraReplicationType | Replication type as defined in the Cassandra cluster (default "NETWORK") |
cassandraReplicationFactor | Replication factor as defined in the Cassandra cluster (default 3) |
cassandraConsistencyLevel | Consistency level as defined in the Cassandra cluster (default "LOCAL_QUORUM") |
domainName | Domain where this installation will be accessible |
environmentName | Name of this installation. This will be the subdomain. So this installation will expect ingress traffic at "environmentName.domainName" |
sniEnabled | If true, the services will be exposed under dedicated server names (e.g. admin-my-env.domain.com, ssp-my-env.domain.com) |
ssl.generateCert | If true, a self-signed certificate will be generated |
ssl.defaultCert | base64 encoded string of a PEM certificate. |
ssl.defaultKey | base64 encoded string of a PEM key. |
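As an illustration, a minimal set of overrides might look like the fragment below (example values only, using namespace dev1 and environment number 1 as in the earlier examples; the comments in values/vid-installer-values.yaml remain the reference):
imagePullSecrets:
  name: vid-credentials-ecr
cassandraHost: "<YOUR_CASSANDRA_SERVICE>"        # hostname(s) of your Cassandra service
zookeeperHost: "dev1-zookeeper-client"           # <NAMESPACE>-zookeeper-client
elasticsearchHost: "dev1-elasticsearch-es-http"  # <NAMESPACE>-elasticsearch-es-http
elasticsearchUser: "elasticsearch-1-es-elastic-user"
domainName: "example.com"
environmentName: "dev1"                          # reachable at dev1.example.com
sniEnabled: true
ssl:
  generateCert: true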
6.2 Start the installer
helm install -n <NAMESPACE> --timeout 60m -f values/vid-installer-values.yaml vid-installer helm/vid-installer-0.5.12.tgz
This will start the installation process.
Please allow up to 60 minutes for it to complete.
To monitor the progress, you can use the following command to tail the logs in a separate terminal window:
oc logs -n <NAMESPACE> --follow $(oc get pod -n <NAMESPACE> --no-headers -o custom-columns=":metadata.name" | grep vid-installer | grep -v vid-installer-post)
7. Deploy webapps
7.1. Configuration of each subsystem
The values/veridiumid-values.yaml file contains configuration properties for each subsystem of VeridiumID. Set the following properties as needed:
Variable | Description |
---|---|
logrotate_conf.*.size | The size in MB at which the application log will be rotated |
logrotate_conf.*.rotate | How many rotated log copies to keep |
*.replicaCount | Number of pods for that particular subsystem. This can be easily changed at a later time. |
vid-haproxy.service.nodePort | The designated port for HAProxy to listen on. This will need to be within the 30000-32767 range, considering it is being exposed as NodePort. |
vid-haproxy.service.additionalNodePorts | For "ports"-type installation, the designated port for each subsystem. Default values are: websecadmin: 32102 |
vid-shibboleth.sessionTimeout | Expressed in seconds. Default: 5 |
vid-maintenance.backup.zookeeper.enable | Enable the Zookeeper backup using the VeridiumID script |
vid-maintenance.backup.zookeeper.keep | How many Zookeeper backups to keep |
vid-maintenance.backup.zookeeper.schedule | Cron schedule for the Zookeeper backup |
vid-maintenance.backup.elasticsearch.enable | Enable the Elasticsearch backup using the VeridiumID script |
vid-maintenance.backup.elasticsearch.keep | How many Elasticsearch backups to keep |
vid-maintenance.backup.elasticsearch.schedule | Cron schedule for the Elasticsearch backup |
vid-maintenance.backup.backup.elasticsearch_backup_with_policy.enable | Enable ElasticSearch backup with Policy |
vid-maintenance.backup.backup.elasticsearch_backup_with_policy.schedule | When to take Elasticsearch snapshots. For example: cron schedule that triggers every day at noon: 0 0 12 * * ? |
vid-maintenance.backup.backup.elasticsearch_backup_with_policy.expire_after | How many days to keep the Elasticsearch snapshots. By default, 30 days. |
vid-maintenance.backup.backup.elasticsearch_backup_with_policy.min_count | Minimum number of Elasticsearch snapshots to retain regardless of age. By default, 5. |
vid-maintenance.backup.backup.elasticsearch_backup_with_policy.max_count | Maximum number of Elasticsearch snapshots to retain regardless of age. By default, 50. |
vid-maintenance.backup.cassandraHost | Comma-separated list of hostname(s) or IP(s) of Cassandra. |
vid-maintenance.backup.zookeeperHost | Comma-separated list of hostname(s) or IP(s) of Zookeeper. Should be: "<NAMESPACE>-zookeeper-client" |
vid-maintenance.backup.elasticsearchHost | Comma-separated list of hostname(s) or IP(s) of Elasticsearch. Should be: "<NAMESPACE>-elasticsearch-es-http" |
vid-maintenance.backup.elasticsearchUser | Secret name where Elasticsearch keeps the username/password. Should be: "elasticsearch-<ENV_NO>-es-elastic-user" |
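A short example of overrides following the table above (illustrative values only; the shipped values/veridiumid-values.yaml contains the complete structure and defaults):
vid-haproxy:
  service:
    nodePort: 30100            # example value; must be within the 30000-32767 range
vid-websecadmin:
  replicaCount: 2
vid-maintenance:
  backup:
    zookeeper:
      enable: true
      keep: 5
      schedule: "0 2 * * *"    # example cron schedule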
7.2. Launching the web applications
helm upgrade --install -n <NAMESPACE> -f values/veridiumid-values.yaml veridiumid helm/veridiumid-0.5.12.tgz
7.3. Enable Cassandra encryption (optional)
First, generate certificates for Cassandra by running the following commands:
maintenance_pod=$(oc -n <NAMESPACE> get pods -o custom-columns=POD:.metadata.name --no-headers --sort-by=.metadata.creationTimestamp | grep vid-maintenance | tail -1)
oc -n <NAMESPACE> exec $maintenance_pod -- bash -c "bash /opt/veridiumid/migration/bin/migration.sh -d /tmp/zk && mkdir -p /tmp/k8ss_tmp && cp /tmp/zk/config.json /tmp/k8ss_tmp/ && cd /tmp/k8ss_tmp && bash /scripts/generate-cassandra-keystore.sh && mv config.json /tmp/zk/ && bash /opt/veridiumid/migration/bin/migration.sh -u /tmp/zk"
Next, edit the values/k8ssandra-values.yaml file and set encryption.enabled to true, then update the K8ssandra chart by executing:
helm upgrade --install -n <NAMESPACE> -f values/k8ssandra-values.yaml k8ssandra-<ENV_NO> helm/vid-k8ssandra-0.5.12.tgz
Next, force-restart the Cassandra pods and wait for them to become ready:
oc -n <NAMESPACE> delete pod cassandra-<ENV_NO>-dc1-default-sts-0
oc -n <NAMESPACE> delete pod cassandra-<ENV_NO>-dc1-default-sts-1
oc -n <NAMESPACE> delete pod cassandra-<ENV_NO>-dc1-default-sts-2
oc -n <NAMESPACE> rollout status statefulset cassandra-<ENV_NO>-dc1-default-sts --timeout=600s
Finally, restart Veridium pods:
oc -n <NAMESPACE> rollout restart deployment vid-adservice vid-fido vid-freeradius vid-opa vid-shibboleth vid-ssp vid-websec vid-websecadmin
8. Update HAProxy certificate
The installation uses a self-signed certificate for HAProxy. We recommend using a trusted certificate for production purposes.
Please use a PEM file that includes the private key, certificate and full CA chain.
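If the key, certificate and CA chain are in separate files, a combined PEM can be produced with a simple concatenation (the file names below are placeholders):
cat server.key server.crt ca-chain.crt > haproxy-full.pem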
In order to update the certificate please execute the following commands:
# updates the secret that contains the certificate
oc get secret -n <NAMESPACE> vid-haproxy-certs -o yaml | sed "s,$(oc get secret -n <NAMESPACE> vid-haproxy-certs -ojsonpath='{.data.server\.pem}'),$(base64 -w 0 <FULL_PATH_TO_PEM_CERTIFICATE>),g" | oc apply -f -
# restarts HAProxy in order to load the new certificate from the secret
oc rollout restart -n <NAMESPACE> daemonset/vid-haproxy
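To confirm that HAProxy now serves the new certificate, you can inspect it with openssl (replace <NODE_IP> and <NODE_PORT> with a worker node address and the configured vid-haproxy NodePort):
echo | openssl s_client -connect <NODE_IP>:<NODE_PORT> 2>/dev/null | openssl x509 -noout -subject -dates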
8.1 Copy the certificate to WebsecAdmin (OPTIONAL)
In case you need WebsecAdmin to use the same certificate as HAProxy (for instance when exposing WebsecAdmin directly without proxying through HAProxy), connect to the vid-maintenance pod and execute the following:
bash /scripts/update-websecadmin-certificate-secret.sh
9. Configure websecadmin
Please follow the configuration wizard that is presented when first launching WebsecAdmin. Additional instructions can be found at this link.
Once the configuration is complete, please go to Settings - Certificates - Truststores, click on Add Truststore and upload the <FULL_PATH_TO_PEM_CERTIFICATE> file.
To configure Veridium as identity provider, please go to Settings - Services - SSP and click on Configure Veridium as IDP in the right-side panel.
Uninstall procedure
#
# WARNING: THIS ACTION IS IRREVERSIBLE AND RESULTS IN LOSS OF DATA !
# delete Helm-managed resources
helm -n <NAMESPACE> ls --short | xargs -L1 helm -n <NAMESPACE> delete
# delete Helm-managed resources from 'default' namespace (skip this if the cert-manager was not deployed using veridiumid)
helm delete cert-manager
# delete secrets from 'default' namespace (skip this if cert-manager was not deployed using veridiumid)
oc delete secret vid-credentials-ecr -n default
oc delete secret cert-manager-webhook-ca
# delete the namespace
oc delete -f values/namespace.yaml
# delete Custom Resource Definitions (skip this if CRDs have not been installed using veridiumid)
oc delete -f values/veridiumid-crds/eck-operator-crds.yaml
oc delete -f values/veridiumid-crds/k8ssandra-operator.yaml
oc delete -f values/veridiumid-crds/zookeeper.yaml
Known issues
On some occasions an error similar to the following could be encountered:
"Internal error occurred: failed calling webhook "kb.io - This is a premium name ": Post "https://k8ssandra-operator-<ENV_NO>-cass-operator-webhook-service.<NAMESPACE>.svc:443/validate-cassandra-datastax-com-v1beta1-cassandradatacenter?timeout=10s": Address is not allowed"
To fix it, run the following command then retry:
oc delete ValidatingWebhookConfiguration k8ssandra-operator-<ENV_NO>-cass-operator-validating-webhook-configuration k8ssandra-operator-<ENV_NO>-validating-webhook-configuration
CRD deletion gets stuck (oc get crd -oname | grep k8ssandra | xargs oc delete)
oc get crd | grep k8ssandra
oc patch crd/k8ssandraclusters.k8ssandra.io -p '{"metadata":{"finalizers":[]}}' --type=merge
Elasticsearch crashes with the following error: "Unable to access 'path.repo' (/mnt/backups/elasticsearch-backup)"
Make sure the folder exists and is accessible for read/write, as described in section 1.2.
Maintenance operations
Changing HAProxy configuration
HAProxy configuration is stored in a configmap named vid-haproxy-config in the deployment namespace.
# edit the configuration
oc edit configmap -n <NAMESPACE> vid-haproxy-config
# restart HAProxy
oc rollout restart -n <NAMESPACE> daemonset/vid-haproxy
Restarting a malfunctioning pod
Most pods have liveness probes in place that cause automatic restart in case of malfunction.
However, if a manual restart is necessary, please execute the following command:
oc rollout restart -n <NAMESPACE> deployment/<NAME_OF_DEPLOYMENT>