
II. Installation

1. Prerequisites

1.1. Client machine

Please install the following software on the machine that will be used to deploy VeridiumID. At minimum, the commands used throughout this guide require: oc (or kubectl), helm, wget, unzip, sed, and base64.

1.2. Cluster

a) To ensure sufficient capacity and performance, the cluster should have at least three worker nodes with a combined total of at least 24 CPUs and 48 GB of RAM. Each worker node must provide a minimum of 100 GB of local storage, plus 500 GB of external volume capacity (EBS or similar) for data persistence. Note that these figures exclude the resources allocated to the master nodes.
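To sanity-check worker capacity against these minimums, the allocatable resources can be listed per node (a convenience sketch; the worker node label may differ between Kubernetes distributions):

```shell
# List allocatable CPU and memory for each worker node
oc get nodes -l node-role.kubernetes.io/worker \
  -o custom-columns=NAME:.metadata.name,CPU:.status.allocatable.cpu,MEM:.status.allocatable.memory
```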

b) About operators: Zookeeper, Elasticsearch, and Cassandra are each deployed via a dedicated operator installed in the application namespace, with watchNamespace restricted to that namespace only. Make sure no external operator is already watching the namespace where the application will be installed.

c) Please create a StorageClass with "reclaimPolicy: Retain", ReadWriteOnce (RWO) access mode, and (optionally) encryption. It will be used for the Cassandra, Elasticsearch, and Zookeeper deployments. Choose a name for this StorageClass; it is referenced below as "<STORAGECLASS_PERSISTENCE>".

d) Please create another StorageClass with "reclaimPolicy: Retain", ReadWriteMany (RWX) access mode, and (optionally) encryption. It will be used for Zookeeper and Elasticsearch backups and is referenced below as "<STORAGECLASS_BACKUPS>".
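As an illustration, a manifest for the persistence StorageClass might look like the following. The provisioner shown (the AWS EBS CSI driver) and the encryption parameter are assumptions for an AWS environment; substitute values appropriate to your platform:

```shell
# Write an example StorageClass manifest; adjust the provisioner and parameters for your platform
cat <<'EOF' > storageclass-persistence.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: storage-encrypted          # your <STORAGECLASS_PERSISTENCE> name
provisioner: ebs.csi.aws.com       # assumption: AWS EBS CSI driver
reclaimPolicy: Retain
volumeBindingMode: WaitForFirstConsumer
parameters:
  encrypted: "true"                # optional encryption
EOF
# review, then apply with: oc apply -f storageclass-persistence.yaml
```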

2. Download configuration files

Please download veridiumid-containers-3.6.0.zip then unpack it by executing:

CODE
wget --user <NEXUS_USER> --password <NEXUS_PWD> https://veridium-repo.veridium-dev.com/repository/helm-releases/veridiumid-containers/3.6.0/veridiumid-containers-3.6.0.zip
unzip veridiumid-containers-3.6.0.zip

Next, customize the namespace and Cassandra datacenter name in the configuration files:

CODE
sed -i 's|<NAMESPACE>|YOUR_NAMESPACE|g' veridiumid-containers/*.yaml
# example: sed -i 's|<NAMESPACE>|dev1|g' veridiumid-containers/*.yaml
sed -i 's|<DATACENTER_NAME>|YOUR_DATACENTER_NAME|g' veridiumid-containers/*.yaml
# example: sed -i 's|<DATACENTER_NAME>|dc1|g' veridiumid-containers/*.yaml
sed -i 's|<ENV_NO>|YOUR_ENV_NUMBER|g' veridiumid-containers/*.yaml
# example: sed -i 's|<ENV_NO>|1|g' veridiumid-containers/*.yaml
sed -i 's|<STORAGECLASS_PERSISTENCE>|YOUR_STORAGE_CLASS_ENCRYPTED|g' veridiumid-containers/*.yaml
# example: sed -i 's|<STORAGECLASS_PERSISTENCE>|storage-encrypted|g' veridiumid-containers/*.yaml
sed -i 's|<STORAGECLASS_BACKUPS>|STORAGE_CLASS_USED_FOR_BACKUPS|g' veridiumid-containers/*.yaml
# example: sed -i 's|<STORAGECLASS_BACKUPS>|efs-sc|g' veridiumid-containers/*.yaml
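After running the substitutions, a quick check (an optional convenience, not part of the official procedure) confirms that no angle-bracket placeholders remain:

```shell
# Print any files that still contain unreplaced <PLACEHOLDER> tokens
grep -l '<[A-Z_][A-Z_]*>' veridiumid-containers/*.yaml || echo "All placeholders replaced."
```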

3. Before installation

3.1. Create a Secret named "vid-credentials-ecr" with the username/password needed to pull images from your private Docker registry

Create a Secret with the exact name "vid-credentials-ecr" in the "default" namespace.

CODE
oc create secret docker-registry vid-credentials-ecr --docker-server=<DOCKER_REGISTRY_SERVER> --docker-username=<DOCKER_USER> --docker-password=<DOCKER_PASSWORD> --docker-email=<DOCKER_EMAIL>
# example: oc create secret docker-registry vid-credentials-ecr --docker-server=registryhub.myregistry.com --docker-username=dockeruser --docker-password=dockerpass --docker-email=docker@myenv.com

Replace our ECR registry (018397616607.dkr.ecr.eu-central-1.amazonaws.com) with your own registry in all files in the veridiumid-containers folder.

CODE
sed -i 's|018397616607.dkr.ecr.eu-central-1.amazonaws.com|YOUR_REGISTRY_URL|g' veridiumid-containers/*.yaml
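To confirm the replacement took effect everywhere (an optional check):

```shell
# Print any files that still reference the Veridium ECR registry
grep -l '018397616607\.dkr\.ecr\.eu-central-1\.amazonaws\.com' veridiumid-containers/*.yaml \
  || echo "Registry replaced in all files."
```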

3.2. Install Custom Resource Definitions for ECK operator

CODE
oc create -f veridiumid-containers/veridiumid-crds/eck-operator-crds.yaml

# validate:
oc get crd | grep elastic

# output will be:
agents.agent.k8s.elastic.co                            XXXX-XXXX-XXTXX:XX:XXZ
apmservers.apm.k8s.elastic.co                          XXXX-XXXX-XXTXX:XX:XXZ
beats.beat.k8s.elastic.co                              XXXX-XXXX-XXTXX:XX:XXZ
elasticmapsservers.maps.k8s.elastic.co                 XXXX-XXXX-XXTXX:XX:XXZ
elasticsearchautoscalers.autoscaling.k8s.elastic.co    XXXX-XXXX-XXTXX:XX:XXZ
elasticsearches.elasticsearch.k8s.elastic.co           XXXX-XXXX-XXTXX:XX:XXZ
enterprisesearches.enterprisesearch.k8s.elastic.co     XXXX-XXXX-XXTXX:XX:XXZ
kibanas.kibana.k8s.elastic.co                          XXXX-XXXX-XXTXX:XX:XXZ
stackconfigpolicies.stackconfigpolicy.k8s.elastic.co   XXXX-XXXX-XXTXX:XX:XXZ

3.3. Install Custom Resource Definitions for ECK Zookeeper

CODE
oc create -f veridiumid-containers/veridiumid-crds/zookeeper.yaml

# validate:
oc get crd | grep zookeeper

# output will be:
zookeeperclusters.zookeeper.pravega.io                 XXXX-XXXX-XXTXX:XX:XXZ

3.4. Install Custom Resource Definitions for K8ssandra Operator

CODE
oc create -f veridiumid-containers/veridiumid-crds/k8ssandra-operator.yaml

oc get crd | grep k8ssandra
# output will be:
cassandrabackups.medusa.k8ssandra.io                   XXXX-XXXX-XXTXX:XX:XXZ
cassandrarestores.medusa.k8ssandra.io                  XXXX-XXXX-XXTXX:XX:XXZ
cassandratasks.control.k8ssandra.io                    XXXX-XXXX-XXTXX:XX:XXZ
clientconfigs.config.k8ssandra.io                      XXXX-XXXX-XXTXX:XX:XXZ
k8ssandraclusters.k8ssandra.io                         XXXX-XXXX-XXTXX:XX:XXZ
medusabackupjobs.medusa.k8ssandra.io                   XXXX-XXXX-XXTXX:XX:XXZ
medusabackups.medusa.k8ssandra.io                      XXXX-XXXX-XXTXX:XX:XXZ
medusabackupschedules.medusa.k8ssandra.io              XXXX-XXXX-XXTXX:XX:XXZ
medusarestorejobs.medusa.k8ssandra.io                  XXXX-XXXX-XXTXX:XX:XXZ
medusatasks.medusa.k8ssandra.io                        XXXX-XXXX-XXTXX:XX:XXZ
reapers.reaper.k8ssandra.io                            XXXX-XXXX-XXTXX:XX:XXZ
replicatedsecrets.replication.k8ssandra.io             XXXX-XXXX-XXTXX:XX:XXZ
stargates.stargate.k8ssandra.io                        XXXX-XXXX-XXTXX:XX:XXZ

3.5. Install cert-manager (a requirement of k8ssandra-operator)

If you already have CertManager installed in your Cluster, you can skip this step.

CODE
helm upgrade --install --username <NEXUS_USER> --password <NEXUS_PWD> -f veridiumid-containers/cert-manager-values.yaml cert-manager https://veridium-repo.veridium-dev.com/repository/helm-releases/veridiumid-containers/3.6.0/cert-manager-v1.12.3.tgz --set installCRDs=true
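Before continuing, it is worth confirming that the cert-manager pods came up (the namespace may differ depending on your values file; this listing searches all namespaces):

```shell
# cert-manager, cainjector and webhook pods should all be Running
oc get pods -A | grep cert-manager
```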

4. Namespace where you want to install VeridiumID

4.1. Create a namespace

CODE
# create the namespace
oc apply -f veridiumid-containers/namespace.yaml

4.2. Create a Secret named "vid-credentials-ecr" in the namespace, with the username/password needed to pull images from your private Docker registry

CODE
oc create secret docker-registry vid-credentials-ecr --docker-server=<DOCKER_REGISTRY_SERVER> --docker-username=<DOCKER_USER> --docker-password=<DOCKER_PASSWORD> --docker-email=<DOCKER_EMAIL> -n <NAMESPACE>

4.3. Create a PersistentVolumeClaim that will be used for backing up Zookeeper/ElasticSearch

CODE
oc -n <NAMESPACE> apply -f veridiumid-containers/pvc-backups.yaml

5. Deploy persistence charts

5.1. Cassandra

Edit the veridiumid-containers/k8ssandra-values.yaml file and set the following values:

reaper.autoScheduling.enabled
  Enable or disable automatic scheduling of repair/compaction runs by Reaper. Default: true

medusa.enable
  If true, the Medusa container will be launched. Recommended: false

medusa.storageProperties.podStorage.storageClassName
  The type of storage to be provisioned for storing Medusa data. Recommended: <STORAGECLASS_PERSISTENCE>

medusa.storageProperties.podStorage.size
  The size of the disks where Medusa data will be stored. Recommended: 100Gi

medusa.storageProperties.maxBackupCount
  Maximum number of backups to keep (used by the purge process). Recommended: 2

medusa.storageProperties.maxBackupAge
  Maximum backup age that the purge process should observe. Recommended: 0

medusa_backup.enable
  Enable or disable backing up Cassandra using Medusa. Default: true

medusa_backup.MedusaBackupSchedule.cronSchedule
  Schedule on which the Medusa backup will run. Default: 0 0 * * *

medusa_backup.MedusaBackupSchedulePurge.schedule
  Schedule on which the purge job will run. Default: 0 1 * * *

cassandra.clusterName
  Cassandra cluster name. Recommended: cassandra-<ENV_NO>

cassandra.datacenter1.name
  Name of the datacenter. Recommended: dc1

cassandra.datacenter1.size
  Number of Cassandra nodes in the cluster. Recommended: 3

cassandra.datacenter1.storageConfig.cassandraDataVolumeClaimSpec.storageClassName
  The type of storage to be provisioned for storing Cassandra data. Recommended: <STORAGECLASS_PERSISTENCE>

cassandra.datacenter1.storageConfig.cassandraDataVolumeClaimSpec.resources.requests.storage
  The size of the disks where Cassandra data will be stored. Recommended: 40Gi

CODE
# install the operator
helm upgrade --install --no-hooks -n <NAMESPACE> --username <NEXUS_USER> --password <NEXUS_PWD> -f veridiumid-containers/k8ssandra-operator-values.yaml k8ssandra-operator-<ENV_NO> https://veridium-repo.veridium-dev.com/repository/helm-releases/veridiumid-containers/3.6.0/k8ssandra-operator-0.39.2.tgz

# confirm operator is running: wait until status is "Running" and Ready is "1/1"
oc -n <NAMESPACE> get pod | grep k8ssandra-operator

# create the actual cluster
helm upgrade --install -n <NAMESPACE> --username <NEXUS_USER> --password <NEXUS_PWD> -f veridiumid-containers/k8ssandra-values.yaml k8ssandra-<ENV_NO> https://veridium-repo.veridium-dev.com/repository/helm-releases/veridiumid-containers/3.6.0/vid-k8ssandra-0.5.9.tgz

In case you need to restart the Cassandra pods, use the following command:

CODE
kubectl rollout restart statefulset cassandra-<ENV_NO>-dc1-default-sts

5.2. Elasticsearch

The default settings for Elasticsearch are displayed below. If required, they can be modified by editing veridiumid-containers/elasticsearch-values.yaml.

elasticsearch.storageClassName
  The type of storage to be provisioned for storing Elasticsearch data. Recommended: <STORAGECLASS_PERSISTENCE>

elasticsearch.diskSize
  The size of the disks where Elasticsearch data will be stored. Recommended: 20Gi

elasticsearch.replicaCount
  Number of Elasticsearch nodes in the cluster. Recommended: 3

Next, execute the following:

CODE

# install the operator
helm upgrade --install -n <NAMESPACE> --username <NEXUS_USER> --password <NEXUS_PWD> -f veridiumid-containers/eck-operator-values.yaml eck-operator-<ENV_NO> https://veridium-repo.veridium-dev.com/repository/helm-releases/veridiumid-containers/3.6.0/eck-operator-2.1.0.tgz

# confirm operator is running: wait until status is "Running" and Ready is "1/1"
oc -n <NAMESPACE> get pod | grep eck-operator

# create the cluster
helm upgrade --install -n <NAMESPACE> --username <NEXUS_USER> --password <NEXUS_PWD> -f veridiumid-containers/elasticsearch-values.yaml elasticsearch-<ENV_NO> https://veridium-repo.veridium-dev.com/repository/helm-releases/veridiumid-containers/3.6.0/elasticsearch-0.2.1.tgz

# check elasticsearch
oc -n <NAMESPACE> get pod | grep elasticsearch

5.3. Zookeeper

The default settings for Zookeeper are displayed below. If required, they can be modified by editing the veridiumid-containers/zookeeper-values.yaml file.

persistence.storageClassName
  The type of storage to be provisioned for storing Zookeeper data. Recommended: <STORAGECLASS_PERSISTENCE>

persistence.volumeSize
  The size of the disks where Zookeeper data will be stored. Recommended: 1Gi

persistence.reclaimPolicy
  The reclaim policy of the disks where Zookeeper data will be stored. Recommended: Retain

replicas
  Number of Zookeeper nodes in the cluster. Recommended: 3

CODE
# install the operator
helm upgrade --install -n <NAMESPACE> --username <NEXUS_USER> --password <NEXUS_PWD> -f veridiumid-containers/zookeeper-operator-values.yaml zookeeper-operator-<ENV_NO> https://veridium-repo.veridium-dev.com/repository/helm-releases/veridiumid-containers/3.6.0/zookeeper-operator-0.2.15.tgz

# create the actual cluster
helm upgrade --install -n <NAMESPACE> --username <NEXUS_USER> --password <NEXUS_PWD> --timeout 30m -f veridiumid-containers/zookeeper-values.yaml zookeeper-<ENV_NO> https://veridium-repo.veridium-dev.com/repository/helm-releases/veridiumid-containers/3.6.0/zookeeper-0.2.15.tgz

# validate:
oc -n <NAMESPACE> get sts/zookeeper-<ENV_NO>

6. Configure the veridium environment by running the vid-installer chart

6.1 Set configuration variables

The installation can be configured by editing the veridiumid-containers/vid-installer-values.yaml file and setting values according to your environment and specific requirements.
Here is a summary of variables available:

imagePullSecrets.name
  Secret containing the credentials used for pulling images from the private Docker registry.

cassandraHost
  Comma-separated list of hostname(s) or IP(s) of Cassandra. This value is constructed by joining the following values from k8ssandra-deployment.yaml: "metadata.name-spec.cassandra.datacenters[].metadata.name-service"

zookeeperHost
  Comma-separated list of hostname(s) or IP(s) of Zookeeper. Should be: "<NAMESPACE>-zookeeper-client"

elasticsearchHost
  Comma-separated list of hostname(s) or IP(s) of Elasticsearch. Should be: "<NAMESPACE>-elasticsearch-es-http"

elasticsearchUser
  Name of the Secret where Elasticsearch keeps the username/password. Should be: "elasticsearch-<ENV_NO>-es-elastic-user"

cassandraDatacenter
  Name of the datacenter as defined in Cassandra.

domainName
  Domain where this installation will be accessible.

environmentName
  Name of this installation, used as the subdomain; the installation will expect ingress traffic at "environmentName.domainName".

sniEnabled
  If true, the services will be exposed under dedicated server names (e.g. admin-my-env.domain.com, ssp-my-env.domain.com). If false, the services will be exposed under a single name (e.g. my-env.domain.com) but on dedicated ports.

ssl.generateCert
  If true, a self-signed certificate will be generated; if false, the defaultCert and defaultKey will be used.

ssl.defaultCert
  Base64-encoded string of a PEM certificate. To generate this string, execute: base64 -w 0 <FULL_PATH_TO_PEM_CERTIFICATE>

ssl.defaultKey
  Base64-encoded string of a PEM key. To generate this string, execute: base64 -w 0 <FULL_PATH_TO_PEM_KEY>
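For ssl.generateCert=false, the two base64 values can be produced as shown below. The openssl invocation generates a throwaway self-signed pair purely for illustration (the subject CN is a placeholder); in production, point base64 at your real certificate and key files:

```shell
# Generate an illustrative self-signed certificate and key (testing only)
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -keyout example.key -out example.crt -subj "/CN=my-env.example.com"

# Produce the single-line base64 strings expected by the values file
base64 -w 0 example.crt > defaultCert.b64   # paste into ssl.defaultCert
base64 -w 0 example.key > defaultKey.b64    # paste into ssl.defaultKey
```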

6.2 Start the installer

CODE
helm install -n <NAMESPACE> --username <NEXUS_USER> --password <NEXUS_PWD> --timeout 60m -f veridiumid-containers/vid-installer-values.yaml vid-installer https://veridium-repo.veridium-dev.com/repository/helm-releases/veridiumid-containers/3.6.0/vid-installer-0.5.9.tgz

This will start the installation process. Please allow up to 60 minutes for it to complete.

To monitor the progress, you can use the following command to tail the logs in a separate terminal window:

CODE
oc logs -n <NAMESPACE> --follow $(oc get pod -n <NAMESPACE> --no-headers -o custom-columns=":metadata.name" | grep vid-installer | grep -v vid-installer-post)

7. Deploy webapps

7.1. Configuration of each subsystem

The veridiumid-containers/veridiumid-values.yaml file contains configuration properties for each subsystem of VeridiumID. Set the following properties as needed:

logrotate_conf.*.size
  The size in MB at which the application log will be rotated.

logrotate_conf.*.rotate
  How many rotated log files to keep.

*.replicaCount
  Number of pods for that particular subsystem; this can easily be changed later. If you don't need a particular subsystem, set its replicaCount to 0. (Default: 2)

vid-haproxy.service.nodePort
  The designated port for HAProxy to listen on. It must be within the 30000-32767 range, since the service is exposed as a NodePort.

vid-haproxy.service.additionalNodePorts
  For a "ports"-type installation, the designated port for each subsystem. Default values are:
    websecadmin: 32102
    ssp: 32103
    shibboleth-ext: 32104
    shibboleth-int: 32105

vid-shibboleth.sessionTimeout
  Session timeout, expressed in seconds. Default: 5

vid-maintenance.backup.zookeeper.enable
  Enable the Zookeeper backup using the VeridiumID script.

vid-maintenance.backup.zookeeper.keep
  How many Zookeeper backups to keep.

vid-maintenance.backup.zookeeper.schedule
  Cron schedule on which the Zookeeper backup will run.

vid-maintenance.backup.elasticsearch.enable
  Enable the Elasticsearch backup using the VeridiumID script.

vid-maintenance.backup.elasticsearch.keep
  How many Elasticsearch backups to keep.

vid-maintenance.backup.elasticsearch.schedule
  Cron schedule on which the Elasticsearch backup will run.

vid-maintenance.backup.backup.elasticsearch_backup_with_policy.enable
  Enable the Elasticsearch backup with a snapshot policy.

vid-maintenance.backup.backup.elasticsearch_backup_with_policy.schedule
  When to take Elasticsearch snapshots. For example, a cron schedule that triggers every day at noon: 0 0 12 * * ?

vid-maintenance.backup.backup.elasticsearch_backup_with_policy.expire_after
  How many days to keep Elasticsearch snapshots. Default: 30 days.

vid-maintenance.backup.backup.elasticsearch_backup_with_policy.min_count
  Minimum number of Elasticsearch snapshots to retain, regardless of age. Default: 5.

vid-maintenance.backup.backup.elasticsearch_backup_with_policy.max_count
  Maximum number of Elasticsearch snapshots to retain, regardless of age. Default: 50.

vid-maintenance.backup.cassandraHost
  Comma-separated list of hostname(s) or IP(s) of Cassandra. This value is constructed by joining the following values from k8ssandra-deployment.yaml: "metadata.name-spec.cassandra.datacenters[].metadata.name-service"

vid-maintenance.backup.zookeeperHost
  Comma-separated list of hostname(s) or IP(s) of Zookeeper. Should be: "<NAMESPACE>-zookeeper-client"

vid-maintenance.backup.elasticsearchHost
  Comma-separated list of hostname(s) or IP(s) of Elasticsearch. Should be: "<NAMESPACE>-elasticsearch-es-http"

vid-maintenance.backup.elasticsearchUser
  Name of the Secret where Elasticsearch keeps the username/password. Should be: "elasticsearch-<ENV_NO>-es-elastic-user"
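For orientation, the backup-related keys above would nest in veridiumid-containers/veridiumid-values.yaml roughly as follows. Both the exact nesting and the example values are assumptions for illustration; defer to the structure of the shipped file:

```yaml
vid-maintenance:
  backup:
    zookeeper:
      enable: true
      keep: 5
      schedule: "0 2 * * *"    # example: daily at 02:00
    elasticsearch:
      enable: true
      keep: 5
      schedule: "0 3 * * *"    # example: daily at 03:00
```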

7.2. Launching the web applications

CODE
helm upgrade --install -n <NAMESPACE> --username <NEXUS_USER> --password <NEXUS_PWD> -f veridiumid-containers/veridiumid-values.yaml veridiumid https://veridium-repo.veridium-dev.com/repository/helm-releases/veridiumid-containers/3.6.0/veridiumid-0.5.9.tgz

7.3. Enable Cassandra encryption (optional)

First, generate certificates for Cassandra by running the following commands:

CODE
maintenance_pod=$(oc -n <NAMESPACE> get pods -o custom-columns=POD:.metadata.name --no-headers --sort-by=.metadata.creationTimestamp | grep vid-maintenance | tail -1)

oc -n <NAMESPACE> exec $maintenance_pod -- bash -c "bash /opt/veridiumid/migration/bin/migration.sh -d /tmp/zk && mkdir -p /tmp/k8ss_tmp && cp /tmp/zk/config.json /tmp/k8ss_tmp/ && cd /tmp/k8ss_tmp && bash /scripts/generate-cassandra-keystore.sh && mv config.json /tmp/zk/ && bash /opt/veridiumid/migration/bin/migration.sh -u /tmp/zk"

Next, edit the veridiumid-containers/k8ssandra-values.yaml and set encryption.enabled to true. Next, update the K8ssandra chart by executing:

CODE
helm upgrade --install -n <NAMESPACE> --username <NEXUS_USER> --password <NEXUS_PWD> -f veridiumid-containers/k8ssandra-values.yaml k8ssandra-<ENV_NO> https://veridium-repo.veridium-dev.com/repository/helm-releases/veridiumid-containers/3.6.0/vid-k8ssandra-0.5.9.tgz

Next, force-restart the Cassandra pods and wait for them to become ready:

CODE
oc -n <NAMESPACE> delete pod cassandra-<ENV_NO>-dc1-default-sts-0
oc -n <NAMESPACE> delete pod cassandra-<ENV_NO>-dc1-default-sts-1
oc -n <NAMESPACE> delete pod cassandra-<ENV_NO>-dc1-default-sts-2
oc -n <NAMESPACE> rollout status statefulset cassandra-<ENV_NO>-dc1-default-sts --timeout=600s

Finally, restart Veridium pods:

CODE
oc -n <NAMESPACE> rollout restart deployment vid-adservice vid-fido vid-freeradius vid-opa vid-shibboleth vid-ssp vid-websec vid-websecadmin

8. Update HAProxy certificate

The installation uses a self-signed certificate for HAProxy. We recommend using a trusted certificate for production purposes.

Please use a PEM file that includes the private key, certificate and full CA chain.
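If the key, certificate, and CA chain currently live in separate files, they can be combined into the expected single PEM like this (the file names are illustrative):

```shell
# Concatenate private key, server certificate and CA chain into one PEM
cat server.key server.crt ca-chain.crt > server.pem
```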

In order to update the certificate please execute the following commands:

CODE
# updates the secret that contains the certificate
oc get secret -n <NAMESPACE> vid-haproxy-certs -o yaml | sed "s,$(oc get secret -n <NAMESPACE> vid-haproxy-certs -ojsonpath='{.data.server\.pem}'),$(base64 -w 0 <FULL_PATH_TO_PEM_CERTIFICATE>),g" | oc apply -f -
# restarts HAProxy in order to load the new certificate from the secret
oc rollout restart -n <NAMESPACE> daemonset/vid-haproxy 

9. Configure websecadmin

  • Please follow the configuration wizard that is presented when first launching WebsecAdmin. Additional instructions are available in the product documentation.

  • Once the configuration is complete, please go to Settings - Certificates - Truststores, click on Add Truststore and upload the <FULL_PATH_TO_PEM_CERTIFICATE> file.

  • To configure Veridium as identity provider, please go to Settings - Services - SSP, click on Configure Veridium as IDP in the right-side panel.

10. Test deployment

This will execute HTTP calls against all previously deployed services in order to quickly identify potential startup issues.

CODE
helm -n <NAMESPACE> test veridiumid

Uninstall procedure

CODE
#
# WARNING: THIS ACTION IS IRREVERSIBLE AND RESULTS IN LOSS OF DATA !

# delete Helm-managed resources
helm -n <NAMESPACE> ls --short | xargs -L1 helm -n <NAMESPACE> delete

# delete Helm-managed resources from 'default' namespace (skip this if the cert-manager was not deployed using veridiumid)
helm delete cert-manager

# delete secrets from 'default' namespace (skip this if cert-manager was not deployed using veridiumid)
oc delete secret vid-credentials-ecr -n default
oc delete secret cert-manager-webhook-ca

# delete the namespace
oc delete -f veridiumid-containers/namespace.yaml

# delete Custom Resource Definitions (skip this if CRDs have not been installed using veridiumid)
oc delete -f veridiumid-containers/veridiumid-crds/eck-operator-crds.yaml
oc delete -f veridiumid-containers/veridiumid-crds/k8ssandra-operator.yaml
oc delete -f veridiumid-containers/veridiumid-crds/zookeeper.yaml

Known issues

  • On some occasions an error similar to the following could be encountered:
    "Internal error occurred: failed calling webhook "http://vcassandradatacenter.kb.io ": Post "https://k8ssandra-operator-<ENV_NO>-cass-operator-webhook-service.<NAMESPACE>.svc:443/validate-cassandra-datastax-com-v1beta1-cassandradatacenter?timeout=10s": Address is not allowed"
    To fix it, run the following command then retry:

CODE
oc delete ValidatingWebhookConfiguration k8ssandra-operator-<ENV_NO>-cass-operator-validating-webhook-configuration k8ssandra-operator-<ENV_NO>-validating-webhook-configuration

  • If a k8ssandra resource or CRD hangs during deletion because of finalizers, remove them:

CODE
oc get crd | grep k8ssandra
oc patch crd/k8ssandraclusters.k8ssandra.io -p '{"metadata":{"finalizers":[]}}' --type=merge
  • Elasticsearch crashes with the following error: "Unable to access 'path.repo' (/mnt/backups/elasticsearch-backup)"

    Make sure the folder exists and is accessible for read/write; see the backup StorageClass in 1.2 (d) and the backup PVC in 4.3.

Maintenance operations

Changing HAProxy configuration

HAProxy configuration is stored in a configmap named vid-haproxy-config in the deployment namespace.

CODE
# edit the configuration
oc edit configmap -n <NAMESPACE> vid-haproxy-config
# restart HAProxy
oc rollout restart -n <NAMESPACE> daemonset/vid-haproxy 

Restarting a malfunctioning pod

Most pods have liveness probes in place that cause automatic restart in case of malfunction.
However, if a manual restart is necessary, please execute the following command:

CODE
oc rollout restart -n <NAMESPACE> deployment/<NAME_OF_DEPLOYMENT> 