Installation instructions on OpenShift
VeridiumID Containers
1. Prerequisites
1.1. Client machine
Please install the following software on the machine that will be used to deploy VeridiumID:
Helm 3. Please check your installed version by running
helm version
1.2. Cluster
To ensure sufficient capacity and performance, the recommended minimum configuration for the cluster is three worker nodes with a combined total of at least 24 CPUs and 48 GB of RAM. Each worker node needs to provide a minimum of 100 GB of local storage, in addition to an external volume capacity (EBS or similar) of 500 GB that is necessary for data persistence. Please note that the resources allocated to the master nodes should be separate from those mentioned above.
About operators: to deploy Zookeeper, Elasticsearch and Cassandra, a dedicated operator will be installed for each of them, and its watchNamespace will be limited to the deployment namespace only. Please make sure that no external operator is already watching the namespace where the application will be installed.
Please create a StorageClass with “reclaimPolicy: Retain”, ReadWriteOnce (RWO) access mode and encryption (optional); it will be used for the Cassandra, Elasticsearch and Zookeeper deployments. Choose a name for this StorageClass; it is referenced below as “<STORAGECLASSPERSISTENCE>”.
Please create another StorageClass with “reclaimPolicy: Retain”, ReadWriteMany (RWX) access mode and encryption (optional); it will be used for the Zookeeper and Elasticsearch backups.
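A minimal sketch of the RWO StorageClass, assuming an OpenShift cluster on AWS with the EBS CSI driver (the provisioner and parameters vary by platform; the RWX class is created the same way with an RWX-capable provisioner such as the EFS CSI driver):
oc apply -f - <<'EOF'
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: <STORAGECLASSPERSISTENCE>
provisioner: ebs.csi.aws.com          # assumption: AWS EBS CSI driver; replace for your platform
reclaimPolicy: Retain
volumeBindingMode: WaitForFirstConsumer
parameters:
  encrypted: "true"                   # optional encryption
EOF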
For the Zookeeper and Elasticsearch backups, please create a PV named “efs-pv-veridium-backups-<NAMESPACE>” using the RWX StorageClass created above, then create a PVC named “efs-pvc-veridium-backups-<NAMESPACE>” with the same StorageClass and ReadWriteMany (RWX) access mode in the namespace where the deployment will be performed. Inside this volume you need to create two folders, “elasticsearch-backup” and “zookeeper-backup”, with full permissions (chmod); see the sketch below.
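An illustrative manifest for the backup PV/PVC pair, assuming an EFS-backed volume; <EFS_ID> and <RWX_STORAGE_CLASS> are placeholders for your EFS file system id and the RWX StorageClass created above:
oc apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolume
metadata:
  name: efs-pv-veridium-backups-<NAMESPACE>
spec:
  capacity:
    storage: 10Gi                     # nominal; EFS does not enforce capacity
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  storageClassName: <RWX_STORAGE_CLASS>
  csi:
    driver: efs.csi.aws.com           # assumption: AWS EFS CSI driver
    volumeHandle: <EFS_ID>
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: efs-pvc-veridium-backups-<NAMESPACE>
  namespace: <NAMESPACE>
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: <RWX_STORAGE_CLASS>
  resources:
    requests:
      storage: 10Gi
EOF
# then create the two folders with full permissions from any pod that mounts the PVC, e.g.:
# mkdir -p /mnt/backups/elasticsearch-backup /mnt/backups/zookeeper-backup && chmod -R 777 /mnt/backups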
2. Download configuration files
Please download veridiumid-containers-3.4.0.zip then unpack it by executing:
wget --user <NEXUS_USER> --password <NEXUS_PWD> https://veridium-repo.veridium-dev.com/repository/helm-releases/veridiumid-containers/3.4.0/veridiumid-containers-3.4.0.zip
unzip veridiumid-containers-3.4.0.zip
Next, customize the namespace, the Cassandra datacenter name, the environment number and the storage class in the configuration files:
sed -i 's|<NAMESPACE>|YOUR_NAMESPACE|g' veridiumid-containers/*.yaml
# example: sed -i 's|<NAMESPACE>|dev1|g' veridiumid-containers/*.yaml
sed -i 's|<DATACENTER_NAME>|YOUR_DATACENTER_NAME|g' veridiumid-containers/*.yaml
# example: sed -i 's|<DATACENTER_NAME>|dc1|g' veridiumid-containers/*.yaml
sed -i 's|<ENV_NO>|YOUR_ENV_NUMBER|g' veridiumid-containers/*.yaml
# example: sed -i 's|<ENV_NO>|1|g' veridiumid-containers/*.yaml
sed -i 's|<STORAGECLASSPERSISTENCE>|YOUR_STORAGE_CLASS_ENCRYPTED|g' veridiumid-containers/*.yaml
# example: sed -i 's|<STORAGECLASSPERSISTENCE>|storage-encrypted|g' veridiumid-containers/*.yaml
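To confirm that no placeholders were left unreplaced:
grep -E '<NAMESPACE>|<DATACENTER_NAME>|<ENV_NO>|<STORAGECLASSPERSISTENCE>' veridiumid-containers/*.yaml || echo "all placeholders replaced"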
3. Before installation
3.1. Create a Secret with name “vid-credentials-ecr” with Username/Password in order to have access to your Private Docker Registry to pull images
In the “default” namespace you need to create a “Secret” with the exact name “vid-credentials-ecr”.
oc create secret docker-registry vid-credentials-ecr --docker-server=<DOCKER_REGISTRY_SERVER> --docker-username=<DOCKER_USER> --docker-password=<DOCKER_PASSWORD> --docker-email=<DOCKER_EMAIL>
# example: oc create secret docker-registry vid-credentials-ecr --docker-server=registryhub.myregistry.com --docker-username=dockeruser --docker-password=dockerpass --docker-email=docker@myenv.com
Replace our ECR registry (018397616607.dkr.ecr.eu-central-1.amazonaws.com) with your registry in all files from the veridiumid-containers folder.
sed -i 's|018397616607.dkr.ecr.eu-central-1.amazonaws.com|YOUR_REGISTRY_URL|g' veridiumid-containers/*
3.2. Install Custom Resource Definitions for ECK operator
oc create -f veridiumid-containers/veridiumid-crds/eck-operator-crds.yaml
# validate:
oc get crd | grep elastic
# output will be:
agents.agent.k8s.elastic.co XXXX-XX-XXTXX:XX:XXZ
apmservers.apm.k8s.elastic.co XXXX-XX-XXTXX:XX:XXZ
beats.beat.k8s.elastic.co XXXX-XX-XXTXX:XX:XXZ
elasticmapsservers.maps.k8s.elastic.co XXXX-XX-XXTXX:XX:XXZ
elasticsearchautoscalers.autoscaling.k8s.elastic.co XXXX-XX-XXTXX:XX:XXZ
elasticsearches.elasticsearch.k8s.elastic.co XXXX-XX-XXTXX:XX:XXZ
enterprisesearches.enterprisesearch.k8s.elastic.co XXXX-XX-XXTXX:XX:XXZ
kibanas.kibana.k8s.elastic.co XXXX-XX-XXTXX:XX:XXZ
stackconfigpolicies.stackconfigpolicy.k8s.elastic.co XXXX-XX-XXTXX:XX:XXZ
3.3. Install Custom Resource Definitions for the Zookeeper Operator
oc create -f veridiumid-containers/veridiumid-crds/zookeeper.yaml
# validate:
oc get crd | grep zookeeper
# output will be:
zookeeperclusters.zookeeper.pravega.io XXXX-XX-XXTXX:XX:XXZ
3.4. Install Custom Resource Definitions for K8ssandra Operator
oc create -f veridiumid-containers/veridiumid-crds/k8ssandra-operator.yaml
# validate:
oc get crd | grep k8ssandra
# output will be:
cassandrabackups.medusa.k8ssandra.io XXXX-XX-XXTXX:XX:XXZ
cassandrarestores.medusa.k8ssandra.io XXXX-XX-XXTXX:XX:XXZ
cassandratasks.control.k8ssandra.io XXXX-XX-XXTXX:XX:XXZ
clientconfigs.config.k8ssandra.io XXXX-XX-XXTXX:XX:XXZ
k8ssandraclusters.k8ssandra.io XXXX-XX-XXTXX:XX:XXZ
medusabackupjobs.medusa.k8ssandra.io XXXX-XX-XXTXX:XX:XXZ
medusabackups.medusa.k8ssandra.io XXXX-XX-XXTXX:XX:XXZ
medusabackupschedules.medusa.k8ssandra.io XXXX-XX-XXTXX:XX:XXZ
medusarestorejobs.medusa.k8ssandra.io XXXX-XX-XXTXX:XX:XXZ
medusatasks.medusa.k8ssandra.io XXXX-XX-XXTXX:XX:XXZ
reapers.reaper.k8ssandra.io XXXX-XX-XXTXX:XX:XXZ
replicatedsecrets.replication.k8ssandra.io XXXX-XX-XXTXX:XX:XXZ
stargates.stargate.k8ssandra.io XXXX-XX-XXTXX:XX:XXZ
3.5. Install CertManager - this is a requirement of k8ssandra-operator
If you already have CertManager installed in your Cluster, you can skip this step.
helm upgrade --install --username <NEXUS_USER> --password <NEXUS_PWD> -f veridiumid-containers/cert-manager-values.yaml cert-manager https://veridium-repo.veridium-dev.com/repository/helm-releases/veridiumid-containers/3.4.0/cert-manager-v1.11.0.tgz --set installCRDs=true
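To verify that cert-manager started correctly (the command above installs it into the current namespace, so add -n accordingly if your context differs):
# wait until the cert-manager, cainjector and webhook pods are "Running" and Ready "1/1"
oc get pods | grep cert-manager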
4. Namespace where you want to install VeridiumID
4.1. Create a namespace
# create the namespace
oc apply -f veridiumid-containers/namespace.yaml
4.2. Create a Secret with name “vid-credentials-ecr” with Username/Password in order to have access to your Private Docker Registry to pull images
oc create secret docker-registry vid-credentials-ecr --docker-server=<DOCKER_REGISTRY_SERVER> --docker-username=<DOCKER_USER> --docker-password=<DOCKER_PASSWORD> --docker-email=<DOCKER_EMAIL> -n <NAMESPACE>
5. Deploy persistence charts
5.1. Elasticsearch
The default settings for Elasticsearch are displayed below. If required, they can be modified by editing the veridiumid-containers/elasticsearch-values.yaml file.
Property | Description | Recommended value |
---|---|---|
elasticsearch.storageClassName | The type of storage to be provisioned for storing Elasticsearch data | <STORAGECLASSPERSISTENCE> |
elasticsearch.diskSize | The size of the disks where Elasticsearch data will be stored | 20Gi |
elasticsearch.replicaCount | Number of Elasticsearch nodes in cluster | 3 |
Next, execute the following:
# install the operator
helm upgrade --install -n <NAMESPACE> --username <NEXUS_USER> --password <NEXUS_PWD> -f veridiumid-containers/eck-operator-values.yaml eck-operator-<ENV_NO> https://veridium-repo.veridium-dev.com/repository/helm-releases/veridiumid-containers/3.4.0/eck-operator-2.1.0.tgz
# confirm operator is running: wait until status is "Running" and Ready is "1/1"
oc -n <NAMESPACE> get pod | grep eck-operator
Before deploying Elasticsearch, check that the backup PVC “efs-pvc-veridium-backups-<NAMESPACE>” exists. More details about it can be found in section 1.2 (Cluster).
# create the cluster
helm upgrade --install -n <NAMESPACE> --username <NEXUS_USER> --password <NEXUS_PWD> -f veridiumid-containers/elasticsearch-values.yaml elasticsearch-<ENV_NO> https://veridium-repo.veridium-dev.com/repository/helm-releases/veridiumid-containers/3.4.0/elasticsearch-0.1.0.tgz
# patch the backup volume
oc patch statefulset elasticsearch-<ENV_NO>-es-default -p '{"spec": {"template": {"spec": {"containers": [{"name": "elasticsearch","volumeMounts": [{"name": "elasticsearch-backup","mountPath": "/mnt/backups"}]}]}}}}' -n <NAMESPACE>
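Optionally, once the pods are ready, you can sanity-check the cluster. A sketch using the elastic user secret generated by ECK (the pod name is derived from the statefulset patched above; assumes curl is available in the Elasticsearch image):
ES_PASSWORD=$(oc get secret -n <NAMESPACE> elasticsearch-<ENV_NO>-es-elastic-user -o go-template='{{.data.elastic | base64decode}}')
oc exec -n <NAMESPACE> elasticsearch-<ENV_NO>-es-default-0 -- curl -sk -u "elastic:${ES_PASSWORD}" "https://localhost:9200/_cluster/health?pretty"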
5.2. Cassandra
Edit the veridiumid-containers/k8ssandra-deployment.yaml file and set the following values:
Property | Description | Recommended value |
---|---|---|
spec.medusa.storageProperties.podStorage.storageClassName | The type of storage to be provisioned for storing Medusa data | <STORAGECLASSPERSISTENCE> |
spec.medusa.storageProperties.podStorage.size | The size of the disks where Medusa data will be stored | 100Gi |
spec.medusa.storageProperties.maxBackupCount | Maximum number of backups to keep (used by the purge process). | 2 |
spec.medusa.storageProperties.maxBackupAge | Maximum backup age that the purge process should observe. | 0 |
spec.cassandra.clusterName | Cassandra ClusterName | cassandra-<ENV_NO> |
spec.cassandra.datacenters[].storageConfig.cassandraDataVolumeClaimSpec.storageClassName | The type of storage to be provisioned for storing Cassandra data | <STORAGECLASSPERSISTENCE> |
spec.cassandra.datacenters[].storageConfig.cassandraDataVolumeClaimSpec.resources.requests.storage | The size of the disks where Cassandra data will be stored | 40Gi |
spec.cassandra.datacenters[].size | Number of Cassandra nodes in cluster | 3 |
spec.cassandra.datacenters[].metadata.name | Name of the datacenter | dc1 |
# install the operator
helm upgrade --install -n <NAMESPACE> --username <NEXUS_USER> --password <NEXUS_PWD> -f veridiumid-containers/k8ssandra-operator-values.yaml k8ssandra-operator-<ENV_NO> https://veridium-repo.veridium-dev.com/repository/helm-releases/veridiumid-containers/3.4.0/k8ssandra-operator-0.39.2.tgz
# confirm operator is running: wait until status is "Running" and Ready is "1/1"
oc -n <NAMESPACE> get pod | grep k8ssandra-operator
# create the actual cluster
oc apply -n <NAMESPACE> -f veridiumid-containers/k8ssandra-deployment.yaml
# patch statefulset cassandra
oc patch statefulset cassandra-<ENV_NO>-<DATACENTER_NAME>-default-sts -p '{"spec": {"template": {"spec":{"initContainers":[{"name":"per-node-config","image":"018397616607.dkr.ecr.eu-central-1.amazonaws.com/dependencies/mikefarah/yq:4.34.1"}]}}}}' -n <NAMESPACE>
# if you use your own registry, please replace "018397616607.dkr.ecr.eu-central-1.amazonaws.com" with your url.
# validate: please wait until the 'Ready' column displays '3/3'
oc -n <NAMESPACE> get sts/cassandra-<ENV_NO>-<DATACENTER_NAME>-default-sts
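The operators also report readiness on the custom resources they manage, which can help when the statefulset is slow to become ready:
oc get k8ssandracluster,cassandradatacenter -n <NAMESPACE>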
5.2.1 (OPTIONAL) Set up a schedule for Cassandra backup
Settings for backing up Cassandra are available in the veridiumid-containers/k8ssandra-medusa-backup.yaml file.
If you need to customize the backup schedule for your requirements, you can modify the spec.cronSchedule field of the MedusaBackupSchedule object. The default backup schedule is daily at midnight.
Similarly, you can change the schedule for deleting old backups by modifying the spec.schedule field of the CronJob object. The default schedule for deletion is daily at 1AM. Next, execute the following command:
oc apply -n <NAMESPACE> -f veridiumid-containers/k8ssandra-medusa-backup.yaml
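The cronSchedule field uses standard cron syntax (e.g. "0 0 * * *" is daily at midnight). If you need a one-off backup outside the schedule, here is a sketch using the MedusaBackupJob CRD installed in step 3.4 (the job name is arbitrary):
oc apply -n <NAMESPACE> -f - <<'EOF'
apiVersion: medusa.k8ssandra.io/v1alpha1
kind: MedusaBackupJob
metadata:
  name: manual-backup-001
spec:
  cassandraDatacenter: <DATACENTER_NAME>
EOF
# follow the backup progress:
oc get medusabackupjob -n <NAMESPACE>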
5.3. Zookeeper
The default settings for Zookeeper are displayed below. If required, they can be modified by editing the veridiumid-containers/zookeeper-values.yaml file.
Property | Description | Recommended value |
---|---|---|
persistence.storageClassName | The type of storage to be provisioned for storing Zookeeper data | <STORAGECLASSPERSISTENCE> |
persistence.volumeSize | The size of the disks where Zookeeper data will be stored | 1Gi |
persistence.reclaimPolicy | The reclaim Policy of the disks where Zookeeper data will be stored | Retain |
replicas | Number of Zookeeper nodes in cluster | 3 |
If the deployment is on Kubernetes, please modify the securityContext by editing the veridiumid-containers/zookeeper-values.yaml file:
securityContext:
runAsNonRoot: false
...truncate...
allowPrivilegeEscalation: true
...truncate...
# install the operator
helm upgrade --install -n <NAMESPACE> --username <NEXUS_USER> --password <NEXUS_PWD> -f veridiumid-containers/zookeeper-operator-values.yaml zookeeper-operator-<ENV_NO> https://veridium-repo.veridium-dev.com/repository/helm-releases/veridiumid-containers/3.4.0/zookeeper-operator-0.2.14.tgz
# create the actual cluster
helm upgrade --install -n <NAMESPACE> --username <NEXUS_USER> --password <NEXUS_PWD> --timeout 30m -f veridiumid-containers/zookeeper-values.yaml zookeeper-<ENV_NO> https://veridium-repo.veridium-dev.com/repository/helm-releases/veridiumid-containers/3.4.0/zookeeper-0.2.14-1.tgz
# validate:
oc -n <NAMESPACE> get sts/zookeeper-<ENV_NO>
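The Pravega operator also reports health on the ZookeeperCluster resource; the ready replicas count should match the replicas count:
oc get zookeepercluster -n <NAMESPACE>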
6. Configure the veridium environment by running the vid-installer chart
6.1 Set configuration variables
The installation can be configured by editing the veridiumid-containers/vid-installer-values.yaml file and setting values according to your environment and specific requirements.
Here is a summary of variables available:
Variable | Description |
---|---|
 | Secret containing credentials used for pulling images from the private Docker registry |
cassandraHost | Comma-separated list of hostname(s) or IP(s) of Cassandra.<br>This value is constructed by appending the following values from k8ssandra-deployment.yaml:<br> “ |
zookeeperHost | Comma-separated list of hostname(s) or IP(s) of Zookeeper. Should be: “zookeeper-<ENV_NO>-client” |
elasticsearchHost | Comma-separated list of hostname(s) or IP(s) of Elasticsearch. Should be: “elasticsearch-<ENV_NO>-es-http” |
elasticsearchUser | Secret name where Elasticsearch keeps the username/password. Should be: “elasticsearch-<ENV_NO>-es-elastic-user” |
cassandraDatacenter | Name of datacenter as defined in Cassandra |
domainName | Domain where this installation will be accessible |
environmentName | Name of this installation. This will be the subdomain. So this installation will expect ingress traffic at “environmentName.domainName” |
sniEnabled | If true, the services will be exposed under dedicated server names (e.g. admin-my-env.domain.com, ssp-my-env.domain.com).<br>If false, the services will be exposed under a single name (e.g. my-env.domain.com) but on dedicated ports |
ssl.generateCert | If true, a self-signed certificate will be generated<br>if false, the defaultCert and defaultKey will be used |
ssl.defaultCert | base64 encoded string of a PEM certificate.<br>To generate this string execute: |
ssl.defaultKey | base64 encoded string of a PEM key.<br>To generate this string execute: |
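To produce the base64-encoded strings for ssl.defaultCert and ssl.defaultKey, a typical approach, mirroring the base64 usage in section 8 (the file paths are placeholders):
base64 -w 0 <FULL_PATH_TO_PEM_CERTIFICATE>   # value for ssl.defaultCert
base64 -w 0 <FULL_PATH_TO_PEM_KEY>           # value for ssl.defaultKey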
6.2 Start the installer
helm install -n <NAMESPACE> --username <NEXUS_USER> --password <NEXUS_PWD> --timeout 60m -f veridiumid-containers/vid-installer-values.yaml vid-installer https://veridium-repo.veridium-dev.com/repository/helm-releases/veridiumid-containers/3.4.0/vid-installer-0.2.10.tgz
This will start the installation process.
Please allow up to 60 minutes to complete.
To monitor the progress, you can use the following command to tail the logs in a separate terminal window:
oc logs -n <NAMESPACE> --follow $(oc get pod -n <NAMESPACE> --no-headers -o custom-columns=":metadata.name" | grep vid-installer | grep -v vid-installer-post)
7. Deploy webapps
7.1. Configuration of each subsystem
The veridiumid-containers/veridiumid-values.yaml file contains configuration properties for each subsystem of VeridiumID. Set the following properties as needed:
Variable | Description |
---|---|
logrotate_conf.*.size | The size in MB at which the app log will be rotated |
logrotate_conf.*.rotate | How many rotated log copies to keep |
*.replicaCount | Number of pods for that particular subsystem. This can be easily changed at a later time.<br>If you don’t need a particular subsystem, set the replicaCount to 0. (Default value: 2) |
vid-haproxy.service.nodePort | The designated port for HAProxy to listen on. This will need to be within the 30000-32767 range, considering it is being exposed as NodePort. |
vid-haproxy.service.additionalNodePorts | For “ports”-type installation, the designated port for each subsystem. Default values are: <br><br>websecadmin: 32102<br>ssp: 32103<br>shibboleth-ext: 32104<br>shibboleth-int: 32105 |
vid-shibboleth.sessionTimeout | Expressed in seconds. Default: 5 |
vid-maintenance.backup.zookeeper.enable | Enable the Zookeeper backup using VeridiumID script |
vid-maintenance.backup.zookeeper.keep | How many backups to keep |
vid-maintenance.backup.zookeeper.schedule | When the cron will run to perform backup for zookeeper |
vid-maintenance.backup.elasticsearch.enable | Enable the ElasticSearch backup using VeridiumID script |
vid-maintenance.backup.elasticsearch.keep | How many backups of ElasticSearch to keep |
vid-maintenance.backup.elasticsearch.schedule | When the cron will run to perform backup for Elasticsearch |
vid-maintenance.backup.backup.elasticsearch_backup_with_policy.enable | Enable ElasticSearch backup with Policy |
vid-maintenance.backup.backup.elasticsearch_backup_with_policy.schedule | When to take Elasticsearch snapshots. For example: cron schedule that triggers every day at noon: 0 0 12 * * ? |
vid-maintenance.backup.backup.elasticsearch_backup_with_policy.expire_after | How many days to keep the Elasticsearch snapshots. By default it is 30 days. |
vid-maintenance.backup.backup.elasticsearch_backup_with_policy.min_count | Minimum number of Elasticsearch snapshots to retain regardless of age. By default it is 5. |
vid-maintenance.backup.backup.elasticsearch_backup_with_policy.max_count | Maximum number of Elasticsearch snapshots to retain regardless of age. By default it is 50. |
vid-maintenance.backup.cassandraHost | Comma-separated list of hostname(s) or IP(s) of Cassandra.<br>This value is constructed by appending the following values from k8ssandra-deployment.yaml:<br> “ |
vid-maintenance.backup.zookeeperHost | Comma-separated list of hostname(s) or IP(s) of Zookeeper. Should be: “zookeeper-<ENV_NO>-client” |
vid-maintenance.backup.elasticsearchHost | Comma-separated list of hostname(s) or IP(s) of Elasticsearch. Should be: “elasticsearch-<ENV_NO>-es-http” |
vid-maintenance.backup.elasticsearchUser | Secret name where Elasticsearch keeps the username/password. Should be: “elasticsearch-<ENV_NO>-es-elastic-user” |
7.2. Launching the web applications
helm upgrade --install -n <NAMESPACE> --username <NEXUS_USER> --password <NEXUS_PWD> -f veridiumid-containers/veridiumid-values.yaml veridiumid https://veridium-repo.veridium-dev.com/repository/helm-releases/veridiumid-containers/3.4.0/veridiumid-0.2.10.tgz
8. Update HAProxy certificate
The installation uses a self-signed certificate for HAProxy. We recommend using a trusted certificate for production purposes.
Please use a PEM file that includes the private key, certificate and full CA chain.
In order to update the certificate please execute the following commands:
# updates the secret that contains the certificate
oc get secret -n <NAMESPACE> vid-haproxy-certs -o yaml | sed "s,$(oc get secret -n <NAMESPACE> vid-haproxy-certs -ojsonpath='{.data.server\.pem}'),$(base64 -w 0 <FULL_PATH_TO_PEM_CERTIFICATE>),g" | oc apply -f -
# restarts HAProxy in order to load the new certificate from the secret
oc rollout restart -n <NAMESPACE> daemonset/vid-haproxy
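To confirm the new certificate is being served, a sketch; replace the host with your “environmentName.domainName” and <NODE_PORT> with the HAProxy NodePort from section 7.1:
openssl s_client -connect <environmentName>.<domainName>:<NODE_PORT> -servername <environmentName>.<domainName> </dev/null 2>/dev/null | openssl x509 -noout -subject -dates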
9. Configure websecadmin
Please follow the configuration wizard that is presented when first launching WebsecAdmin. Additional instructions can be found in the VeridiumID documentation.
Once the configuration is complete, please go to Settings - Certificates - Truststores, click on Add Truststore and upload the <FULL_PATH_TO_PEM_CERTIFICATE> file.
To configure Veridium as identity provider, please go to Settings - Services - SSP and click on Configure Veridium as IDP in the right-side panel.
10. Test deployment
This will execute HTTP calls to all services previously deployed in order to quickly identify potential startup issues.
helm -n <NAMESPACE> test veridiumid
Uninstall procedure
# delete the k8ssandra cluster
# WARNING: THIS ACTION IS IRREVERSIBLE AND RESULTS IN LOSS OF DATA !
oc delete -f veridiumid-containers/k8ssandra-deployment.yaml -n <NAMESPACE>
# delete Helm-managed resources from <NAMESPACE> namespace
# WARNING: THIS ACTION IS IRREVERSIBLE AND RESULTS IN LOSS OF DATA !
helm delete -n <NAMESPACE> veridiumid vid-installer eck-operator-<ENV_NO> elasticsearch-<ENV_NO> k8ssandra-operator-<ENV_NO> zookeeper-<ENV_NO> zookeeper-operator-<ENV_NO>
# delete Helm-managed resources from 'default' namespace (skip this if the cert-manager was not deployed using veridiumid)
helm delete cert-manager
# delete secrets from 'default' namespace (skip this if cert-manager was not deployed using veridiumid)
oc delete secret vid-credentials-ecr -n default
oc delete secret cert-manager-webhook-ca
# delete the namespace
oc delete -f veridiumid-containers/namespace.yaml
# delete Custom Resource Definitions (skip this if CRDs have not been installed using veridiumid)
oc delete -f veridiumid-containers/veridiumid-crds/eck-operator-crds.yaml
oc delete -f veridiumid-containers/veridiumid-crds/k8ssandra-operator.yaml
oc delete -f veridiumid-containers/veridiumid-crds/zookeeper.yaml
Known issues
On some occasions an error similar to the following could be encountered:
“Internal error occurred: failed calling webhook “vcassandradatacenter.kb.io”: Post “https://k8ssandra-operator-<ENV_NO>-cass-operator-webhook-service.<NAMESPACE>.svc:443/validate-cassandra-datastax-com-v1beta1-cassandradatacenter?timeout=10s”: Address is not allowed”
To fix it, run the following command then retry:
oc delete ValidatingWebhookConfiguration k8ssandra-operator-<ENV_NO>-cass-operator-validating-webhook-configuration k8ssandra-operator-<ENV_NO>-validating-webhook-configuration
If you want to move a Cassandra/Zookeeper persistence pod from one node to another, its volume remains in Pending status and some manual actions are needed to move the pod to the new node.
Example: in this scenario we have 3 replicas for Cassandra and 3 replicas for Zookeeper. Each pod is running on a different node and we want to replace one node with another. After putting the node into maintenance and adding the other node to the cluster, we need to run some manual commands to move replica 3 to another node:
$ oc get pvc -n <NAMESPACE>
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
data-<NAMESPACE>-zookeeper-0 Bound pvc-fd5f658f-6fc3-421d-97b9-b953e406ea51 20Gi RWO gp2 41h
data-<NAMESPACE>-zookeeper-1 Bound pvc-ea9a183b-a980-4ad8-93fd-6da39db88bff 20Gi RWO gp2 41h
data-<NAMESPACE>-zookeeper-2 Bound pvc-60aa744e-9ac3-4c15-9415-bfc4e5b7fdda 20Gi RWO gp2 41h
medusa-backups-cassandra-<ENV_NO>-dc1-default-sts-0 Bound pvc-a3ccad5e-6684-4000-9eaa-e5797b80025d 20Gi RWO gp2 6d17h
medusa-backups-cassandra-<ENV_NO>-dc1-default-sts-1 Bound pvc-8d414d5a-6f33-44ee-8f93-1837de74ab1f 20Gi RWO gp2 6d17h
medusa-backups-cassandra-<ENV_NO>-dc1-default-sts-2 Bound pvc-f15a113b-39a5-4808-ad5c-8ec9b80923c2 20Gi RWO gp2 41h
server-data-cassandra-<ENV_NO>-dc1-default-sts-0 Bound pvc-e09dd951-5d80-4351-9d6c-2170b5d4d243 20Gi RWO gp2 6d17h
server-data-cassandra-<ENV_NO>-dc1-default-sts-1 Bound pvc-c4963fd8-bd2a-4e9e-839d-6a05dddb08a6 20Gi RWO gp2 6d17h
server-data-cassandra-<ENV_NO>-dc1-default-sts-2 Bound pvc-d118e0e1-5e0f-40d3-a561-147fa0d2a90f 20Gi RWO gp2 41h
$ oc delete pvc data-<NAMESPACE>-zookeeper-2 -n <NAMESPACE>
$ oc delete pv pvc-60aa744e-9ac3-4c15-9415-bfc4e5b7fdda
$ oc delete pvc medusa-backups-cassandra-<ENV_NO>-dc1-default-sts-2 -n <NAMESPACE>
$ oc delete pv pvc-f15a113b-39a5-4808-ad5c-8ec9b80923c2
$ oc delete pvc server-data-cassandra-<ENV_NO>-dc1-default-sts-2 -n <NAMESPACE>
$ oc delete pv pvc-d118e0e1-5e0f-40d3-a561-147fa0d2a90f
Stuck CRD deletion (for example when running: oc get crd -oname | grep k8ssandra.io | xargs oc delete). To unblock it, identify the stuck CRD and clear its finalizers:
oc get crd | grep k8ssandra
oc patch crd/k8ssandraclusters.k8ssandra.io -p '{"metadata":{"finalizers":[]}}' --type=merge
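If several CRDs are stuck, the same finalizer patch can be applied to each of them:
for crd in $(oc get crd -oname | grep k8ssandra.io); do
  oc patch "$crd" -p '{"metadata":{"finalizers":[]}}' --type=merge
done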
Maintenance operations
Changing HAProxy configuration
HAProxy configuration is stored in a configmap named vid-haproxy-config in the deployment namespace.
# edit the configuration
oc edit configmap -n <NAMESPACE> vid-haproxy-config
# restart HAProxy
oc rollout restart -n <NAMESPACE> daemonset/vid-haproxy
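To confirm the restart has completed on all nodes:
oc rollout status -n <NAMESPACE> daemonset/vid-haproxy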
Restarting a malfunctioning pod
Most pods have liveness probes in place that cause automatic restart in case of malfunction.
However, if a manual restart is necessary, please execute the following command:
oc rollout restart -n <NAMESPACE> deployment/<NAME_OF_DEPLOYMENT>
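To list the deployment names available for restart:
oc get deployments -n <NAMESPACE>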