
[SIX] Migrate data from on-prem to OpenShift

1. Perform a new installation in OpenShift.

2. Configure the environment.

https://confluence.six-group.net/display/EALOC/New+environment+configuration

3. Zookeeper settings (use Websecadmin / Settings / Advanced); a sketch of one way to read these values on the on-prem host follows the list below:

  • config.json: copy the values of appNameIdentifier & certStore to the new installation.

  • ADService.json: copy the values of certificate & certificatePass to the new installation.

  • dmz.json: copy the values of cert & pwd to the new installation.

  • elasticsearch/lifecycle-policies.json & elasticsearch/component-templates.json: copy the entire content to the new installation.

  • freeradiusconfig.json: copy the values of tls_certificate_content & tls_private_key_content to the new installation.

  • ssp.json: copy the values of cert & pwd to the new installation.

  • shibboleth/veridium-integration.json: copy the values of certificate & password to the new installation.
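
For reference, here is a minimal sketch of reading these values on the on-prem host before pasting them into the new installation. It assumes the configuration is first exported from Zookeeper with the bundled migration.sh script (as in step 7), that jq is available, and that the keys sit at the top level of each file; the export directory /tmp/zk-export is an arbitrary choice.

CODE
# Export the on-prem Zookeeper configuration to a working directory
bash /opt/veridiumid/migration/bin/migration.sh -d /tmp/zk-export
cd /tmp/zk-export
# Print the values to copy (key paths are illustrative and may be nested differently)
jq '.appNameIdentifier, .certStore' config.json
jq '.certificate, .certificatePass' ADService.json
jq '.cert, .pwd' dmz.json
jq '.tls_certificate_content, .tls_private_key_content' freeradiusconfig.json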

4. Renew Certificates

  1. Certificates / Service Credentials / Others / Default certificates: Renew AD & GENERIC

  2. Certificates / Service Credentials / Others / OPA: Renew. Unpack the zip file, then:

    • put the contents of password.txt in configmap vid-opa-config under private_key_passphrase.

    • put opa_friend.cer and opa_friend.key in secret vid-opa-certs (a command-line sketch of both updates follows this list).
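
The same two updates can be made from the command line; this is a minimal sketch that assumes the data key names (private_key_passphrase, opa_friend.cer, opa_friend.key) match the existing entries in vid-opa-config and vid-opa-certs, so verify them with oc describe before applying.

CODE
# Set the passphrase in the vid-opa-config configmap (key name assumed to be private_key_passphrase)
oc patch configmap vid-opa-config --type merge -p "{\"data\":{\"private_key_passphrase\":\"$(cat password.txt)\"}}"
# Replace the certificate and key in the vid-opa-certs secret (entries assumed to be keyed by file name)
oc create secret generic vid-opa-certs --from-file=opa_friend.cer --from-file=opa_friend.key --dry-run=client -o yaml | oc apply -f -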

5. Copy the following files from on-prem:

On the remote machine:

CODE
cp /vid-app/1.0.0/haproxy/conf/client-ca.pem /tmp
cp /vid-app/1.0.0/tomcat/certs/truststore_root.ks /tmp
chmod o+r /tmp/client-ca.pem /tmp/truststore_root.ks

Next, download them to your local machine.

CODE
scp <USER>@<IP>:/tmp/client-ca.pem .
scp <USER>@<IP>:/tmp/truststore_root.ks .

6. client-ca.pem

  • Upload client-ca.pem to the vid-haproxy-certs secret:

CODE
oc get secret vid-haproxy-certs -o yaml | sed "s,$(oc get secret vid-haproxy-certs -ojsonpath='{.data.client-ca\.pem}'),$(base64 -w 0 client-ca.pem),g" | oc apply -f -

  • Add client-ca.pem to truststore: Settings / Certificates / Truststores / Add Truststore.
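
Optionally, confirm that the secret now holds the expected certificate by decoding it and printing its subject and expiry; this check is not part of the original procedure.

CODE
# Optional: inspect the uploaded client CA certificate
oc get secret vid-haproxy-certs -ojsonpath='{.data.client-ca\.pem}' | base64 -d | openssl x509 -noout -subject -enddate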

7. Recreate keystore

Connect to the vid-maintenance pod:

CODE
oc exec -it <MAINTENANCE_POD_NAME> -- bash

Then execute the following:

CODE
# download configuration from zookeeper
/bin/bash /opt/veridiumid/migration/bin/migration.sh -d /tmp
cd /tmp
bash /scripts/generate-keystore.sh
mv keystore.p12 keystore_root.p12
# Upload the file to the `vid-tomcat-certs` secret.
kubectl get secret vid-tomcat-certs -o yaml | sed "s,$(kubectl get secret vid-tomcat-certs -ojsonpath='{.data.keystore_root\.p12}'),$(base64 -w 0 keystore_root.p12),g" | kubectl apply -f -
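
Optionally, list the entries of the regenerated keystore before uploading it; this assumes keytool is available in the maintenance pod and will prompt for the keystore password used by generate-keystore.sh.

CODE
# Optional: verify the regenerated keystore contents
keytool -list -keystore keystore_root.p12 -storetype PKCS12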

8. Upload truststore_root.ks to the vid-websecadmin-certs and vid-tomcat-certs secrets:

CODE
oc get secret vid-websecadmin-certs -o yaml | sed "s,$(oc get secret vid-websecadmin-certs -ojsonpath='{.data.truststore_root\.ks}'),$(base64 -w 0 truststore_root.ks),g" | oc apply -f -

oc get secret vid-tomcat-certs -o yaml | sed "s,$(oc get secret vid-tomcat-certs -ojsonpath='{.data.truststore_root\.ks}'),$(base64 -w 0 truststore_root.ks),g" | oc apply -f -

9. Truststore password

Retrieve the truststore password from the config.json file in the on-prem installation, specifically from the certStore.signingKeystore.pwd field. Subsequently, modify the truststorePass and keystorePass values in the server-TOMCAT.xml file within the vid-adservice-tomcat-server-xml-config and vid-websec-tomcat-server-xml-config configmaps.
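
A minimal sketch of this step, assuming the on-prem configuration has been exported (for example to /tmp/zk-export, as in the sketch under step 3) and that jq is available; the configmaps are edited in place.

CODE
# Read the truststore password from the exported on-prem configuration
jq -r '.certStore.signingKeystore.pwd' /tmp/zk-export/config.json
# Update truststorePass and keystorePass in server-TOMCAT.xml within both configmaps
oc edit configmap vid-adservice-tomcat-server-xml-config
oc edit configmap vid-websec-tomcat-server-xml-config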

10. Migrate Cassandra data

Create a backup from on-prem cluster:

CODE
nodetool clearsnapshot --all
nodetool repair veridium 
nodetool compact veridium 
bash /opt/veridiumid/backup/cassandra/cassandra_backup.sh -c=/opt/veridiumid/backup/cassandra/cassandra_backup.conf
cd /opt/veridiumid/backup/cassandra
tar -czf /tmp/backup_cassandra.tar.gz <MOST_RECENT_FOLDER>

Copy the backup to local machine:

CODE
scp <USER>@<IP>:/tmp/backup_cassandra.tar.gz .

Copy the backup to one of the Cassandra pods in the OpenShift cluster:

CODE
oc cp backup_cassandra.tar.gz <CASSANDRA_POD_NAME>:/tmp/

Connect to that Cassandra pod:

CODE
oc exec -it <CASSANDRA_POD_NAME> -- bash

Restore data from backup:

CODE
# Download script from https://veridium-repo.veridium-dev.com/repository/helm-releases/veridiumid-containers/3.5.2/scripts/restore-cassandra.sh
bash /scripts/restore-cassandra.sh 2>&1 | tee /tmp/output.txt
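
An optional sanity check once the restore completes; availability of cqlsh inside the Cassandra pod, and whether it needs credentials, are assumptions about the environment.

CODE
# Optional: confirm the cluster is healthy and the veridium keyspace is back
nodetool status
cqlsh -e "DESCRIBE KEYSPACES;"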

11. Migrate Elasticsearch data

Create a backup from on-prem cluster:

CODE
bash /opt/veridiumid/migration/bin/elk_ops.sh --backup --page-size=1000 --mode=DATA --request-timeout=5 --connect-timeout=1 --parallel-tasks=2 --date-to=`date -d '+1 day' +%Y-%m-%d` --dir=/tmp/exportElastic
cd /tmp/exportElastic && tar -czf /tmp/backup_es.tar.gz .

Copy the backup to local machine:

CODE
scp <USER>@<IP>:/tmp/backup_es.tar.gz .

Copy the backup to the maintenance pod:

CODE
oc cp backup_es.tar.gz <MAINTENANCE_POD_NAME>:/tmp/

Connect to the maintenance pod:

CODE
oc exec -it <MAINTENANCE_POD_NAME> -- bash

Restore data from backup:

CODE
bash /scripts/restore-elasticsearch.sh /tmp/backup_es.tar.gz
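
An optional check that the indices were restored; host, port and credentials below are placeholders and must be adjusted to the environment.

CODE
# Optional: list the restored indices
curl -sk -u "<ES_USER>:<ES_PASS>" "https://<ES_HOST>:9200/_cat/indices?v"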

12. Restart VeridiumID pods

CODE
oc rollout restart deployment vid-haproxy vid-adservice vid-opa vid-websec vid-websecadmin vid-maintenance vid-ssp
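
Optionally, wait for the restarts to complete before testing; a short loop over the same deployments:

CODE
# Optional: wait for each deployment to finish rolling out
for d in vid-haproxy vid-adservice vid-opa vid-websec vid-websecadmin vid-maintenance vid-ssp; do
  oc rollout status deployment "$d"
done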