Add a second DC to create a CDCR

1) Restore config

On OLD webapp node 1:

CODE
mkdir -p /tmp/veridium/haproxy
cp /etc/veridiumid/haproxy/client-ca.pem /tmp/veridium/haproxy/
cp /etc/veridiumid/haproxy/server.pem /tmp/veridium/haproxy/
cp /etc/veridiumid/haproxy/haproxy.cfg /tmp/veridium/haproxy/
mkdir -p /tmp/veridium/shib
cp /opt/veridiumid/shibboleth-idp/credentials/idp-signing.crt /tmp/veridium/shib/
cp /opt/veridiumid/shibboleth-idp/credentials/idp-signing.key /tmp/veridium/shib/
cp /opt/veridiumid/shibboleth-idp/credentials/sealer.jks /tmp/veridium/shib/
cp /opt/veridiumid/shibboleth-idp/credentials/idp-encryption.crt /tmp/veridium/shib/
cp /opt/veridiumid/shibboleth-idp/credentials/idp-encryption.key /tmp/veridium/shib/
tar -czvf /tmp/configs.tar.gz /tmp/veridium
ll /tmp/con*
-rw-r--r--. 1 root root 10685 Apr 18 09:33 /tmp/configs.tar.gz
--> copy file to new webapp nodes

Copy configs.tar.gz to all new WebApp nodes into the /tmp directory
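
For example, the archive can be pushed with scp (a sketch; NEW_WEBAPP1_IP and NEW_WEBAPP2_IP are placeholders for the new WebApp nodes):

CODE
for host in NEW_WEBAPP1_IP NEW_WEBAPP2_IP; do
  scp /tmp/configs.tar.gz root@${host}:/tmp/
done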

On all new WebApp nodes:

CODE
sudo su -
ver_stop
cd /tmp
tar -xvf configs.tar.gz
\cp tmp/veridium/haproxy/client-ca.pem /opt/veridiumid/haproxy/conf/
\cp tmp/veridium/haproxy/server.pem /opt/veridiumid/haproxy/conf/
\cp tmp/veridium/haproxy/haproxy.cfg /opt/veridiumid/haproxy/conf/
\cp tmp/veridium/shib/sealer.jks /opt/veridiumid/shibboleth-idp/credentials/
\cp tmp/veridium/shib/idp-signing.crt /opt/veridiumid/shibboleth-idp/credentials/
\cp tmp/veridium/shib/idp-signing.key /opt/veridiumid/shibboleth-idp/credentials/
\cp tmp/veridium/shib/idp-encryption.crt /opt/veridiumid/shibboleth-idp/credentials/
\cp tmp/veridium/shib/idp-encryption.key /opt/veridiumid/shibboleth-idp/credentials/
## on each new WebApp node, run the following (replace IP_OLD with the IP of the old node the file was copied from and IP_NEW with the IP of the corresponding node in the new DC):
sed -i 's|IP_OLD|IP_NEW|g' /opt/veridiumid/haproxy/conf/haproxy.cfg
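
For example, with hypothetical addresses 10.0.1.11 (old DC WebApp node) and 10.0.2.11 (new DC WebApp node):

CODE
sed -i 's|10.0.1.11|10.0.2.11|g' /opt/veridiumid/haproxy/conf/haproxy.cfg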

On old persistence node 1:

CODE
sudo su -
mkdir -p /tmp/veridium/cassandra
cp /opt/veridiumid/cassandra/conf/KeyStore.jks /tmp/veridium/cassandra
cp /opt/veridiumid/cassandra/conf/TrustStore.jks /tmp/veridium/cassandra
cp /opt/veridiumid/cassandra/conf/cqlsh_cassandra_cert.pem /tmp/veridium/cassandra
cp /opt/veridiumid/cassandra/conf/cqlsh_cassandra_key.pem /tmp/veridium/cassandra
mkdir -p /tmp/veridium/elastic
cp /opt/veridiumid/elasticsearch/config/elasticsearch.keystore /tmp/veridium/elastic/
cp /opt/veridiumid/elasticsearch/config/certs/KeyStore.p12 /tmp/veridium/elastic/
cp /opt/veridiumid/elasticsearch/config/certs/TrustStore.p12 /tmp/veridium/elastic/
tar -czvf /tmp/configs.tar.gz /tmp/veridium
ll /tmp/con*
-rw-r--r--. 1 root root 10685 Apr 18 09:33 /tmp/configs.tar.gz
--> copy file to all new persistence nodes

 

On all new persistence nodes, restore the config:

CODE
ver_stop
cd /tmp
tar -xvf configs.tar.gz
\cp tmp/veridium/cassandra/KeyStore.jks /opt/veridiumid/cassandra/conf/
\cp tmp/veridium/cassandra/TrustStore.jks /opt/veridiumid/cassandra/conf/
\cp tmp/veridium/cassandra/cqlsh_cassandra_cert.pem /opt/veridiumid/cassandra/conf/
\cp tmp/veridium/cassandra/cqlsh_cassandra_key.pem /opt/veridiumid/cassandra/conf/
\cp tmp/veridium/elastic/elasticsearch.keystore /opt/veridiumid/elasticsearch/config/
\cp tmp/veridium/elastic/KeyStore.p12 /opt/veridiumid/elasticsearch/config/certs/
\cp tmp/veridium/elastic/TrustStore.p12 /opt/veridiumid/elasticsearch/config/certs/

 

2) Restore passwords

On old persistence node 1:

CODE
cat /opt/veridiumid/cassandra/conf/cassandra.yaml | grep keystore_password | tail -n 1 | tr -d '[:space:]' | cut -d":" -f2
## if truststore_password differs from keystore_password, capture it the same way (grep truststore_password instead)
--> copy password

On all new persistence nodes:

CODE
## in DC2 (new Data Center)
sed -i "s|keystore_password.*|keystore_password: DC1_KEY_PASS|g" /opt/veridiumid/
cassandra/conf/cassandra.yaml
sed -i "s|truststore_password.*|truststore_password: DC1_TRUST_PASS|g" /opt/veridiumid/cassandra/conf/cassandra.yaml

 

3) Zookeeper migration

On old persistence node 1

CODE
## zookeeper - create backup 
/opt/veridiumid/migration/bin/migration.sh -d DC1backup

Copy directory "DC1backup" to new persistence node 1 - e.g. to /tmp/DC1Backup

On new persistence node 1, replace the old DC IP addresses in the Zookeeper backup with the new DC IP addresses (IP1_DC1 becomes IP1_DC2, and so on). Run the following inside the backup directory (e.g. /tmp/DC1backup):

CODE

NEW_IP="IP1_DC2";IP="IP1_DC1"; LIST_OF_FILE=( `grep -FlR "${IP}" *` ); for file in ${LIST_OF_FILE[@]}; do sed -i "s|${IP}|${NEW_IP}|g" ${file}; done
NEW_IP="IP2_DC2";IP="IP2_DC1"; LIST_OF_FILE=( `grep -FlR "${IP}" *` ); for file in ${LIST_OF_FILE[@]}; do sed -i "s|${IP}|${NEW_IP}|g" ${file}; done
NEW_IP="IP3_DC2";IP="IP3_DC1"; LIST_OF_FILE=( `grep -FlR "${IP}" *` ); for file in ${LIST_OF_FILE[@]}; do sed -i "s|${IP}|${NEW_IP}|g" ${file}; done
NEW_IP="IP4_DC2";IP="IP4_DC1"; LIST_OF_FILE=( `grep -FlR "${IP}" *` ); for file in ${LIST_OF_FILE[@]}; do sed -i "s|${IP}|${NEW_IP}|g" ${file}; done
NEW_IP="IP5_DC2";IP="IP5_DC1"; LIST_OF_FILE=( `grep -FlR "${IP}" *` ); for file in ${LIST_OF_FILE[@]}; do sed -i "s|${IP}|${NEW_IP}|g" ${file}; done

On new persistence node 1, update DC name in DC1backup:

CODE
vi /tmp/DC1backup/config.json
--> replace the value of the following three attributes from OLDDC to NEWDC.
- friendCertificateSuffixByDataCenter
- localDatacenter
- factors -> name
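
If the old DC name appears in config.json only in these attributes, the replacement can also be scripted (a sketch; OLDDC and NEWDC are placeholders, review the file afterwards):

CODE
sed -i 's|OLDDC|NEWDC|g' /tmp/DC1backup/config.json
## verify the three attributes listed above after the substitution
grep -E "friendCertificateSuffixByDataCenter|localDatacenter" /tmp/DC1backup/config.json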

Start zookeeper on all new persistence nodes

CODE
systemctl start ver_zookeeper

On new persistence node 1, import updated DC1backup to Zookeeper

CODE
cd /tmp
/opt/veridiumid/migration/bin/migration.sh -u DC1backup

 

4) Cassandra migration

On old persistence node 1, update Network Topology Strategy (if not already set)

CODE
# once up, connect to cassandra shell
/opt/veridiumid/cassandra/bin/cqlsh --ssl --cqlshrc=/opt/veridiumid/cassandra/conf/veridiumid_cqlshrc
# ACPT
## run inside cassandra shell
## old DC name in ACPT: 'dc1'
ALTER KEYSPACE system_auth WITH REPLICATION = {'class' : 'NetworkTopologyStrategy','dc1' : 3};
ALTER KEYSPACE system_traces WITH REPLICATION = {'class' : 'NetworkTopologyStrategy','dc1' : 3};
ALTER KEYSPACE system_distributed WITH REPLICATION = {'class' :'NetworkTopologyStrategy', 'dc1' : 3};
ALTER KEYSPACE veridium WITH REPLICATION = {'class' : 'NetworkTopologyStrategy','dc1' : 3};
quit

On old persistence node 1, run full repair to replicate data to all nodes

CODE
/opt/veridiumid/cassandra/bin/nodetool repair --full
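
The full repair can take a while; progress can be followed with standard nodetool commands, for example:

CODE
/opt/veridiumid/cassandra/bin/nodetool netstats
/opt/veridiumid/cassandra/bin/nodetool compactionstats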

On all new persistence nodes, stop cassandra

CODE
systemctl stop ver_cassandra

On all new persistence nodes, verify cassandra properties

CODE
less /opt/veridiumid/cassandra/conf/cassandra-rackdc.properties
dc=DC2
rack=rack1

On all new persistence nodes, update cassandra settings

CODE
vi /opt/veridiumid/cassandra/conf/cassandra-env.sh
# add this line at the very end:
JVM_OPTS="$JVM_OPTS -Dcassandra.ignore_dc=true"
vi /opt/veridiumid/cassandra/conf/cassandra.yaml
# cassandra recommendation: 4 seeds for 6 nodes cluster
- seeds: "IP1_DC1:7001,IP2_DC1:7001,IP1_DC2:7001,IP2_DC2:7001"
# add this line to the very end of the YAML file:
auto_bootstrap: true
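
If you prefer to script these edits instead of using vi, a possible sketch (review both files before starting the service):

CODE
echo 'JVM_OPTS="$JVM_OPTS -Dcassandra.ignore_dc=true"' >> /opt/veridiumid/cassandra/conf/cassandra-env.sh
sed -i 's|- seeds:.*|- seeds: "IP1_DC1:7001,IP2_DC1:7001,IP1_DC2:7001,IP2_DC2:7001"|' /opt/veridiumid/cassandra/conf/cassandra.yaml
echo 'auto_bootstrap: true' >> /opt/veridiumid/cassandra/conf/cassandra.yaml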

On all new persistence nodes, run the following - one node after the other!

CODE
systemctl stop ver_cassandra
rm -fr /opt/veridiumid/cassandra/data/*
rm -fr /opt/veridiumid/cassandra/commitlog/*
# Now restart the service where dc2 will join the cluster
systemctl start ver_cassandra; tail -f /var/log/veridiumid/cassandra/system.log
#once done, run on first node:
nodetool status

On old persistence node 1:

CODE
#on old DC - extend replication to new nodes
/opt/veridiumid/cassandra/bin/cqlsh --ssl --cqlshrc=/opt/veridiumid/cassandra/conf/veridiumid_cqlshrc
# ACPT:
ALTER KEYSPACE system_auth WITH REPLICATION = {'class' : 'NetworkTopologyStrategy','dc1' : 3, 'dc2' : 3};
ALTER KEYSPACE system_traces WITH REPLICATION = {'class' : 'NetworkTopologyStrategy','dc1' : 3, 'dc2' : 3};
ALTER KEYSPACE system_distributed WITH REPLICATION = {'class' :'NetworkTopologyStrategy', 'dc1' : 3, 'dc2' : 3};
ALTER KEYSPACE veridium WITH REPLICATION = {'class' : 'NetworkTopologyStrategy','dc1' : 3, 'dc2' : 3};
quit
#once done, run on first node:
nodetool status

On all new persistence nodes - one after the other:

CODE
/opt/veridiumid/cassandra/bin/nodetool rebuild -dc dc1

5) Elasticsearch migration

On all new persistence nodes:

CODE
# extend cluster
service ver_elasticsearch stop
vi /opt/veridiumid/elasticsearch/config/elasticsearch.yml
--> Add the IPs of the old DC nodes in front of the existing (new DC) values!
discovery.seed_hosts: ["IP1_DC1","IP2_DC1","IP3_DC1","IP1_DC2","IP2_DC2","IP3_DC2"]
cluster.initial_master_nodes: [ "IP1_DC1","IP2_DC1","IP3_DC1" ]
cluster.routing.allocation.awareness.force.zone.values: dc1, dc2

On all old persistence nodes:

CODE
vi /opt/veridiumid/elasticsearch/config/elasticsearch.yml
discovery.seed_hosts: ["IP1_DC1","IP2_DC1","IP3_DC1","IP1_DC2","IP2_DC2","IP3_DC2"]
cluster.routing.allocation.awareness.force.zone.values: dc1, dc2

On all new persistence nodes:

CODE
rm -rf /opt/veridiumid/elasticsearch/data/*
# restart elasticsearch on all new nodes - one after the other!!!
systemctl restart ver_elasticsearch
check_services
eops -l

On new persistence node 1:

CODE
eops -x=PUT -p=/veridium.*/_settings -d='{"index":{"number_of_replicas":3}}'
check_services
eops -l

On all old persistence nodes - one after the other:

CODE
systemctl restart ver_elasticsearch
check_services

6) Change secret

On old webapp node 1:

CODE
cat /etc/veridiumid/haproxy/haproxy.cfg | grep proxy-secret | rev | cut -d" " -f1 | rev | uniq
→ copy old password

On all new webapp nodes:

CODE
cat /etc/veridiumid/haproxy/haproxy.cfg | grep proxy-secret | rev | cut -d" " -f1 | rev | uniq
→ copy new password
# replace password on all new webapp nodes
sed -i "s|passwordDC2|passwordDC1|g" /opt/veridiumid/tomcat/conf/context.xml
## check also proxyName and proxyPort to be the same in the new datacenter as in the old datacenter, for both SHIBBOLETH and Catalina
/opt/veridiumid/tomcat/conf/server.xml
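## quick check of the current values (illustrative grep):
grep -E "proxyName|proxyPort" /opt/veridiumid/tomcat/conf/server.xml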

7) Start services on the new WebApp nodes

CODE
systemctl start ver_websecadmin;tail -f /var/log/veridiumid/websecadmin/websecadmin.log
systemctl start ver_tomcat;tail -f /var/log/veridiumid/tomcat/catalina.out
ver_start
check_services
## also check on the persistence nodes that all services are up
ver_start
check_services

8) Renew certificates in the new WebsecAdmin

CODE
-> Settings - Certificates - Service Credentials 
Press renew for System Services and also for OPA
## for OPA also run
#Click "Renew" → Certificate will be downloaded
#Copy certificate to both new webapp nodes to /tmp
#Apply new certificate on both new webapp nodes (one after the other)
/opt/veridiumid/opa/bin/config-opa.sh -c /tmp/opa-cert.zip
systemctl restart ver_opa

9) Other configs

Change crontab on all persistence nodes

CODE
crontab -e
# Change maintenance task on all new persistence nodes:
## Hour should be 23
## Day should be
### persistence node 1: 5
### persistence node 2: 4
### persistence node 3: 3
0 1 * * 5 bash /opt/veridiumid/cassandra/conf/cassandra_maintenance.sh -c /opt/veridiumid/cassandra/conf/maintenance.conf
to
0 23 * * 5 bash /opt/veridiumid/cassandra/conf/cassandra_maintenance.sh -c /opt/veridiumid/cassandra/conf/maintenance.conf
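
For example, on new persistence node 1 the entry can be rewritten non-interactively (a sketch; adjust the day-of-week value per node as listed above):

CODE
crontab -l | sed 's|^0 1 \* \* [0-9] bash /opt/veridiumid/cassandra/conf/cassandra_maintenance.sh|0 23 * * 5 bash /opt/veridiumid/cassandra/conf/cassandra_maintenance.sh|' | crontab -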

It might be necessary to use a different server.pem, because the node names have changed. If this is the case, generate a new PKCS#12 certificate from your local infrastructure and set it on the new machines.

CODE
Usage: ./convert_haproxy_cert.sh PATH_TO_PKCS_FILE
Example: 
bash /etc/veridiumid/scripts/convert_haproxy_cert.sh /home/veridiumid/veridium.p12
# the following error might appear
# error: 40CC49630D7F0000:error:0308010C:digital envelope
# routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global
# default library context, Algorithm (RC2-40-CBC : 0), Properties ()
cd /tmp
openssl pkcs12 -in keystore_vid.p12 -legacy -nodes -nocerts -out privateKey_enc.pem -passin pass:[KEYSTORE_PASSWORD]
openssl rsa -in privateKey_enc.pem -out privateKeyFull.pem -passin pass:[KEYSTORE_PASSWORD]
openssl pkcs12 -in keystore_vid.p12 -legacy -nokeys -out publicCertFull.pem -passin pass:[KEYSTORE_PASSWORD]
cat privateKey_enc.pem > server.pem
cat publicCertFull.pem >> server.pem
cp server.pem /opt/veridiumid/haproxy/conf/server.pem
rm /tmp/server.pem
rm /tmp/keystore*

 

 
