
Cross DataCenter Replication using the same CA certificate and script (VeridiumID 3.6.0+)

This article provides a step-by-step procedure for configuring Cross DataCenter Replication (CDCR) using the same CA certificate, with the help of the CDCR script.

1) Script usage

BASH
/etc/veridiumid/scripts/veridiumid_cdcr.sh
Usage: ./veridiumid_cdcr.sh <args>
                            -g          -> Generate archive (must be used on a Webapp node on the primary datacenter)
                            -w          -> Configure Webapp node on secondary datacenter
                            -c          -> Configure Cassandra node
                            -e          -> Configure ElasticSearch node
                            -f          -> First part of Cassandra configuration
                            -s          -> Secondary datacenter
                            -z          -> Upload modified Zookeeper configuration (needs to be done only once, in the secondary datacenter).
                            -a PATH     -> The path to the CDCR archive created using the '-g' argument.

2) Generate the archive

In order to configure another datacenter to work with the same CA certificate, an archive containing all the mandatory configuration files must be generated.

The archive will contain the following:

  • HaProxy configuration file and domain certificate

  • FreeRadius configuration files and certificates

  • A file containing the IP addresses of Webapp and Persistence nodes from both datacenters and the datacenter names

  • Zookeeper configuration files from the primary datacenter

To generate the CDCR archive, run the following command as root user on one of the Webapp nodes from the primary datacenter:

BASH
./veridiumid_cdcr.sh -g

After the operation has finished, the following archive will be present in the current directory: DC1.tar.gz

Make sure to copy the generated archive to all Persistence nodes in both datacenters and to all Webapp nodes in the secondary datacenter.

In the next steps of the procedure, ${ARCH_PATH} denotes the full path to the CDCR archive.
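The copy step above can be sketched as a loop over the target nodes. The hostnames below are placeholders (adjust them to your deployment), and the leading echo makes this a dry run:

```shell
# Placeholder host lists - replace with the actual nodes in your deployment.
DC1_PERSISTENCE="dc1-persist-1 dc1-persist-2"
DC2_PERSISTENCE="dc2-persist-1 dc2-persist-2"
DC2_WEBAPP="dc2-web-1 dc2-web-2"
ARCH_PATH="/root/DC1.tar.gz"   # full path to the generated archive

for host in $DC1_PERSISTENCE $DC2_PERSISTENCE $DC2_WEBAPP; do
    # Remove the leading echo to actually perform the copy:
    echo scp "$ARCH_PATH" "root@${host}:${ARCH_PATH}"
done
```

Copying the archive to the same absolute path on every node keeps ${ARCH_PATH} identical in all the commands that follow.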

3) Cassandra CDCR configuration

The Cassandra configuration will be split into two parts:

  • the first part:

    • On the primary datacenter the script will modify the existing keyspaces

    • On the secondary datacenter the script will modify the configuration and generate certificates from the same CA as the primary datacenter

  • the second part:

    • On the primary datacenter the script will update Cassandra’s configuration and the keyspaces to account for the new datacenter

    • On the secondary datacenter the script will trigger a rebuild of the database using the primary datacenter as the source

3.1) First part

Connect to one persistence node from the primary datacenter and run the following command as root:

BASH
./veridiumid_cdcr.sh -c -f -a ${ARCH_PATH}

Connect to all persistence nodes from the secondary datacenter and run the following command as root, one node at a time:

BASH
./veridiumid_cdcr.sh -c -s -f -a ${ARCH_PATH}

3.2) Second part

Connect to one persistence node from the primary datacenter and run the following command as root:

BASH
./veridiumid_cdcr.sh -c -a ${ARCH_PATH}

Connect to all persistence nodes from the secondary datacenter and run the following command as root, one node at a time:

BASH
./veridiumid_cdcr.sh -c -s -a ${ARCH_PATH}

3.3) Check the status

To check the status of the Cassandra datacenters, run the following command on any persistence node:

BASH
/opt/veridiumid/cassandra/bin/nodetool status

The result should show both datacenters, as in the example below:

BASH
Datacenter: dc1
===============
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
--  Address     Load       Tokens  Owns (effective)  Host ID                               Rack 
UN  10.2.3.204  47.6 MiB   8       100.0%            44a99006-32e3-4479-876a-0a25488b2157  rack1

Datacenter: dc2
===============
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
--  Address     Load       Tokens  Owns (effective)  Host ID                               Rack 
UN  10.2.3.85   46.36 MiB  8       100.0%            eea3d57f-81f2-47b0-bd2b-337304d34e00  rack1
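As a quick programmatic check, the status output can be filtered with awk to count the Up/Normal ("UN") nodes per datacenter. The sample output is inlined below so the filter can be tried standalone; on a live node, pipe `nodetool status` through the same filter instead:

```shell
# Sample `nodetool status` output, inlined for illustration. In practice run:
#   /opt/veridiumid/cassandra/bin/nodetool status | awk '...'
sample='Datacenter: dc1
UN  10.2.3.204  47.6 MiB   8       100.0%  44a99006-32e3-4479-876a-0a25488b2157  rack1
Datacenter: dc2
UN  10.2.3.85   46.36 MiB  8       100.0%  eea3d57f-81f2-47b0-bd2b-337304d34e00  rack1'

# Remember the current datacenter, count "UN" rows under it, print totals.
echo "$sample" | awk '/^Datacenter:/ {dc=$2} /^UN/ {up[dc]++} END {for (d in up) print d, up[d]}' | sort
# → dc1 1
#   dc2 1
```

Each datacenter should report as many "UN" nodes as it has persistence nodes; any "DN" (Down/Normal) rows warrant investigation before continuing.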

4) ElasticSearch configuration

During this step, the script will join the ElasticSearch nodes from both datacenters into a single cluster.

Connect to all persistence nodes in the secondary datacenter and run the following command as root, one by one:

BASH
./veridiumid_cdcr.sh -e -s -a ${ARCH_PATH}

Connect to all persistence nodes in the primary datacenter and run the following command as root, one by one:

BASH
./veridiumid_cdcr.sh -e -a ${ARCH_PATH}

To check the status of the cluster after finishing the above steps, run the following command on any persistence node:

BASH
/opt/veridiumid/elasticsearch/bin/elasticsearch_ops.sh -x=GET -p=/_cat/nodes?v

The result should be similar to the following:

BASH
ip         heap.percent ram.percent cpu load_1m load_5m load_15m node.role   master name
10.2.3.85            44          92   3    0.02    0.08     0.10 cdfhilmrstw -      dc2-node-1
10.2.3.204           49          91   2    0.16    0.10     0.09 cdfhilmrstw *      dc1-node-1
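A healthy result lists the nodes from both datacenters with exactly one elected master (the "*" in the master column). That condition can be checked with a small awk filter; the sample output is inlined here for illustration, and in practice the real elasticsearch_ops.sh output would be piped through the same program:

```shell
# Sample `_cat/nodes?v` output, inlined for illustration.
sample='ip         heap.percent ram.percent cpu load_1m load_5m load_15m node.role   master name
10.2.3.85            44          92   3    0.02    0.08     0.10 cdfhilmrstw -      dc2-node-1
10.2.3.204           49          91   2    0.16    0.10     0.09 cdfhilmrstw *      dc1-node-1'

# Skip the header, count nodes, and count rows whose master column is "*".
echo "$sample" | awk 'NR>1 {nodes++; if ($(NF-1) == "*") masters++} END {print nodes " nodes, " masters " master"}'
# → 2 nodes, 1 master
```

If the node count is lower than expected, the missing nodes have not joined the merged cluster yet; more than one master would indicate a split cluster.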

5) Zookeeper configuration sync

To sync the Zookeeper configuration between the two datacenters, run the following command as root on a single Webapp node in the secondary datacenter:

BASH
./veridiumid_cdcr.sh -z -a ${ARCH_PATH}

6) Webapp node configuration

To sync the configuration of the Webapp nodes in the secondary datacenter, connect to all Webapp nodes and run the following command, one by one, as root:

BASH
./veridiumid_cdcr.sh -w -a ${ARCH_PATH}

To check the status of all services deployed on a node please use the following command:

BASH
/etc/veridiumid/scripts/check_services.sh

Or use the command’s alias:

BASH
check_services

7) Post configuration steps

7.1) Disable data-retention service on the secondary datacenter

It is recommended that the data retention process run only on the primary Data Center (DC1).

Execute the following commands on all the persistence nodes in the secondary Data Center (DC2) as root:

BASH
systemctl disable ver_data_retention; service ver_data_retention stop
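When the secondary datacenter has several persistence nodes, the same commands can be wrapped in a loop over SSH. The hostnames below are placeholders, and the leading echo makes this a dry run:

```shell
# Placeholder DC2 persistence hosts - replace with your actual nodes.
DC2_PERSISTENCE="dc2-persist-1 dc2-persist-2 dc2-persist-3"

for host in $DC2_PERSISTENCE; do
    # Remove the leading echo to actually run the commands over SSH:
    echo ssh "root@${host}" 'systemctl disable ver_data_retention; service ver_data_retention stop'
done
```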

7.2) Regenerate the existing certificates

It is recommended to reissue all the certificates used by VeridiumID once the CDCR is completed.

  • Navigate to VeridiumID Web Admin Interface → Settings → Certificates → Service Credentials

    • Under the “System Services” Tab, renew the certificates for Active Directory Integration, DMZ, Shibboleth, Self Service Portal and Radius Server

    • Under the “Others” Tab, renew the OPA certificate. This certificate is Data Center specific and a .zip archive will be downloaded on renewal. The OPA certificate also needs to be renewed on each of the respective persistence nodes. This can be achieved by using the command below, after the zip archive has been uploaded to the node. Replace </path/to/archive/> in the example below with the actual path of the archive.

BASH
adminFQDN=$(grep "url:" /opt/veridiumid/opa/conf/opa.yaml | awk -F'/' '{print $3}')
/opt/veridiumid/opa/bin/config-opa.sh -c </path/to/archive/> -a $adminFQDN

  • Navigate to VeridiumID Web Admin Interface → Settings → Certificates → Validity Dashboard

    • Remove the previously used certificates (ad, dmz, shibboleth, ssp, radius): go to Devices and search for the FRIEND device.
