Cross DataCenter Replication using the same CA certificate - CDCR
This document will describe the procedure to enable CDCR between VeridiumID deployments using the same Certificate Authority.
Note: In this procedure we will refer to the first datacenter (the one containing all the data) as DC1 and the second datacenter (the one that will be synced) as DC2
PLEASE DISABLE ZOOKEEPER encryption on each DC before starting the synchronization.
To disable Zookeeper encryption, access the Veridium Manager Console and go to Settings → Advanced. At the top right of the page you will find a button labeled “Disable storage protection”.
If the Zookeeper encryption is already disabled, the button is labeled “Enable storage protection” and the message “Storage protection is disabled” will be displayed.
Before starting: check that NTP is in sync on all servers. Time desynchronization will prevent the CDCR cluster from being created.
timedatectl status
date
### check the ntp configuration and see which servers are configured
vim /etc/ntp.conf
service ntpd stop; ntpd -gq; service ntpd start; systemctl enable ntpd
## how to test connectivity to an NTP server (see the example below)
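A minimal check, assuming the standard ntp client tools are installed (the server address below is a placeholder for one of the servers configured in ntp.conf):
# query an NTP server without changing the local clock
ntpdate -q <NTP_SERVER>
# list the configured peers together with their reachability and offset
ntpq -p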
1. Copy HaProxy truststore from DC1 to DC2
Access a Webapp node in DC1 and copy the following certificate: /etc/veridiumid/haproxy/client-ca.pem.
Upload the certificate to all Webapp nodes in DC2 and replace the existing one.
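For example, a copy over SSH (a sketch; the user and IP address are placeholders that must match your environment):
# run on a Webapp node in DC1, once for every Webapp node in DC2
scp /etc/veridiumid/haproxy/client-ca.pem <user>@<WEBAPP_DC2_IP>:/etc/veridiumid/haproxy/client-ca.pem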
2. Copy Cassandra certificates from DC1 to DC2
Access a persistence node in DC1 and copy the following certificate files to the same location in DC2:
Keystore: /opt/veridiumid/cassandra/conf/KeyStore.jks
Keystore password can be obtained by using the following command:
cat /opt/veridiumid/cassandra/conf/cassandra.yaml | grep keystore_password | tail -n 1| tr -d '[:space:]'| cut -d":" -f2
Truststore: /opt/veridiumid/cassandra/conf/TrustStore.jks
Truststore password can be obtained by using the following command:
cat /opt/veridiumid/cassandra/conf/cassandra.yaml | grep truststore_password | tail -n 1| tr -d '[:space:]'| cut -d":" -f2
CQLSH public certificate: /opt/veridiumid/cassandra/conf/cqlsh_cassandra_cert.pem
CQLSH private key: /opt/veridiumid/cassandra/conf/cqlsh_cassandra_key.pem
Upload certificates to the same path in all DC2 persistence nodes and modify the passwords in cassandra.yaml by using the following commands:
sed -i "s|keystore_password.*|keystore_password: DC1_KEY_PASS|g" /opt/veridiumid/cassandra/conf/cassandra.yaml
sed -i "s|truststore_password.*|truststore_password: DC1_TRUST_PASS|g" /opt/veridiumid/cassandra/conf/cassandra.yaml
Where:
DC1_KEY_PASS: is the password of the keystore obtained from DC1
DC1_TRUST_PASS: is the password of the truststore obtained from DC1
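For the upload step above, a possible approach using scp (a sketch; the user and IP address are placeholders):
# run on a persistence node in DC1, once for every persistence node in DC2
scp /opt/veridiumid/cassandra/conf/KeyStore.jks \
    /opt/veridiumid/cassandra/conf/TrustStore.jks \
    /opt/veridiumid/cassandra/conf/cqlsh_cassandra_cert.pem \
    /opt/veridiumid/cassandra/conf/cqlsh_cassandra_key.pem \
    <user>@<PERSISTENCE_DC2_IP>:/opt/veridiumid/cassandra/conf/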
After changing the certificates and the passwords restart Cassandra service on all DC2 persistence nodes using the following command as root:
service ver_cassandra restart
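After the restart, Cassandra should come back up without errors; a quick way to verify (log path as used elsewhere in this guide):
tail -n 50 /var/log/veridiumid/cassandra/system.log
/opt/veridiumid/cassandra/bin/nodetool status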
3. Configure Cassandra Network Topology Strategy in DC1
Connect to a persistence server in DC1 and use the following commands:
/opt/veridiumid/cassandra/bin/cqlsh --ssl --cqlshrc=/opt/veridiumid/cassandra/conf/veridiumid_cqlshrc
ALTER KEYSPACE system_auth WITH REPLICATION = {'class' : 'NetworkTopologyStrategy', 'dc1' : 3};
ALTER KEYSPACE system_traces WITH REPLICATION = {'class' : 'NetworkTopologyStrategy', 'dc1' : 3};
ALTER KEYSPACE system_distributed WITH REPLICATION = {'class' : 'NetworkTopologyStrategy', 'dc1' : 3};
ALTER KEYSPACE veridium WITH REPLICATION = {'class' : 'NetworkTopologyStrategy', 'dc1' : 3};
quit
Where:
‘dc1’ should be replaced with the name of the first Data Center in lowercase
“3” should be replaced with the number of persistence nodes in each Data Center
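The exact datacenter name to use can be confirmed beforehand; nodetool status prints it in its “Datacenter:” header:
/opt/veridiumid/cassandra/bin/nodetool status | grep Datacenter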
These settings will now be replicated within the cluster. Please use the following command to force the synchronisation:
/opt/veridiumid/cassandra/bin/nodetool repair --full
4. Configure Cassandra in DC2
Stop all VeridiumID services on all nodes in DC2 by running the following command as root:
ver_stop
On all DC2 persistence nodes make sure that /opt/veridiumid/cassandra/conf/cassandra-rackdc.properties has a different value for the datacenter name than the one in DC1. For example:
dc=dc2
rack=rack1
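One way to set this non-interactively (a sketch; adjust the datacenter name to your naming convention):
sed -i "s|^dc=.*|dc=dc2|g" /opt/veridiumid/cassandra/conf/cassandra-rackdc.properties
grep -E "^dc=|^rack=" /opt/veridiumid/cassandra/conf/cassandra-rackdc.properties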
On all DC2 persistence nodes add the following line at the end of /opt/veridiumid/cassandra/conf/cassandra-env.sh:
JVM_OPTS="$JVM_OPTS -Dcassandra.ignore_dc=true"
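For example, to append the line non-interactively:
echo 'JVM_OPTS="$JVM_OPTS -Dcassandra.ignore_dc=true"' >> /opt/veridiumid/cassandra/conf/cassandra-env.sh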
On all persistence nodes (in both datacenters) edit /opt/veridiumid/cassandra/conf/cassandra.yaml and change/add these entries. The seed nodes will be the first two IP addresses of the current datacenter and the first two IP addresses of the other datacenter. In our case the following will be configured:
On DC1: IP1_DC1:PORT, IP2_DC1:PORT, IP1_DC2:PORT, IP2_DC2:PORT
On DC2: IP1_DC2:PORT, IP2_DC2:PORT, IP1_DC1:PORT, IP2_DC1:PORT
# Add the first two persistence node IPs from each datacenter to the seeds
seed_provider:
    # Addresses of hosts that are deemed contact points.
    # Cassandra nodes use this list of hosts to find each other and learn
    # the topology of the ring. You must change this if you are running
    # multiple nodes!
    - class_name: org.apache.cassandra.locator.SimpleSeedProvider
      parameters:
          # seeds is actually a comma-delimited list of addresses.
          # Ex: "<ip1>,<ip2>,<ip3>"
          - seeds: "10.20.0.5:7001,10.20.0.6:7001,10.40.0.5:7001,10.40.0.6:7001"
# If this line doesn't exist, add it at the end of the file
auto_bootstrap: true
On all DC2 persistence nodes stop the Cassandra service, remove the current data, and start the service again using the following commands:
systemctl stop ver_cassandra
rm -fr /opt/veridiumid/cassandra/data/*
rm -fr /opt/veridiumid/cassandra/commitlog/*
# Now start the service; the dc2 nodes will join the cluster
systemctl start ver_cassandra;tail -f /var/log/veridiumid/cassandra/system.log
5. Configure Cassandra in DC1
Run the following commands in DC1 on a single persistence node:
/opt/veridiumid/cassandra/bin/cqlsh --ssl --cqlshrc=/opt/veridiumid/cassandra/conf/veridiumid_cqlshrc
ALTER KEYSPACE system_auth WITH REPLICATION = {'class' : 'NetworkTopologyStrategy', 'dc1' : 3, 'dc2' : 3};
ALTER KEYSPACE system_traces WITH REPLICATION = {'class' : 'NetworkTopologyStrategy', 'dc1' : 3, 'dc2' : 3};
ALTER KEYSPACE system_distributed WITH REPLICATION = {'class' : 'NetworkTopologyStrategy', 'dc1' : 3, 'dc2' : 3};
ALTER KEYSPACE veridium WITH REPLICATION = {'class' : 'NetworkTopologyStrategy', 'dc1' : 3, 'dc2' : 3};
quit
6. Rebuild DC2 datacenter
On each persistence node in DC2 run the following command to rebuild the data. Start with the first node and then wait for the process to be completed before moving to the next.
/opt/veridiumid/cassandra/bin/nodetool rebuild -dc dc1
# You can check the status of the database using this command
/opt/veridiumid/cassandra/bin/nodetool status
7. Configure Elasticsearch for CDCR
7.1) Stop the cluster in DC2
Connect to all persistence nodes in DC2 and stop the ElasticSearch service by running the following command as root:
service ver_elasticsearch stop
7.2) Modify the ElasticSearch YAML file
Connect to all persistence nodes in DC2 and add the following configurations to the /opt/veridiumid/elasticsearch/config/elasticsearch.yml file:
discovery.seed_hosts: [ NODES_FROM_DC1, NODES_FROM_DC2 ]
cluster.initial_master_nodes: [ NODES_FROM_DC1 ]
cluster.routing.allocation.awareness.force.zone.values: dc1, dc2
Where:
- NODES_FROM_DC1 is the list of IP addresses of nodes in DC1, for example: "10.0.1.1", "10.0.1.2", "10.0.1.3"
- NODES_FROM_DC2 is the list of IP addresses of nodes in DC2, for example: "10.0.2.1", "10.0.2.2", "10.0.2.3"
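With the example addresses above, the resulting DC2 entries would look like this (a sketch; substitute your actual node IPs and datacenter names):
discovery.seed_hosts: [ "10.0.1.1", "10.0.1.2", "10.0.1.3", "10.0.2.1", "10.0.2.2", "10.0.2.3" ]
cluster.initial_master_nodes: [ "10.0.1.1", "10.0.1.2", "10.0.1.3" ]
cluster.routing.allocation.awareness.force.zone.values: dc1, dc2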
Connect to all persistence nodes in DC1 and modify the following in the /opt/veridiumid/elasticsearch/config/elasticsearch.yml file:
discovery.seed_hosts: [ NODES_FROM_DC1, NODES_FROM_DC2 ]
cluster.routing.allocation.awareness.force.zone.values: dc1, dc2
Where:
- NODES_FROM_DC1 is the list of IP addresses of nodes in DC1, for example: "10.0.1.1", "10.0.1.2", "10.0.1.3"
- NODES_FROM_DC2 is the list of IP addresses of nodes in DC2, for example: "10.0.2.1", "10.0.2.2", "10.0.2.3"
7.3) Copy certificates and passwords from the first datacenter
From one persistence node in DC1 the following files must be copied to all persistence nodes in DC2:
/opt/veridiumid/elasticsearch/config/elasticsearch.keystore file
/opt/veridiumid/elasticsearch/config/certs directory
After copying them to the same path on all nodes in DC2, make sure to have the correct certificate names in the elasticsearch.yml file:
# Check the certificate names
ls -l /opt/veridiumid/elasticsearch/config/certs
# Example output
total 12
-rw-rw---- 1 veridiumid veridiumid 6615 mar 9 16:30 KeyStore.p12
-rw-rw---- 1 veridiumid veridiumid 1938 mar 9 16:31 TrustStore.p12
# Open the elasticsearch.yml file and make sure to have the correct names listed under the following values
xpack.security.http.ssl:
  enabled: true
  keystore.path: certs/KeyStore.p12
  keystore.type: PKCS12
xpack.security.transport.ssl:
  enabled: true
  verification_mode: certificate
  keystore.path: certs/KeyStore.p12
  keystore.type: PKCS12
  truststore.path: certs/TrustStore.p12
  truststore.type: PKCS12
7.4) Delete current data on DC2
Connect to all persistence nodes on DC2 and run the following command as root to remove the existing data of the ElasticSearch cluster:
rm -rf /opt/veridiumid/elasticsearch/data/*
7.5) Start ElasticSearch on the second datacenter
On all persistence nodes in DC2 start ElasticSearch by using the following command as root:
service ver_elasticsearch start
7.6) Restart ElasticSearch on the first datacenter
On all persistence nodes in DC1 restart ElasticSearch by using the following command as root:
service ver_elasticsearch restart
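To confirm that the nodes from both datacenters have formed a single cluster, the Elasticsearch cluster health can be queried; assuming the eops wrapper accepts GET requests in the same way it is used later in this guide, something like:
eops -x=GET -p=/_cluster/health?pretty
# number_of_nodes should equal the total number of persistence nodes in DC1 + DC2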
8. Restart persistence services in DC2
On each persistence node in DC2 run the following command as root to start the Zookeeper service:
service ver_zookeeper start
9. Sync Zookeeper configuration between the two datacenters:
Connect to a Webapp Node in DC1 and download the zookeeper configuration using the following command:
/opt/veridiumid/migration/bin/migration.sh -d DC1
tar zcvf dc1.tar.gz DC1
Copy the resulting dc1.tar.gz archive to a Webapp node in DC2 and extract it there, so that the DC1 directory is available (for example under /home/veridiumid).
On the node where the DC1 configuration has been uploaded in DC2, run the following command to download the zookeeper configuration just for backup purposes:
/opt/veridiumid/migration/bin/migration.sh -d DC2
Replace the DC1 webapp and persistence IP addresses in the configuration with the corresponding ones from DC2:
cd DC1
NEW_IP="IP_WEB1_DC2";IP="IP_WEB1_DC1"; LIST_OF_FILE=( `grep -FlR "${IP}" *` ); for file in ${LIST_OF_FILE[@]}; do sed -i "s|${IP}|${NEW_IP}|g" ${file}; done
NEW_IP="IP_WEB2_DC2";IP="IP_WEB2_DC1"; LIST_OF_FILE=( `grep -FlR "${IP}" *` ); for file in ${LIST_OF_FILE[@]}; do sed -i "s|${IP}|${NEW_IP}|g" ${file}; done
NEW_IP="IP_PER1_DC2";IP="IP_PER1_DC1"; LIST_OF_FILE=( `grep -FlR "${IP}" *` ); for file in ${LIST_OF_FILE[@]}; do sed -i "s|${IP}|${NEW_IP}|g" ${file}; done
NEW_IP="IP_PER2_DC2";IP="IP_PER2_DC1"; LIST_OF_FILE=( `grep -FlR "${IP}" *` ); for file in ${LIST_OF_FILE[@]}; do sed -i "s|${IP}|${NEW_IP}|g" ${file}; done
NEW_IP="IP_PER3_DC2";IP="IP_PER3_DC1"; LIST_OF_FILE=( `grep -FlR "${IP}" *` ); for file in ${LIST_OF_FILE[@]}; do sed -i "s|${IP}|${NEW_IP}|g" ${file}; done
## Edit config.json and replace the old datacenter name with the name of the second datacenter (it appears in 3 places)
Where:
IP_WEB1_DC1 is the IP address of the first webapp node in DC1
IP_PER1_DC1 is the IP address of the first persistence node in DC1
IP_WEB1_DC2 is the IP address of the first webapp node in DC2
IP_PER1_DC2 is the IP address of the first persistence node in DC2
and so on for the remaining webapp and persistence nodes (IP_WEB2_*, IP_PER2_*, IP_PER3_*)
Upload the new configuration to Zookeeper in DC2:
/opt/veridiumid/migration/bin/migration.sh -u DC1
10. Change Proxy secret in DC2
Connect to a Webapp node in DC1 and obtain the proxy secret value using the following command:
cat /etc/veridiumid/haproxy/haproxy.cfg | grep proxy-secret | rev | cut -d" " -f1 | rev | uniq
Connect to a Webapp node in DC2 and obtain the proxy secret value using the following command:
cat /etc/veridiumid/haproxy/haproxy.cfg | grep proxy-secret | rev | cut -d" " -f1 | rev | uniq
Run the following commands on all Webapp nodes in DC2 as root, where passwordDC1 and passwordDC2 are the proxy secret values obtained above from DC1 and DC2 respectively:
sed -i "s|passwordDC2|passwordDC1|g" /etc/veridiumid/haproxy/haproxy.cfg
sed -i "s|passwordDC2|passwordDC1|g" /opt/veridiumid/tomcat/conf/context.xml
11. Restart services
Run the following commands as root:
# Persistence nodes
systemctl start ver_data_retention
# Webapp nodes
ver_start
## or start the processes individually
systemctl start ver_websecadmin;tail -f /var/log/veridiumid/websecadmin/websecadmin.log
systemctl start ver_tomcat;tail -f /var/log/veridiumid/tomcat/catalina.out
systemctl start ver_notifications
/opt/veridiumid/statistics/bin/statistics-manager.sh start
systemctl start ver_fido
systemctl start ver_selfservice
systemctl start ver_haproxy
systemctl start ver_freeradius
12. In WebsecAdmin in DC2, modify the friendCertificateSuffixByDataCenter variable so that it uses the second DC name (the value was copied over from DC1).
13. Zookeeper configuration changes for Elasticsearch
Connect to the Admin Dashboard in DC1 and navigate to Settings → Advanced → elasticsearch.json and add all hosts (from both DC1 and DC2, separated by a comma) to the list of hosts.
Copy the content of the JSON file and, after saving the file, connect to the Admin Dashboard in DC2 and modify the elasticsearch.json with the values from DC1.
14. Change the number of replicas of the current indices
The following command can be run from any persistence node (preferably one in DC1):
eops -x=PUT -p=/veridium.*/_settings -d='{"index":{"number_of_replicas":3}}'
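The applied setting can be checked afterwards with a GET on the same endpoint (again assuming eops wraps the Elasticsearch REST API as above):
eops -x=GET -p=/veridium.*/_settings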
15. Check the status of the cluster
The following commands can be run on any or all persistence nodes:
## run check_services and verify that all 6 Elasticsearch nodes are in the cluster
/etc/veridiumid/services/check_services.sh
16. Other recommendations:
16.1 Setting up the data retention process to run only on one Data Center
It is recommended that the data retention process run only on the main Data Center (DC1).
To disable it, execute the following command on all the persistence nodes in the secondary Data Center (DC2):
systemctl disable ver_data_retention; service ver_data_retention stop
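To verify that the service will no longer start automatically (assuming systemd manages it, as elsewhere in this guide):
systemctl is-enabled ver_data_retention
systemctl is-active ver_data_retention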
16.2 Setting up maintenance jobs for the persistence nodes
On the persistence nodes in all data centers, it is recommended to schedule the maintenance jobs at different times.
The same configuration must be set on all the persistence nodes in all the Data Centers.
The example below presents the recommended maintenance job setup for a deployment with 3 persistence nodes; a note on installing these crontab entries follows the example.
## node1
0 4 * * 6 bash /opt/veridiumid/backup/cassandra/cassandra_backup.sh -c=/opt/veridiumid/backup/cassandra/cassandra_backup.conf
0 1 * * 5 bash /opt/veridiumid/cassandra/conf/cassandra_maintenance.sh -c /opt/veridiumid/cassandra/conf/maintenance.conf
0 1 * * 6 bash /opt/veridiumid/cassandra/conf/cassandra_maintenance.sh -c /opt/veridiumid/cassandra/conf/maintenance.conf -k
## node2
0 4 * * 5 bash /opt/veridiumid/backup/cassandra/cassandra_backup.sh -c=/opt/veridiumid/backup/cassandra/cassandra_backup.conf
0 1 * * 4 bash /opt/veridiumid/cassandra/conf/cassandra_maintenance.sh -c /opt/veridiumid/cassandra/conf/maintenance.conf
0 1 * * 5 bash /opt/veridiumid/cassandra/conf/cassandra_maintenance.sh -c /opt/veridiumid/cassandra/conf/maintenance.conf -k
## node3
0 4 * * 7 bash /opt/veridiumid/backup/cassandra/cassandra_backup.sh -c=/opt/veridiumid/backup/cassandra/cassandra_backup.conf
0 1 * * 6 bash /opt/veridiumid/cassandra/conf/cassandra_maintenance.sh -c /opt/veridiumid/cassandra/conf/maintenance.conf
0 1 * * 7 bash /opt/veridiumid/cassandra/conf/cassandra_maintenance.sh -c /opt/veridiumid/cassandra/conf/maintenance.conf -k
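These are standard cron entries; assuming they are kept in root's crontab on each persistence node, they can be reviewed and edited with:
crontab -l
crontab -e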
16.3 Verify the certificate parameters
Navigate to VeridiumID Web Admin Interface → Settings → Advanced → config.json
In the JSON document, search for validityDays.
The same values should be present on all of the persistence nodes, in both Data Centers. Below you will find the default values for these parameters.
"validityDays": {
"default": 3650,
"other": 3650,
"desktop": 365,
"phone": 3650,
"friend": 3650,
"admin": 3650
}
16.4 Verify the data-retention policy
Navigate to VeridiumID Web Admin Interface → Settings → Advanced → config.json
In the JSON document, search for dataType. Check that for each dataType found, the “hot” parameter is set to 365 (one year).
In the JSON document, search for archivingPath. Check that the configured path exists on the persistence nodes.
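A quick way to check this on each persistence node (the path below is a placeholder for the value found in config.json):
ls -ld <ARCHIVING_PATH>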
16.5 Regenerating the existing certificates
It is recommended to reissue all the certificates used by VeridiumID once the CDCR is completed.
Navigate to VeridiumID Web Admin Interface → Settings → Certificates → Service Credentials
Under the “System Services“ tab, renew the certificates for the system services (Active Directory Integration, DMZ, Shibboleth, Self Service Portal, Radius Server).
Under the “Others” tab, renew the OPA certificate. This certificate is Data Center specific and a .zip archive is downloaded on renewal. The OPA certificate also needs to be renewed on each of the respective persistence nodes. This can be achieved by using the commands below, after the zip archive has been uploaded to the node. Replace </path/to/archive/> in the example below with the actual path of the archive.
adminFQDN=$(grep "url:" /opt/veridiumid/opa/conf/opa.yaml | awk -F'/' '{print $3}')
/opt/veridiumid/opa/bin/config-opa.sh -c </path/to/archive/> -a $adminFQDN
Navigate to VeridiumID Web Admin Interface → Settings → Certificates → Validity Dashboard
Remove the previously used certificates (ad, dmz, shibboleth, ssp, radius) by going to Devices and searching for the FRIEND device.
Useful scripts:
#DC1 webapp
mkdir -p /tmp/veridium/haproxy
cp /etc/veridiumid/haproxy/client-ca.pem /tmp/veridium/haproxy/
cp /etc/veridiumid/haproxy/server.pem /tmp/veridium/haproxy/
mkdir -p /tmp/veridium/shib
cp /opt/veridiumid/shibboleth-idp/credentials/idp-signing.crt /tmp/veridium/shib/
cp /opt/veridiumid/shibboleth-idp/credentials/idp-signing.key /tmp/veridium/shib/
cp /opt/veridiumid/shibboleth-idp/credentials/sealer.jks /tmp/veridium/shib/
cp /opt/veridiumid/shibboleth-idp/credentials/idp-encryption.crt /tmp/veridium/shib/
cp /opt/veridiumid/shibboleth-idp/credentials/idp-encryption.key /tmp/veridium/shib/
tar -czvf /tmp/configs.tar.gz /tmp/veridium
## copy the file to webapps in DC2
tar -xvf configs.tar.gz
\cp tmp/veridium/haproxy/client-ca.pem /opt/veridiumid/haproxy/conf/
\cp tmp/veridium/haproxy/server.pem /opt/veridiumid/haproxy/conf/
\cp tmp/veridium/shib/sealer.jks /opt/veridiumid/shibboleth-idp/credentials/
\cp tmp/veridium/shib/idp-signing.crt /opt/veridiumid/shibboleth-idp/credentials/
\cp tmp/veridium/shib/idp-signing.key /opt/veridiumid/shibboleth-idp/credentials/
\cp tmp/veridium/shib/idp-encryption.crt /opt/veridiumid/shibboleth-idp/credentials/
\cp tmp/veridium/shib/idp-encryption.key /opt/veridiumid/shibboleth-idp/credentials/
#DC1: persistence
mkdir -p /tmp/veridium/cassandra
cp /opt/veridiumid/cassandra/conf/KeyStore.jks /tmp/veridium/cassandra
cp /opt/veridiumid/cassandra/conf/TrustStore.jks /tmp/veridium/cassandra
cp /opt/veridiumid/cassandra/conf/cqlsh_cassandra_cert.pem /tmp/veridium/cassandra
cp /opt/veridiumid/cassandra/conf/cqlsh_cassandra_key.pem /tmp/veridium/cassandra
mkdir -p /tmp/veridium/elastic
cp /opt/veridiumid/elasticsearch/config/elasticsearch.keystore /tmp/veridium/elastic/
cp /opt/veridiumid/elasticsearch/config/certs/KeyStore.p12 /tmp/veridium/elastic/
cp /opt/veridiumid/elasticsearch/config/certs/TrustStore.p12 /tmp/veridium/elastic/
#mkdir -p /tmp/veridium/kafka
#cp /opt/veridiumid/kafka/config/certs/KeyStore.jks /tmp/veridium/kafka
#cp /opt/veridiumid/kafka/config/certs/TrustStore.jks /tmp/veridium/kafka
tar -czvf /tmp/configs.tar.gz /tmp/veridium
## on persistence
cd /tmp
tar -xvf /tmp/configs.tar.gz
\cp tmp/veridium/cassandra/KeyStore.jks /opt/veridiumid/cassandra/conf/
\cp tmp/veridium/cassandra/TrustStore.jks /opt/veridiumid/cassandra/conf/
\cp tmp/veridium/cassandra/cqlsh_cassandra_cert.pem /opt/veridiumid/cassandra/conf/
\cp tmp/veridium/cassandra/cqlsh_cassandra_key.pem /opt/veridiumid/cassandra/conf/
## in DC1:
cat /opt/veridiumid/cassandra/conf/cassandra.yaml | grep keystore_password | tail -n 1| tr -d '[:space:]'| cut -d":" -f2
cat /opt/veridiumid/cassandra/conf/cassandra.yaml | grep truststore_password | tail -n 1| tr -d '[:space:]'| cut -d":" -f2
## in DC2
sed -i "s|keystore_password.*|keystore_password: DC1_KEY_PASS|g" /opt/veridiumid/cassandra/conf/cassandra.yaml
sed -i "s|truststore_password.*|truststore_password: DC1_TRUST_PASS|g" /opt/veridiumid/cassandra/conf/cassandra.yaml
#DC2
\cp tmp/veridium/elastic/elasticsearch.keystore /opt/veridiumid/elasticsearch/config/
\cp tmp/veridium/elastic/KeyStore.p12 /opt/veridiumid/elasticsearch/config/certs/
\cp tmp/veridium/elastic/TrustStore.p12 /opt/veridiumid/elasticsearch/config/certs/
## kafka (only if Kafka is used):
#cp /tmp/tmp/veridium/kafka/KeyStore.jks /opt/veridiumid/kafka/config/certs/
#cp /tmp/tmp/veridium/kafka/TrustStore.jks /opt/veridiumid/kafka/config/certs/
## run in DC1:
#cat /opt/veridiumid/kafka/config/server.properties | grep keystore.password | cut -d"=" -f2
#cat /opt/veridiumid/kafka/config/server.properties | grep truststore.password | cut -d"=" -f2
## run in DC2:
#sed -i "s|keystore.password.*|keystore.password=DC1_KEY_PASS|g" /opt/veridiumid/kafka/config/server.properties
#sed -i "s|key.password.*|key.password=DC1_KEY_PASS|g" /opt/veridiumid/kafka/config/server.properties
#sed -i "s|truststore.password.*|truststore.password=DC1_TRUST_PASS|g" /opt/veridiumid/kafka/config/server.properties
## zookeeper
/opt/veridiumid/migration/bin/migration.sh -d DC1backup
/opt/veridiumid/migration/bin/migration.sh -d DC2backup
/opt/veridiumid/migration/bin/migration.sh -u DC1
## shibboleth
cp /opt/veridiumid/shibboleth-idp/credentials/idp-signing.crt /tmp/shib/
cp /opt/veridiumid/shibboleth-idp/credentials/sealer.jks /tmp/shib/
cp /tmp/veridium/shib/idp-signing.crt /opt/veridiumid/shibboleth-idp/credentials/
cp /tmp/veridium/shib/sealer.jks /opt/veridiumid/shibboleth-idp/credentials/