
Upgrade VeridiumID from 3.8.0 to 3.8.2

This document provides a step-by-step procedure for upgrading to VeridiumID 3.8.2.

It is recommended to take a snapshot of the servers before the update.

The procedure will provide information regarding both update methods:

  • using a configured YUM repository

  • using local packages

A WEBAPP node is a server where websecadmin is installed; a PERSISTENCE node is a server where Cassandra is installed.
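To tell the two roles apart on an existing installation, a quick check such as the one below can help (a minimal sketch: it relies only on the /opt/veridiumid/cassandra path used throughout this guide; adapt it to your layout).

CODE
## minimal role check: the Cassandra directory referenced throughout this guide exists only on PERSISTENCE nodes
if [ -d /opt/veridiumid/cassandra ]; then echo "PERSISTENCE node"; else echo "likely a WEBAPP node"; fi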

 

Summary:

1) Download packages

2) Pre-requirements

3) Start Update

4) Post update steps

5) Other references

 

1) Download packages

 

Package URL | MD5 | SHA1 | Description
Update Packages Archive RHEL8 | e6494d071cfbeb54038a4823c69e3049 | 0f43c27d8fb7490e336258d7c3ee977c2f422c2a | VeridiumID update packages archive containing all RPMs, for the local update procedure (RHEL8)
Update Packages Archive RHEL9 | 5e7004cd026460d069103cfbe68c639b | 8472bdc73006c6c93c9ee256965ac2cac34e7926 | VeridiumID update packages archive containing all RPMs, for the local update procedure (RHEL9)

Download the package on the server and unzip it.

CODE
## download the package on each server; the command below can be used. Please fill in the proxy IP, username, and password provided by Veridium.
## it is recommended to execute these commands as the user that is going to perform the installation.
## based on the OS version, download the necessary package:
## check the OS version by running:
cat /etc/redhat-release
## RHEL8, Rocky8
export https_proxy=PROXY_IP:PROXY_PORT
wget --user NEXUS_USER --password NEXUS_PASSWORD https://veridium-repo.veridium-dev.com/repository/VeridiumUtils/Veridium-3.8.2-update/veridiumid-update-packages-rhel8-12.2.22.zip
## RHEL9, Rocky9
export https_proxy=PROXY_IP:PROXY_PORT
wget --user NEXUS_USER --password NEXUS_PASSWORD https://veridium-repo.veridium-dev.com/repository/VeridiumUtils/Veridium-3.8.2-update/veridiumid-update-packages-rhel9-12.2.22.zip

Another option is to upload the update package to a local repository, based on the OS the client is using (RHEL8 or RHEL9).
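Before unzipping, the downloaded archive can be verified against the checksums listed in the table above, for example with standard coreutils (RHEL8 archive shown):

CODE
## verify the archive integrity; the output must match the MD5/SHA1 values from the table above
md5sum veridiumid-update-packages-rhel8-12.2.22.zip
## expected: e6494d071cfbeb54038a4823c69e3049
sha1sum veridiumid-update-packages-rhel8-12.2.22.zip
## expected: 0f43c27d8fb7490e336258d7c3ee977c2f422c2a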

2) Pre-requirements

2.1) (MANDATORY) User requirements

We recommend using a user with sudo rights, or root directly.
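To confirm that the chosen user can actually elevate before starting, a quick check is:

CODE
## validates sudo credentials; prompts for a password if needed
sudo -v && echo "sudo privileges OK"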

Python 3 must be installed. To check whether you have a working Python 3 version, run the following command:

CODE
python3 --version

If Python 3 is not installed, please see section 5.1 - How to install Python 3.

3) Start Update

Please execute all commands as root or with a user that has sudo privileges.

3.1) Update using local packages

Execute the commands below on all nodes, first on the WEBAPP nodes and then on the PERSISTENCE nodes. Please update the servers one by one, not in parallel.

CODE
TMP_DEST="/home/veridiumid/update382"
#### please choose the one that applies, based on your OS:
##RHEL8
unzip veridiumid-update-packages-rhel8-12.2.22.zip -d ${TMP_DEST}
##RHEL9
unzip veridiumid-update-packages-rhel9-12.2.22.zip -d ${TMP_DEST}

After this, update the application:

CODE
TMP_DEST="/home/veridiumid/update382"
sudo yum localinstall -y ${TMP_DEST}/packages/veridiumid_update_procedure-12.2.22-20250813.x86_64.rpm
sudo python3 /etc/veridiumid/update-procedure/current/preUpdateSteps.py --version 12.2.22 --rpm-path ${TMP_DEST}/packages/
sudo python3 /etc/veridiumid/update-procedure/current/startUpdate.py --version 12.2.22 --rpm-path ${TMP_DEST}/packages/
sudo bash /etc/veridiumid/scripts/check_services.sh

 

3.2) Update using a YUM repository

Starting with version 3.7.2, Java 17 is used. Please install this package before the update.

CODE
## please check the Java version
java --version
## PLEASE INSTALL JAVA 17 from local repositories, if not already installed; it should be the OpenJDK distribution. Without this step the update will not be possible.
sudo yum install java-17-openjdk -y
## make sure that the old Java version is still the default one; if not, configure it using the following command:
sudo update-alternatives --config java

Check whether the packages are visible in the repository. If they are not, please upload them into your repository, based on the OS you are using.

CODE
## check installed package
sudo yum list installed veridiumid_update_procedure
## check availability of the new package; if this package is not available, please fix the issue with the repository
sudo yum list available veridiumid_update_procedure-12.2.22-20250813

If the package is available, please execute the commands below on all nodes, first on the WEBAPP nodes and then on the PERSISTENCE nodes. Please update the servers one by one, not in parallel.

CODE
sudo yum clean metadata
sudo yum install -y veridiumid_update_procedure-12.2.22
sudo python3 /etc/veridiumid/update-procedure/current/preUpdateSteps.py --version 12.2.22 --use-repo
sudo python3 /etc/veridiumid/update-procedure/current/startUpdate.py --version 12.2.22 --use-repo
sudo bash /etc/veridiumid/scripts/check_services.sh

 

4) Post update steps

4.1) MUST: This procedure migrates all the data (devices, accounts) to Elasticsearch in order to provide better reports.

CODE
## please run this on one PERSISTENCE node only, regardless of the number of datacenters.
sudo bash /opt/veridiumid/migration/bin/migrate_to_elk.sh
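
To get a rough confirmation that the migration populated Elasticsearch, the indices and their document counts can be listed with the eops helper used elsewhere in this guide (the exact index names may differ per installation):

CODE
## list indices and document counts; device and account indices should now be present
eops -x=GET -p="/_cat/indices?v"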

4.2) OPTIONAL: After updating all nodes, please update Cassandra from 4.0.9/4.1.4 to 5.0.2 on the persistence nodes. Please update the servers one by one, not in parallel. This procedure may involve downtime until it has been executed on all nodes. If Cassandra was already updated in a previous version, no update is needed.

If the update is done with local packages:

CODE
##check status - all nodes should be up - the status "UN" should be shown for every node
/opt/veridiumid/cassandra/bin/nodetool describecluster
/opt/veridiumid/cassandra/bin/nodetool status
##run one time, on a single node:
LUCENE_INDEXES=$(/opt/veridiumid/cassandra/bin/cqlsh --cqlshrc=/opt/veridiumid/cassandra/conf/veridiumid_cqlshrc --ssl -e "desc keyspace veridium;" | grep INDEX | grep lucene | sed "s|CREATE CUSTOM INDEX ||g" | cut -d" " -f1)
/opt/veridiumid/cassandra/bin/cqlsh --cqlshrc=/opt/veridiumid/cassandra/conf/veridiumid_cqlshrc --ssl -e "desc keyspace veridium;" | grep "CUSTOM" | grep -v SASI
## run on every node
## if the version is 4.0.9 or 4.1.4, then the update should be executed; the target version is 5.0.2
TMP_DEST="/home/veridiumid/update382"
sudo bash /etc/veridiumid/update-procedure/current/resources/scripts/372/update_cassandra.sh ${TMP_DEST}/packages/
##check status - all nodes should be up again in the cluster - the status "UN" should be shown for every node
sudo /opt/veridiumid/cassandra/bin/nodetool status
sudo /opt/veridiumid/cassandra/bin/nodetool describecluster

If the update is done with the YUM repository:

CODE
##check status - all nodes should be up - the status "UN" should be shown for every node
/opt/veridiumid/cassandra/bin/nodetool describecluster
/opt/veridiumid/cassandra/bin/nodetool status
##run one time, on a single node:
LUCENE_INDEXES=$(/opt/veridiumid/cassandra/bin/cqlsh --cqlshrc=/opt/veridiumid/cassandra/conf/veridiumid_cqlshrc --ssl -e "desc keyspace veridium;" | grep INDEX | grep lucene | sed "s|CREATE CUSTOM INDEX ||g" | cut -d" " -f1)
/opt/veridiumid/cassandra/bin/cqlsh --cqlshrc=/opt/veridiumid/cassandra/conf/veridiumid_cqlshrc --ssl -e "desc keyspace veridium;" | grep "CUSTOM" | grep -v SASI
## run on every node
/opt/veridiumid/cassandra/bin/nodetool describecluster
## if the version is 4.0.9 or 4.1.4, then the update should be executed; the target version is 5.0.2
sudo bash /etc/veridiumid/update-procedure/current/resources/scripts/372/update_cassandra.sh
##check status - all nodes should be up again in the cluster - the status "UN" should be shown for every node
sudo /opt/veridiumid/cassandra/bin/nodetool status
sudo /opt/veridiumid/cassandra/bin/nodetool describecluster
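
To confirm which Cassandra version a node is actually running before and after the script, nodetool reports it directly:

CODE
## prints the running Cassandra release; it should show 5.0.2 after a successful update
/opt/veridiumid/cassandra/bin/nodetool version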

4.3) OPTIONAL: If the error “[es/index] failed: [mapper_parsing_exception] failed to parse field [authenticationDeviceOsPatch] of type [date] in document with id” appears in bops.log, the procedure below should be applied. This might appear only when updating from version 3.6.

CODE
index=veridium.sessions-$(date '+%Y-%m')
/opt/veridiumid/migration/bin/elk_ops.sh --reindex --index-name=${index} --dest-index=${index}-001
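
To verify that the reindex completed, the document counts of the source and destination indices can be compared (a sketch reusing the ${index} variable from the block above, via the eops helper):

CODE
## the two counts should match once the reindex has finished
eops -x=GET -p=/${index}/_count
eops -x=GET -p=/${index}-001/_count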

4.4) OPTIONAL: Run this step only if Kafka is installed (this step needs to be executed only by clients that have the ILP product upgraded to version 2.7.6).

Please run the following procedure on all persistence nodes, in parallel, first in DC1 and after that in DC2. Before switching to DC2, please verify that UBA is working correctly in DC1.

CODE
## check if kafka is installed
systemctl is-enabled uba-kafka
## if it is enabled, please run these commands:
## fix for noexec on /tmp
if [ -d /opt/veridiumid/uba/kafka/tmp ]; then sed -i '16i export _JAVA_OPTIONS="-Djava.io.tmpdir=/opt/veridiumid/uba/kafka/tmp"' /opt/veridiumid/uba/kafka/bin/kafka-topics.sh; fi
if [ -d /opt/veridiumid/uba/kafka/tmp ]; then sed -i '16i export _JAVA_OPTIONS="-Djava.io.tmpdir=/opt/veridiumid/uba/kafka/tmp"' /opt/veridiumid/uba/kafka/bin/kafka-storage.sh; fi
##
sudo bash /etc/veridiumid/update-procedure/current/resources/scripts/372/decoupleKafkaFromZk.sh
## after this, please restart all ILP services on webapp nodes
uba_stop
uba_start
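
After the restart, it is worth checking that the Kafka service came back cleanly (standard systemd check):

CODE
## should print "active" once the service is running again
systemctl is-active uba-kafka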

4.5) OPTIONAL: Create the Zookeeper cluster and update properties to allow it to run in read-only mode (for updates from versions older than 3.7.2).

If you don’t have ILP installed, then you can proceed with this step.

If ILP is installed, then you need to update ILP to version 2.7.6 and then execute step 4.4 before continuing with this step.

Before starting this configuration, make sure that you have connectivity on ports 2888 and 3888 between ALL persistence nodes.

To test the connectivity, run the following commands on DC1:

CODE
nc -zv IPNODEDC2 2888
nc -zv IPNODEDC2 3888

In case of a single datacenter, please run the following procedure on all persistence nodes, sequentially. This also applies to single-node and multi-node installations.

CODE
sudo bash /etc/veridiumid/update-procedure/current/resources/scripts/372/update_zookeeper_configuration.sh

In case of CDCR, run the following procedure to create one big cluster with nodes from both datacenters. The previous command should not be executed in case of CDCR.

CODE
## run this command on the main/active datacenter, on one webapp node. This generates a file DC1.tar.gz
sudo bash /etc/veridiumid/scripts/veridiumid_cdcr.sh -g
## copy the DC1.tar.gz to all nodes - webapp and persistence - in both datacenters.
## run this command on all persistence nodes in the primary datacenter - the script will create a large cluster containing the Zookeeper nodes in both datacenters
ARCH_PATH=/tmp/DC1.tar.gz
sudo bash /etc/veridiumid/scripts/veridiumid_cdcr.sh -z -a ${ARCH_PATH}
## run this command on all persistence nodes in the secondary datacenter - the script will create a large cluster containing the Zookeeper nodes in both datacenters and remove data from the second DC
ARCH_PATH=/tmp/DC1.tar.gz
sudo bash /etc/veridiumid/scripts/veridiumid_cdcr.sh -z -s -a ${ARCH_PATH}
## run this command on a single node in one datacenter - processes the Cassandra connectors to include all the IPs and uploads the modified Zookeeper configuration (can be done just once, in the primary datacenter).
ARCH_PATH=/tmp/DC1.tar.gz
sudo bash /etc/veridiumid/scripts/veridiumid_cdcr.sh -j -a ${ARCH_PATH}
## run this command on all webapp nodes in the secondary datacenter - this changes the salt and password taken from DC1 into DC2.
ARCH_PATH=/tmp/DC1.tar.gz
sudo bash /etc/veridiumid/scripts/veridiumid_cdcr.sh -p -r -a ${ARCH_PATH}
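
Once the cluster has been formed, each node's role in the Zookeeper ensemble can be verified (a sketch; the path is an assumption based on the /opt/veridiumid layout used by the other components):

CODE
## shows whether this node is a leader, follower, or read-only member of the ensemble
## NOTE: the zkServer.sh path is assumed from the /opt/veridiumid layout; adjust if your install differs
/opt/veridiumid/zookeeper/bin/zkServer.sh status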


4.6) OPTIONAL: Update ELK stack (introduced in 3.8)

To update the Elasticsearch stack (Elasticsearch, Kibana, Filebeat), run the following command on ALL NODES, starting with the persistence nodes.

The script first updates the number of replicas from 0 to 1, so it might take around 5-10 minutes until the cluster becomes green. While the replica count is being changed, the cluster might become yellow.

If the update is done using local packages:

CODE
TMP_DEST="/home/veridiumid/update382"
sudo bash /etc/veridiumid/update-procedure/current/resources/scripts/380/update_elk_stack.sh ${TMP_DEST}/packages/
## check if version is now 8.17.3
sudo /opt/veridiumid/elasticsearch/bin/elasticsearch --version
## After all nodes are updated to version 8.17.3, run the following command on all nodes (persistence + webapp) starting with persistence nodes
sudo bash /etc/veridiumid/update-procedure/current/resources/scripts/380/update_elk_stack.sh post
## useful commands if the cluster stays yellow: check the cluster status and shard allocation problems.
#eops -x=GET -p=/_cluster/allocation/explain
##
#eops -x=PUT -p="/veridium.*/_settings" -d="{\"index\":{\"number_of_replicas\":0}}"
## if needed, comment out the execution of check_index_replicas

If the update is done using the YUM repository:

CODE
sudo bash /etc/veridiumid/update-procedure/current/resources/scripts/380/update_elk_stack.sh
## check if version is now 8.17.3
sudo /opt/veridiumid/elasticsearch/bin/elasticsearch --version
## After all nodes are updated to version 8.17.3, run the following command on all nodes (persistence + webapp) starting with persistence nodes
sudo bash /etc/veridiumid/update-procedure/current/resources/scripts/380/update_elk_stack.sh post
## useful commands if the cluster stays yellow: check the cluster status and shard allocation problems.
#eops -x=GET -p=/_cluster/allocation/explain
##
#eops -x=PUT -p="/veridium.*/_settings" -d="{\"index\":{\"number_of_replicas\":0}}"
## if needed, comment out the execution of check_index_replicas
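
While waiting for the cluster to turn green after the replica changes, its health can be polled with the eops helper:

CODE
## poll overall cluster health; "status" should eventually return to "green"
eops -x=GET -p="/_cluster/health?pretty"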

5) Other references

5.1) How to install Python 3

In order to run the update procedure, all nodes must have Python 3 installed.

To check whether the VeridiumID Python 3 package (optional) is present, use the following commands as root:

CODE
## on RHEL7/CentOS7, Python 3.7 should be used
python3 --version
## Python 3.7.8
yum -y install python3.7
## on RHEL8/RHEL9, Python 3.9 should be used
sudo yum -y install python39 python39-pip
## Python 3.9.18

Veridium REPO LINKS:

 

RHEL8 MD5/SHA1 of each package:

Package URL | MD5 | SHA1 | Description
WebsecAdmin | 8609b10f89617b95b7f2e2f6c4ffe748 | f7d7bd5889e4bbb0afe8b37c23292bdbe0fcb5df | VeridiumID Admin Dashboard
Migration | d596f85af19edd0ebf022609b45eebcc | c91e252fb065a51d063625e5a084b2ca1101c779 | VeridiumID migration tool
Websec | b6629cf1c6e8a9fd47066dc27b051184 | 76882502f69096367ec8ae543ec904033dc02a5c | VeridiumID Websec
AdService | ac5d3609e08a7dfc129aad0a98073a60 | 602ec5128b25773f6a990627a01b2936f9231770 | VeridiumID Directory Service component
DMZ | fd5da97d1cb3e391ea41aa773cebd01b | a40b702840b8eb74c41a675fafc5e5bef69beb2a | VeridiumID DMZ service
Fido | 8546cca4d5dc887126b2bfbf4af7e3d7 | e374e4eff911cf0613829c400e9e1dbb9295e077 | VeridiumID Fido service
OPA | 15cd10330212cf2e63df5b33340fd068 | 9c492e109c1d33fd894e1be60e6481d07e0cb8c8 | VeridiumID Open Policy Agent
Elasticsearch | d098b7305f6645d7d0f5c09499b5a006 | adb016c08a3c5f45c91258ee6cf58cf209b0f5fd | VeridiumID Elasticsearch
Kibana | 92771fa0ef77853050818c404f2c2efe | dabc0e691b33c03528bf5a20d6dabd90c33afc7a | VeridiumID Kibana
Zookeeper | 4dc0ea6c8ad45d008716afc69e1c4420 | 5d64e2172ac0c04014fe179b971fb840fbb60d63 | VeridiumID Zookeeper
Cassandra | d8a48cbc602bd479a158ef4712772375 | 85efd76ea8c16a7f6a2f04d2bc4b76841b0af487 | VeridiumID Cassandra
Haproxy | c2de31c1c7fa35d5d2d67c4dc301e981 | 3ff1fa548deb44136c773afb5e660530e85892d8 | VeridiumID Haproxy
SelfServicePortal | 577c648cc9179d3bcb03dc864b08b49d | de157244f0f28f14d2c234ead154556bf7a6e3d1 | VeridiumID Self Service Portal
Shibboleth | 7298a8a8d581e48de333fbfb03721989 | f4463f50ab3e21a506ced63acb0dc6ebe7d2cea2 | VeridiumID Shibboleth Identity Provider
Tomcat | 8a2d2802e2420f3ad31c4abd37ec3e8b | 36b78fa1aed7a243a3c0767ed83c2abbdb7812ac | VeridiumID Tomcat
Setupagent | 76fc1d3ca4680496068ba9f0e8d58e10 | d73e60737300bd09f6802be74a34f5b6dbff7103 | VeridiumID Setupagent
Freeradius | 6720c0400bea2f861a83d7dc5a832f16 | 34b27375327da63be64d2ede3115f811c9208b45 | VeridiumID FreeRadius
4F | 80c95368e08c8b9a265519a87f0d8a2f | 9cb3da85c85613dbfe453a69ff74eeda53e42e5c | VeridiumID 4F biometric library
VFace | b6a426fa388dfe4c897581e2fec037a1 | 5ca53f08d6bed6209090b382aff32079b8c23f95 | VeridiumID VFace biometric library
Update procedure RPM | 367617d5a42a02f59f2c91c12773c6ea | 2053e6d52f02863323c1bbfb2c6645ed81b0a3ca | Update scripts

RHEL9 MD5/SHA1 of each package:

Package URL | MD5 | SHA1 | Description
WebsecAdmin | 67ac0cb821be70cb49f15c695b2b20c6 | d7443ec804bcd772c02d98c63cf24aacdb089c44 | VeridiumID Admin Dashboard
Migration | aab02a9c0d9d8569c1952bb32cb8b67d | 5a16731f0c78bb9e6f5fae1b1abe67e9bf6ea088 | VeridiumID migration tool
Websec | 0b08fb49fe5a82ba14893535d7bb494a | dc49c7fe79bda4effe07c14a50fe4ee0e3b901ed | VeridiumID Websec
AdService | d0a6d83b86a9ec03a47284b3613e2f3e | 5f19e5c7bbf8bb61dacb1416c7bcb5804913d6ca | VeridiumID Directory Service component
DMZ | 9d32f85f7e52b235ae8fb012676b7bf5 | a5895980290842f34c6d5e3eee5a21b9939398dd | VeridiumID DMZ service
Fido | 11a88ae75bfacc875a71772c55e5fbc8 | 4fd5cb0dc4e262e25b66cb1ff4018b5e667f86a1 | VeridiumID Fido service
OPA | 6d1ec31c5753f54a25b78e4b5cb94c22 | 4ba892138c28f8651826bb99f8b074f1e466db54 | VeridiumID Open Policy Agent
Elasticsearch | 6a0e969560fbbac6786f7e82379683b9 | 2eead16377c5f98b96b06092b8842b4cf12b8a75 | VeridiumID Elasticsearch
Kibana | 9dd1d31a91ab62943998dcd0f26fe1fd | 75022fd7a3fe301635885e6437b343ec592338c8 | VeridiumID Kibana
Zookeeper | 95ac9be1b01abbbd713b02d380134073 | 0929f6a36eefcd659b14fb9aba7a803f128f26e4 | VeridiumID Zookeeper
Cassandra | 074983efaa18e333dcb57aab7ccc38a7 | 1572098293d7e6a71000298686eb60005c0a3d23 | VeridiumID Cassandra
Haproxy | 2b9b521a2b6cfe151a4cc321613921de | 7fb09490aa220ed80156d49815512415b85003a5 | VeridiumID Haproxy
SelfServicePortal | 971bfa4ee5b1f0d2200883900f9cffb1 | f013363a501802e98680357ba60cdb70ee1049da | VeridiumID Self Service Portal
Shibboleth | 2457a92a2d4b1db9b2246c508b7275e9 | 2ba4a7b32f497fea994ae6ee3f9251cb291c0edb | VeridiumID Shibboleth Identity Provider
Tomcat | 7e6b4893a75ab2e46554c55f83591301 | db5afcdf6742691f4277a6518b1b4226cabc3aef | VeridiumID Tomcat
Setupagent | 61bb5df33d16836b467ef8e511820306 | fd479e4720b6205d082e58f008a3065665a428e4 | VeridiumID Setupagent
Freeradius | 43ea10767c368fa1810e4598d3d6c55d | fe649a09f4b8cffc705c965b3a37cf5852534789 | VeridiumID FreeRadius
4F | 2e5c6089204c9937220ac433ee043335 | 58dfe2c9e684ee9e52603b6f3b9974c902327121 | VeridiumID 4F biometric library
VFace | b52aff727b1ab384ae4bca9357cd0bd6 | 0d91d43c2782543898ac2c472e3b51d630870e72 | VeridiumID VFace biometric library
Update procedure RPM | 0d722c39fba4ae6b2b477c930d80cc5d | 0dfd32738efd2863add6cffd5301c9519cf5aa51 | Update scripts

 
