
Upgrade VeridiumID from 3.7.1/3.8.0 to v3.8.1

This document provides a step-by-step procedure for upgrading to VeridiumID 3.8.1.

It is recommended to take a snapshot of the servers before the update.

The procedure provides information for both update methods:

  • using a configured YUM repository

  • using local packages

The update involves a small service interruption, because the old version is not fully compatible with the new database schema version.

After the Cassandra migration (on the first node), the QR code will not be displayed on the WEBAPP nodes that have not been updated yet. To minimize the impact, after the Cassandra migration, restart all applications on each not-yet-updated server by running: systemctl restart ver_tomcat, systemctl restart ver_websecadmin, systemctl restart ver_selfservice (see the snippet below).
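
For convenience, the three restart commands mentioned above, grouped into one snippet (sudo is assumed, matching the rest of this guide):

CODE
## run on each WEBAPP node that has not been updated yet, right after the Cassandra migration on the first node
sudo systemctl restart ver_tomcat
sudo systemctl restart ver_websecadmin
sudo systemctl restart ver_selfservice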

A WEBAPP node is a server where websecadmin is installed; a PERSISTENCE node is a server where Cassandra is installed.
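
If it is not obvious which role a server has, a quick check can be used. This is a sketch: it only assumes the service name and installation path already used elsewhere in this guide (ver_websecadmin, /opt/veridiumid/cassandra).

CODE
## WEBAPP node: the websecadmin service unit is present
systemctl list-unit-files | grep -q ver_websecadmin && echo "WEBAPP node"
## PERSISTENCE node: the Cassandra installation directory is present
[ -d /opt/veridiumid/cassandra ] && echo "PERSISTENCE node"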

 

Summary:

1) Download packages

2) Pre-requirements

3) Start Update

4) Post update steps

5) Other references

 

1) Download packages

 

Package URL | MD5 | SHA1 | Description
Update Packages Archive RHEL8 | ef2eccb6e4957768f5d0606533bcef25 | fd8e689eb977506c9b0aef8370595d2fb1061165 | VeridiumID Update packages archive containing all RPMs, for local update procedure RHEL8
Update Packages Archive RHEL9 | 5ee02a14c5528b8082762c7ac73484d1 | de761826e4fffc164340d7401064df2e2f3cf58c | VeridiumID Update packages archive containing all RPMs, for local update procedure RHEL9

Download the package on the server and unzip it.

CODE
## download the package on each server; the command below can be used. Please fill in the proxy IP/port and the username and password provided by Veridium.
## it is recommended to execute these commands with the user that is going to do the installation.
## based on the OS version, download the necessary package:
## check the OS version by running:
cat /etc/redhat-release
## RHEL8, Rocky8
export https_proxy=PROXY_IP:PROXY_PORT
wget --user NEXUS_USER --password NEXUS_PASSWORD https://veridium-repo.veridium-dev.com/repository/VeridiumUtils/Veridium-3.8.1-update/veridiumid-update-packages-rhel8-12.1.42.zip
## RHEL9, Rocky9
export https_proxy=PROXY_IP:PROXY_PORT
wget --user NEXUS_USER --password NEXUS_PASSWORD https://veridium-repo.veridium-dev.com/repository/VeridiumUtils/Veridium-3.8.1-update/veridiumid-update-packages-rhel9-12.1.42.zip
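
Optionally, verify the integrity of the downloaded archive against the MD5/SHA1 values listed in the table above, for example:

CODE
## RHEL8 archive shown; use the rhel9 file name if that is the one downloaded
md5sum veridiumid-update-packages-rhel8-12.1.42.zip
sha1sum veridiumid-update-packages-rhel8-12.1.42.zip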

Another option is to upload the update package to a local repository, based on the OS the client is using (RHEL8 or RHEL9).

2) Pre-requirements

2.1) (MANDATORY) User requirements

We recommend using a user with sudo rights, or root directly.
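
A quick way to confirm that the chosen user actually has working sudo rights before starting:

CODE
## should prompt for the password (if required) and then list the allowed sudo commands
sudo -v && sudo -l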

Python 3 must be installed. To check if you have a working Python 3 version, run the following command:

CODE
python3 --version

If Python 3 is not installed, please see section 5.1 - How to install Python 3.

3) Start Update

Please execute all commands as root or with a user that has sudo privileges.

3.1) Update using local packages

Execute the below commands on all nodes, first on the WEBAPP nodes and later on the PERSISTENCE nodes. Please execute the update server by server, not in parallel.

CODE
TMP_DEST="/home/veridiumid/update381"
#### please choose the one that applies, based on your OS:
##RHEL8
unzip veridiumid-update-packages-rhel8-12.1.42.zip -d ${TMP_DEST}
##RHEL9
unzip veridiumid-update-packages-rhel9-12.1.42.zip -d ${TMP_DEST}

After this, update the application:

CODE
TMP_DEST="/home/veridiumid/update381"
sudo yum localinstall -y ${TMP_DEST}/packages/veridiumid_update_procedure-12.1.42-20250708.x86_64.rpm
sudo python3 /etc/veridiumid/update-procedure/current/preUpdateSteps.py --version 12.1.42 --rpm-path ${TMP_DEST}/packages/
sudo python3 /etc/veridiumid/update-procedure/current/startUpdate.py --version 12.1.42 --rpm-path ${TMP_DEST}/packages/
sudo bash /etc/veridiumid/scripts/check_services.sh

 

3.2) Update using a YUM repository

Starting with version 3.7.2, Java 17 is used. Please install this package before the update.

CODE
## please check JAVA version
java --version
## PLEASE INSTALL JAVA 17 from local repositories, if not already installed; it should be the OpenJDK distribution. Without this step the update will not be possible.
sudo yum install java-17-openjdk -y
## Make sure that the old Java version is still the default one; if not, configure it using the following command:
sudo update-alternatives --config java
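
To confirm that OpenJDK 17 is installed while the previous Java version remains the default, the following checks can be used (a sketch; the package name matches the install command above):

CODE
## the java-17-openjdk package should be reported as installed
rpm -q java-17-openjdk
## the default java binary should still report the previous version
java --version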

Check if packages are visible in the repository. If the packages are not visible, please upload them into your repository, based on the OS you are using.

CODE
## check installed package
sudo yum list installed veridiumid_update_procedure
## check availability of the new package; if this package is not available, please fix the issue with the repository
sudo yum list available veridiumid_update_procedure-12.1.42-20250708

If the package is available, please execute the below commands on all nodes, first on the WEBAPP nodes and later on the PERSISTENCE nodes. Please execute the update server by server, not in parallel.

CODE
sudo yum clean metadata
sudo yum install -y veridiumid_update_procedure-12.1.42
sudo python3 /etc/veridiumid/update-procedure/current/preUpdateSteps.py --version 12.1.42 --use-repo
sudo python3 /etc/veridiumid/update-procedure/current/startUpdate.py --version 12.1.42 --use-repo
sudo bash /etc/veridiumid/scripts/check_services.sh

 

4) Post update steps

4.1) MUST: This procedure migrates all the data (devices, accounts) to Elasticsearch in order to have better reports.

CODE
## please run it on one PERSISTENCE node only, regardless of how many datacenters there are.
sudo sed -i "s|if \[\[ \"\${prev_version}\" != \"11.1\" \]\] \&\& \[\[ \"\${prev_version}\" != \"11.2\" \]\]|if [[ \"\${prev_version}\" != \"11.1\" ]] \&\& [[ \"\${prev_version}\" != \"11.2\" ]] \&\& [[ \"\${prev_version}\" != \"12.0\" ]]|g" /opt/veridiumid/migration/bin/migrate_to_elk.sh
sudo bash /opt/veridiumid/migration/bin/migrate_to_elk.sh
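
To get a rough confirmation that the migration populated Elasticsearch, the index list can be inspected with the eops helper used elsewhere in this guide (a sketch; the exact index names created for devices and accounts are not listed here):

CODE
## list the veridium.* indices together with their health, document counts and sizes
eops -x=GET -p="/_cat/indices/veridium.*"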

4.2) OPTIONAL: After updating all nodes, please update Cassandra from 4.0.9/4.1.4 to 5.0.2 on the persistence nodes. Please execute the update server by server, not in parallel. This procedure might cause downtime until it has been executed on all nodes. If Cassandra was already updated in a previous version, then no update is needed.

If the update is done with local packages:

CODE
## check status - all nodes should be up - the status "UN" should be shown for every node
/opt/veridiumid/cassandra/bin/nodetool describecluster
/opt/veridiumid/cassandra/bin/nodetool status

## run one time, on a single node:
LUCENE_INDEXES=$(/opt/veridiumid/cassandra/bin/cqlsh --cqlshrc=/opt/veridiumid/cassandra/conf/veridiumid_cqlshrc --ssl -e "desc keyspace veridium;" | grep INDEX | grep lucene | sed "s|CREATE CUSTOM INDEX ||g" | cut -d" " -f1)
/opt/veridiumid/cassandra/bin/cqlsh --cqlshrc=/opt/veridiumid/cassandra/conf/veridiumid_cqlshrc --ssl -e "desc keyspace veridium;" | grep "CUSTOM" | grep -v SASI

## run on every node
## if the version is 4.0.9 or 4.1.4, then the update should be executed; the target version is 5.0.2
TMP_DEST="/home/veridiumid/update381"
sudo bash /etc/veridiumid/update-procedure/current/resources/scripts/372/update_cassandra.sh ${TMP_DEST}/packages/
## check status - all nodes should be up again in the cluster - the status "UN" should be shown for every node
sudo /opt/veridiumid/cassandra/bin/nodetool status
sudo /opt/veridiumid/cassandra/bin/nodetool describecluster

If the update is done with the YUM repository:

CODE
## check status - all nodes should be up - the status "UN" should be shown for every node
/opt/veridiumid/cassandra/bin/nodetool describecluster
/opt/veridiumid/cassandra/bin/nodetool status

## run one time, on a single node:
LUCENE_INDEXES=$(/opt/veridiumid/cassandra/bin/cqlsh --cqlshrc=/opt/veridiumid/cassandra/conf/veridiumid_cqlshrc --ssl -e "desc keyspace veridium;" | grep INDEX | grep lucene | sed "s|CREATE CUSTOM INDEX ||g" | cut -d" " -f1)
/opt/veridiumid/cassandra/bin/cqlsh --cqlshrc=/opt/veridiumid/cassandra/conf/veridiumid_cqlshrc --ssl -e "desc keyspace veridium;" | grep "CUSTOM" | grep -v SASI

## run on every node
/opt/veridiumid/cassandra/bin/nodetool describecluster
## if the version is 4.0.9 or 4.1.4, then the update should be executed; the target version is 5.0.2
sudo bash /etc/veridiumid/update-procedure/current/resources/scripts/372/update_cassandra.sh
## check status - all nodes should be up again in the cluster - the status "UN" should be shown for every node
sudo /opt/veridiumid/cassandra/bin/nodetool status
sudo /opt/veridiumid/cassandra/bin/nodetool describecluster

4.3) OPTIONAL: If the error "Error message: [es/index] failed: [mapper_parsing_exception] failed to parse field [authenticationDeviceOsPatch] of type [date] in document with id" appears in bops.log, the below procedure should be applied. This might appear only when updating from version 3.6.

CODE
index=veridium.sessions-$(date '+%Y-%m')
/opt/veridiumid/migration/bin/elk_ops.sh --reindex --index-name=${index} --dest-index=${index}-001
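
After the reindex completes, the destination index can be checked with the eops helper used elsewhere in this guide (a sketch, reusing the naming pattern from the commands above):

CODE
## both the original monthly index and its -001 destination should be listed; compare the document counts
eops -x=GET -p="/_cat/indices/veridium.sessions*"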

4.4) OPTIONAL: Run this step only if KAFKA is installed (this step needs to be executed only by clients that have the ILP product upgraded to version 2.7.6).

Please run the following procedure on all persistence nodes, in parallel, first in DC1 and after that in DC2. Before switching to the second datacenter (DC2), please verify that UBA is working correctly in the first DC.

CODE
## check if kafka is installed
systemctl is-enabled uba-kafka
## if it is enabled, please run these commands:
## fix for noexec for tmp
if [ -d /opt/veridiumid/uba/kafka/tmp ]; then sed -i '16i export _JAVA_OPTIONS="-Djava.io.tmpdir=/opt/veridiumid/uba/kafka/tmp"' /opt/veridiumid/uba/kafka/bin/kafka-topics.sh; fi
if [ -d /opt/veridiumid/uba/kafka/tmp ]; then sed -i '16i export _JAVA_OPTIONS="-Djava.io.tmpdir=/opt/veridiumid/uba/kafka/tmp"' /opt/veridiumid/uba/kafka/bin/kafka-storage.sh; fi
##
sudo bash /etc/veridiumid/update-procedure/current/resources/scripts/372/decoupleKafkaFromZk.sh
## after this, please restart all ILP services on webapp nodes
uba_stop
uba_start
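
After the restart, a quick status check on the Kafka service on the persistence nodes (a sketch; it reuses the uba-kafka service name checked at the beginning of this step):

CODE
## the service should be active after the procedure
systemctl is-active uba-kafka
systemctl status uba-kafka --no-pager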

4.5) OPTIONAL: create zookeeper cluster and update properties to allow it to run in read only mode (for updates older than 3.7.2)

If you don’t have ILP installed, then you can proceed with this step.

If ILP is installed, then you need to update ILP to version 2.7.6 and then execute step 4.4 before continuing with this step.

Before starting this configuration, make sure that you have connectivity on ports 2888 and 3888 between ALL persistence nodes.

To test the connectivity, run the following commands on DC1:

CODE
nc -zv IPNODEDC2 2888
nc -zv IPNODEDC2 3888

In case of a single datacenter, please run the following procedure on all persistence nodes, sequentially. This also applies to single-node and multi-node installations.

CODE
sudo bash /etc/veridiumid/update-procedure/current/resources/scripts/372/update_zookeeper_configuration.sh
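
After the script finishes (and likewise after the CDCR procedure below), the Zookeeper ensemble state can be checked on each persistence node. This is a sketch: the zkServer.sh path is an assumption based on the layout of the other VeridiumID components.

CODE
## each node should report Mode: leader or Mode: follower (Mode: standalone for a single-node installation)
sudo /opt/veridiumid/zookeeper/bin/zkServer.sh status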

In case of CDCR, run the following procedure to create one big cluster with nodes from both datacenters. The previous command should not be executed in case of CDCR.

CODE
## run this command on one webapp node in the main/active datacenter. This generates a file DC1.tar.gz
sudo bash /etc/veridiumid/scripts/veridiumid_cdcr.sh -g
## copy the DC1.tar.gz to all nodes - webapp and persistence - in both datacenters.
## run this command on all persistence nodes in the primary datacenter - the script will create a large cluster containing the Zookeeper nodes in both datacenters
ARCH_PATH=/tmp/DC1.tar.gz
sudo bash /etc/veridiumid/scripts/veridiumid_cdcr.sh -z -a ${ARCH_PATH}
## run this command on all persistence nodes in the secondary datacenter - the script will create a large cluster containing the Zookeeper nodes in both datacenters and remove the data from the second DC
ARCH_PATH=/tmp/DC1.tar.gz
sudo bash /etc/veridiumid/scripts/veridiumid_cdcr.sh -z -s -a ${ARCH_PATH}
## run this command on a single node in one datacenter - process the Cassandra connectors so they contain all the IPs, and upload the modified Zookeeper configuration (can be done just once, in the primary datacenter)
ARCH_PATH=/tmp/DC1.tar.gz
sudo bash /etc/veridiumid/scripts/veridiumid_cdcr.sh -j -a ${ARCH_PATH}
## run this command on all webapp nodes in the secondary datacenter - this changes the salt and password in DC2 to the ones taken from DC1
ARCH_PATH=/tmp/DC1.tar.gz
sudo bash /etc/veridiumid/scripts/veridiumid_cdcr.sh -p -r -a ${ARCH_PATH}

4.6) OPTIONAL: Update ELK stack (introduced in 3.8)

To update the Elasticsearch stack (Elasticsearch, Kibana, Filebeat), run the following command on ALL NODES, starting with the persistence nodes.

The script first updates the number of shards from 0 to 1, so it might take around 5-10 minutes until the cluster becomes green. While the number of shards is being changed, the cluster might become yellow.
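
While the changes are being applied, the cluster health can be polled with the eops helper referenced in the command blocks below:

CODE
## the status field should eventually return to green; yellow is expected while the settings are being changed
eops -x=GET -p="/_cluster/health"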

If the update is done using local packages:

CODE
TMP_DEST="/home/veridiumid/update381"
sudo bash /etc/veridiumid/update-procedure/current/resources/scripts/380/update_elk_stack.sh ${TMP_DEST}/packages/
## check if version is now 8.17.3
sudo /opt/veridiumid/elasticsearch/bin/elasticsearch --version

## After all nodes are updated to version 8.17.3, run the following command on all nodes (persistence + webapp) starting with persistence nodes
sudo bash /etc/veridiumid/update-procedure/current/resources/scripts/380/update_elk_stack.sh post

## useful commands: if the cluster is yellow, check the cluster status and shard allocation problems
#eops -x=GET -p=/_cluster/allocation/explain
## if needed, set the number of replicas back to 0 for the veridium.* indices
#eops -x=PUT -p="/veridium.*/_settings" -d="{\"index\":{\"number_of_replicas\":0}}"
## and comment out the execution of check_index_replicas

If the update is done using the YUM repository:

CODE
sudo bash /etc/veridiumid/update-procedure/current/resources/scripts/380/update_elk_stack.sh
## check if version is now 8.17.3
sudo /opt/veridiumid/elasticsearch/bin/elasticsearch --version

## After all nodes are updated to version 8.17.3, run the following command on all nodes (persistence + webapp) starting with persistence nodes
sudo bash /etc/veridiumid/update-procedure/current/resources/scripts/380/update_elk_stack.sh post

## useful commands: if the cluster is yellow, check the cluster status and shard allocation problems
#eops -x=GET -p=/_cluster/allocation/explain
## if needed, set the number of replicas back to 0 for the veridium.* indices
#eops -x=PUT -p="/veridium.*/_settings" -d="{\"index\":{\"number_of_replicas\":0}}"
## and comment out the execution of check_index_replicas

5) Other references.

5.1) How to install Python 3

In order to run the update procedure, all nodes must have Python 3 installed.

To check whether Python 3 is present (the VeridiumID Python 3 package is optional) and to install it if needed, use the following commands as root:

CODE
## on RHEL7/CentOS7, Python 3.7 should be used
python3 --version
## example output: Python 3.7.8
yum -y install python3.7
## on RHEL8/RHEL9, Python 3.9 should be used
sudo yum -y install python39 python39-pip
## example output: Python 3.9.18

 

Veridium REPO LINKS:

 

RHEL8 MD5/SHA1 checksums of each package:

Package URL | MD5 | SHA1 | Description
WebsecAdmin | ecb2816af46049d561089cb3a8264210 | 24de5123b7bffb14a05bab376c871abd67ac3ed0 | VeridiumID Admin Dashboard
Migration | 20869cf4ced54770af91e2ac6d45697a | d20cf5dc7740747ac8529d7af7afb5e1302f2004 | VeridiumID migration tool
Websec | e63effb79f0ee5e49709f106c8de9f4f | 4f6834f692309cb2e292ea1386e500a76c186cf0 | VeridiumID Websec
AdService | c0bd7850f2f21b219816b1f9936df77e | 9a24a541199a48f3404984933453e745b0642c62 | VeridiumID Directory Service component
DMZ | 63e173df7f24b70629e1a0749def2ad2 | b1bb1324a34bb4e5226d7b45b6f7a1522e9bc926 | VeridiumID DMZ service
Fido | 0517c672ba4a50c1ca7141b4c2012288 | c2e9f3e536bbe6307b465057bdd55f51e26622fd | VeridiumID Fido service
OPA | e00e42e51e93deaefc05d5b418a9b07d | 80bbcc8ed5b615e0f9ef1881bba6d0f3f9539c27 | VeridiumID Open Policy Agent
Elasticsearch | 7f74c5d8c9a375f8cfa5c52f6ea9e166 | b16c3c0380a48efefdf3e7ff80f8a037a72661d5 | VeridiumID Elasticsearch
Kibana | 742234fb01d837fd9f9c98b13fccc72a | 273a83d1e5095e9afb77f4d2328ba13bc188f5ac | VeridiumID Kibana
Zookeeper | a105222758541446ef2dafb0eb3b9391 | db38acc37344081e8ef36f33f872196b48dcc1f0 | VeridiumID Zookeeper
Cassandra | 6cf517f32036e37638438043dc29e561 | 2025e85d67e188672968e6958c1404d45999b65e | VeridiumID Cassandra
Haproxy | ec66449663439891afe4a29e1e5ece0d | 3fb0621696ae65b3309bf76cdeec0ae673c99942 | VeridiumID Haproxy
SelfServicePortal | b5eea6d68956103a2108553dc548a8e0 | 6ed46178a1455b7b901e1d9cd280f22fc2a92e67 | VeridiumID Self Service Portal
Shibboleth | 3805e570a59c12cd6080af45c22b480e | 8bee2628c6047d9fc365e00558705f11d1d4a7ac | VeridiumID Shibboleth Identity Provider
Tomcat | f4d5c2566ca3baa306df19fa190ab801 | da80b5eb5f1318d3f98be7754ced410d24a485a4 | VeridiumID Tomcat
Freeradius | e717ebf3353ea79eeada8ab410769805 | 40f6755ea8179ee85fe046135747f1b0740ee9c0 | VeridiumID FreeRadius
4F | 96b5a6d115395e33cda7222079a737fe | 387526709b54e13e366e54732511511b0144c9d1 | VeridiumID 4F biometric library
VFace | 816c79b0582c158dc2c9de4ca0871318 | 5c6fbb18077d25a89f20cc7cea29518b976ea2c0 | VeridiumID VFace biometric library
Setupagent | 08149d7c08c1a103a0fe42269025bc53 | b12c0f486169ebba5e788c079effded99b9fe3b6 | VeridiumID Setup Agent
Update procedure RPM | 5300dd15d2cb28ad27bc5020715a017d | 81c60d2fb3d9db61e82fddd3d41a645078a434e4 | Update scripts

RHEL9 MD5/SHA1 checksums of each package:

Package URL | MD5 | SHA1 | Description
WebsecAdmin | cb2ae24cf810b8bc57b90136d6dd487b | 451b45833b256cbaf3c2ddae262b7cb7d9158be4 | VeridiumID Admin Dashboard
Migration | bda26bde231ae631fe23bbbd2837b754 | 92a2524600817546d67866b0a72dcb7b75bdc2cf | VeridiumID migration tool
Websec | e3d6f1cb7861540b80750bf62cd367c2 | a2846c6e691d9ecc6b234cd6bf200ad54aef6460 | VeridiumID Websec
AdService | ef85f77e20b4fb1f66cdf1a96eef6bb3 | 18b27c9e4fa230d4d0382a428d7b7d0f324be18b | VeridiumID Directory Service component
DMZ | ba8dd30052cf8e365ddde31785a6af6b | 8b9d79cd0043e11c817de1f9a5e2146a1c201307 | VeridiumID DMZ service
Fido | 83cec697debb9a9af2b13fa08a4dbd35 | 5858ff2944a28c9e5ed2fc65428edc3d9673f852 | VeridiumID Fido service
OPA | f3b14017a96ad05767765aaf35b5891a | 8b8999a414c5ed3cad3b7e23b773a0752683f4c4 | VeridiumID Open Policy Agent
Elasticsearch | 5eee82c56f67e13d43e3b7b74bc04b53 | dfc0cad5b79b75ee8e04b8f483de275ac35aa27b | VeridiumID Elasticsearch
Kibana | 210388a7f9be078077b457ffa007fc59 | d0c638c2b3a31e2388e7b5a8c81d00f5fa3e58d2 | VeridiumID Kibana
Zookeeper | 952d70138a45214d959f2dce71241e6a | 62bdd2b2c5f0ade16671e6d726602cf4bf09c3b4 | VeridiumID Zookeeper
Cassandra | 456b10b2a73a698a811cae0604971037 | fa74e96ab1d3f978446be7851c0814d4d83279a9 | VeridiumID Cassandra
Haproxy | eb014a8002793c815cc88681c0932269 | 25d2157d78edabbab146528dfe22b2ae0c979aaa | VeridiumID Haproxy
SelfServicePortal | e286aa6b04755a22a443b8a3cc3677c1 | a77bb29c7f35f57e832748947590cbcabd1fab96 | VeridiumID Self Service Portal
Shibboleth | b3461246c010fa6f6a549626cb61cf75 | 4244447ce9f9c241fde395ab01e1f90ba7803a18 | VeridiumID Shibboleth Identity Provider
Tomcat | bac589e4384221ecfcf6bdba9f047f8c | cc2de6cbda9371b2a78d9f06e4c95ce8d1f07f69 | VeridiumID Tomcat
Freeradius | 4e8d088bcc92f33c033079564a64d5b9 | 8e3f696bb132d7c9edb4e1e743a85001f43e5c6b | VeridiumID FreeRadius
4F | dbc990438ed0f042ef42b12f0acb5b41 | b9b1b397f64402f7685681ad079b19b58b2e3791 | VeridiumID 4F biometric library
VFace | ceca0885038af2b5a8a795895d6aa4bf | e2d140d3eb59e079ef31f229471b739e81f0ad47 | VeridiumID VFace biometric library
Setupagent | 7dd9f2bc5732657a8273df21e3f47102 | 458635b1774011ebd4bef09dbd26e89c9a77fc62 | VeridiumID Setup Agent
Update procedure RPM | 58b9d29b3ceb6cf3fed996462062cccf | a440c97217af66493e853b816d2d0eec3401ad57 | Update scripts
