
Upgrade VeridiumID from 3.7.x / 3.8.x to v3.8.3

This document provides a step-by-step procedure to upgrade to VeridiumID 3.8.3.

It is recommended to take a snapshot of the servers before the update.

The procedure will provide information regarding both update methods:

  • using a configured YUM repository

  • using local packages

A WEBAPP node is a server where websecadmin is installed; a PERSISTENCE node is a server where Cassandra is installed.
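
If it is not clear which role a server has, a quick check of the installation directories can help; a minimal sketch, assuming a default /opt/veridiumid layout (the websecadmin path in particular is an assumption):

CODE
## PERSISTENCE node: the Cassandra installation directory is present
[ -d /opt/veridiumid/cassandra ] && echo "PERSISTENCE node (Cassandra installed)"
## WEBAPP node: the websecadmin installation directory is present (path assumed for a default install)
[ -d /opt/veridiumid/websecadmin ] && echo "WEBAPP node (websecadmin installed)"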

 

Summary:

1) Download packages

2) Pre-requirements

3) Start Update

4) Post update steps

5) Other references

 

1) Download packages

 


Package URL | MD5 | SHA1 | Description
Update Packages Archive RHEL8 | 2e9072f8e2115c0ce6d6bfcbe738f8e3 | 32465ad06b7e7f8ce227d07defb44ca39d734a35 | VeridiumID Update packages archive containing all RPMs, for local update procedure RHEL8
Update Packages Archive RHEL9 | 4c03508b6dc2245888f2c956797ade1b | 353d598e18d764a6f0a6ab2339251cce52a62600 | VeridiumID Update packages archive containing all RPMs, for local update procedure RHEL9

Download the package on the server and unzip it.

CODE
## download the package on each server; the below commands can be used. Please fill in the proxy IP and the username and password provided by Veridium.
## it is recommended to execute these commands with the user that is going to do the installation.
## based on the OS version, download the necessary package:
## check the OS version by running:
cat /etc/redhat-release
## RHEL8, Rocky8
export https_proxy=PROXY_IP:PROXY_PORT
wget --user NEXUS_USER --password NEXUS_PASSWORD https://veridium-repo.veridium-dev.com/repository/VeridiumUtils/Veridium-3.8.3-update/veridiumid-update-packages-rhel8-12.3.16.zip
## RHEL9, Rocky9
export https_proxy=PROXY_IP:PROXY_PORT
wget --user NEXUS_USER --password NEXUS_PASSWORD https://veridium-repo.veridium-dev.com/repository/VeridiumUtils/Veridium-3.8.3-update/veridiumid-update-packages-rhel9-12.3.16.zip

Another option is to upload the update package to a local repository, based on the OS the client is using (RHEL8 or RHEL9).
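
Optionally, the downloaded archive can be verified against the checksums from the table above before unzipping; a minimal example for the RHEL8 archive (use the RHEL9 values for the RHEL9 archive):

CODE
## verify the downloaded archive (RHEL8 values taken from the table above)
md5sum veridiumid-update-packages-rhel8-12.3.16.zip    ## expected: 2e9072f8e2115c0ce6d6bfcbe738f8e3
sha1sum veridiumid-update-packages-rhel8-12.3.16.zip   ## expected: 32465ad06b7e7f8ce227d07defb44ca39d734a35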

2) Pre-requirements

2.1) (MANDATORY) User requirements

We recommend using a user with sudo rights, or root directly.

Python 3 must be installed. To check that you have a working Python 3 version, run the following command:

CODE
python3 --version
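
To confirm that the current user can elevate privileges before starting, a quick check is:

CODE
## confirm sudo rights for the current user (prompts for the password if required)
sudo -v && echo "sudo privileges OK"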

3) Start Update

Please execute all commands as root or with a user that has sudo privileges.

3.1) Update using local packages

Execute the below commands on all nodes, first on WEBAPP and then on PERSISTENCE nodes. Please update the servers one by one, not in parallel.

CODE
TMP_DEST="/home/veridiumid/update383"
#### please choose the one that apply, based on your OS:
##RHEL8
unzip veridiumid-update-packages-rhel8-12.3.16.zip -d ${TMP_DEST}
##RHEL9
unzip veridiumid-update-packages-rhel9-12.3.16.zip -d ${TMP_DEST}

After this, update the application:

CODE
TMP_DEST="/home/veridiumid/update383"
sudo yum localinstall -y ${TMP_DEST}/packages/veridiumid_update_procedure-12.3.16-20250828.x86_64.rpm
sudo python3 /etc/veridiumid/update-procedure/current/preUpdateSteps.py --version 12.3.16 --rpm-path ${TMP_DEST}/packages/
sudo python3 /etc/veridiumid/update-procedure/current/startUpdate.py --version 12.3.16 --rpm-path ${TMP_DEST}/packages/
sudo bash /etc/veridiumid/scripts/check_services.sh

 

3.2) Update using a YUM repository

Starting with version 3.7.2, Java 17 is used. Please install this package before the update.

CODE
## please check JAVA version
java --version
## PLEASE INSTALL JAVA 17 from local repositories, if not already installed; it should be the OpenJDK distribution. Without this step the update will not be possible.
sudo yum install java-17-openjdk -y
## Make sure that the old Java version is still the default one; if not, configure it using the following command:
sudo update-alternatives --config java

Check if packages are visible in the repository. If the packages are not visible, please upload them into your repository, based on the OS you are using.

CODE
## check installed package
sudo yum list installed veridiumid_update_procedure
## check availability of the new package; if this package is not available, please fix the issue with the repository
sudo yum list available veridiumid_update_procedure-12.3.16-20250828

If the package is available, please execute the below commands on all nodes, first on WEBAPP and then on PERSISTENCE nodes. Please update the servers one by one, not in parallel.

CODE
sudo yum clean metadata
sudo yum install -y veridiumid_update_procedure-12.3.16
sudo python3 /etc/veridiumid/update-procedure/current/preUpdateSteps.py --version 12.3.16 --use-repo
sudo python3 /etc/veridiumid/update-procedure/current/startUpdate.py --version 12.3.16 --use-repo
sudo bash /etc/veridiumid/scripts/check_services.sh
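
Regardless of the update method, the installed version of the update-procedure package can be confirmed afterwards; for example:

CODE
## confirm the installed update-procedure package version
rpm -q veridiumid_update_procedure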

 

4) Post update steps

4.1) MUST: This procedure migrates all the data (devices, accounts) to Elasticsearch in order to provide better reports.

CODE
## please run it on one PERSISTENCE node, regardless of the number of datacenters.
sudo bash /opt/veridiumid/migration/bin/migrate_to_elk.sh

4.2) MUST: After updating all nodes, please update Cassandra from 4.0.9/4.1.4 to 5.0.2 on the persistence nodes. Please update the servers one by one, not in parallel. This procedure may involve downtime until it has been executed on all nodes. If Cassandra was already updated in a previous version, then no update is needed.

If the update is done with local packages:

CODE
##check status - all nodes should be up - the status "UN" should be shown for every node
/opt/veridiumid/cassandra/bin/nodetool describecluster
/opt/veridiumid/cassandra/bin/nodetool status
## if the version is 4.0.9 or 4.1.4, then the update should be executed; the proper version is 5.0.2
TMP_DEST="/home/veridiumid/update383"
sudo bash /etc/veridiumid/update-procedure/current/resources/scripts/372/update_cassandra.sh ${TMP_DEST}/packages/
##check status - all nodes should be up again, in the cluster - the status "UN" should be shown for every node
sudo /opt/veridiumid/cassandra/bin/nodetool status
sudo /opt/veridiumid/cassandra/bin/nodetool describecluster

If the update is done with a YUM repository:

CODE
##check status - all nodes should be up - the status "UN" should be shown for every node
/opt/veridiumid/cassandra/bin/nodetool describecluster
/opt/veridiumid/cassandra/bin/nodetool status
## run on every node
/opt/veridiumid/cassandra/bin/nodetool describecluster
## if the version is 4.0.9 or 4.1.4, then the update should be executed; the proper version is 5.0.2
sudo bash /etc/veridiumid/update-procedure/current/resources/scripts/372/update_cassandra.sh
##check status - all nodes should be up again, in the cluster - the status "UN" should be shown for every node
sudo /opt/veridiumid/cassandra/bin/nodetool status
sudo /opt/veridiumid/cassandra/bin/nodetool describecluster
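
To confirm the Cassandra binary version after the update (expected 5.0.2), nodetool can be queried directly:

CODE
## print the Cassandra release version on this node
sudo /opt/veridiumid/cassandra/bin/nodetool version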

 

4.3) OPTIONAL: update properties to allow running in read-only mode and, for CDCR clients, create a Zookeeper cluster

Before starting this configuration, make sure that you have connectivity on ports 2888 and 3888 between ALL persistence nodes.

To test the connectivity, run the following commands on DC1:

CODE
nc -zv IPNODEDC2 2888
nc -zv IPNODEDC2 3888

In case of a single datacenter, please run the following procedure on all persistence nodes, sequentially. This also applies to single-node and multi-node installations.

CODE
sudo bash /etc/veridiumid/update-procedure/current/resources/scripts/372/update_zookeeper_configuration.sh
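
Afterwards, the Zookeeper service status can be checked on each node; a minimal sketch, assuming Zookeeper is installed under /opt/veridiumid/zookeeper as in a default layout:

CODE
## report whether this Zookeeper instance is running and its mode (leader/follower/standalone); path assumed
sudo /opt/veridiumid/zookeeper/bin/zkServer.sh status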

In case of CDCR, run the following procedure to create one big cluster with nodes from both datacenters. The previous command should not be executed in case of CDCR.

CODE
## run this command on the main/active datacenter, on one webapp node. This generates a file DC1.tar.gz
sudo bash /etc/veridiumid/scripts/veridiumid_cdcr.sh -g
## copy DC1.tar.gz to all nodes - webapp and persistence - in both datacenters.
## run this command on all persistence nodes in the primary datacenter - the script will create a large cluster containing the Zookeeper nodes in both datacenters
ARCH_PATH=/tmp/DC1.tar.gz
sudo bash /etc/veridiumid/scripts/veridiumid_cdcr.sh -z -a ${ARCH_PATH}
## run this command on all persistence nodes in the secondary datacenter - the script will create a large cluster containing the Zookeeper nodes in both datacenters and remove data from the second DC
ARCH_PATH=/tmp/DC1.tar.gz
sudo bash /etc/veridiumid/scripts/veridiumid_cdcr.sh -z -s -a ${ARCH_PATH}
## run this command on a single node in one datacenter - process the Cassandra connectors to include all IPs and upload the modified Zookeeper configuration (can be done just once, in the primary datacenter).
ARCH_PATH=/tmp/DC1.tar.gz
sudo bash /etc/veridiumid/scripts/veridiumid_cdcr.sh -j -a ${ARCH_PATH}
## run this command on all webapp nodes in the secondary datacenter - this changes the salt and password taken from DC1 into DC2.
ARCH_PATH=/tmp/DC1.tar.gz
sudo bash /etc/veridiumid/scripts/veridiumid_cdcr.sh -p -r -a ${ARCH_PATH}

 

4.4) MUST: Update ELK stack (introduced in 3.8)

To update the Elasticsearch stack (Elasticsearch, Kibana, Filebeat), run the following commands on ALL NODES, starting with the persistence nodes, one node at a time, not in parallel.

If the update is done using local packages:

CODE
## run this on all nodes, first persistence then webapp:
TMP_DEST="/home/veridiumid/update383"
sudo bash /etc/veridiumid/update-procedure/current/resources/scripts/380/update_elk_stack.sh ${TMP_DEST}/packages/
## check if version is now 8.17.3
sudo /opt/veridiumid/elasticsearch/bin/elasticsearch --version
## here the elasticsearch cluster might be red
## After all persistence nodes are updated to version 8.17.3, run the following command on all nodes (persistence + webapp), starting with the persistence nodes
sudo bash /etc/veridiumid/update-procedure/current/resources/scripts/380/update_elk_stack.sh post
## check that the Elasticsearch cluster status is green.
##run on one node: 
/opt/veridiumid/migration/bin/elk_ops.sh --update-settings

If the update is done using a YUM repository:

CODE
## run this on all nodes, first persistence then webapp:
sudo bash /etc/veridiumid/update-procedure/current/resources/scripts/380/update_elk_stack.sh
## check if version is now 8.17.3
sudo /opt/veridiumid/elasticsearch/bin/elasticsearch --version
## here the elasticsearch cluster might be red
## After all nodes are updated to version 8.17.3, run the following command on all nodes (persistence + webapp) starting with persistence nodes
sudo bash /etc/veridiumid/update-procedure/current/resources/scripts/380/update_elk_stack.sh post
## check that the Elasticsearch cluster status is green.
##run on one node: 
/opt/veridiumid/migration/bin/elk_ops.sh --update-settings
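
To inspect the cluster status directly, the Elasticsearch health API can also be queried; a minimal sketch, assuming Elasticsearch listens on https://localhost:9200 with security enabled (adjust the endpoint and credentials to your deployment):

CODE
## query cluster health; the status field should report "green"
curl -sk -u "ELASTIC_USER:ELASTIC_PASSWORD" "https://localhost:9200/_cluster/health?pretty"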

Veridium REPO LINKS:


 

RHEL8 MD5/SHA1 of each package:

Package URL | MD5 | SHA1 | Description
WebsecAdmin | b26be51ca48ae04c8d70c169c74eb1cf | 14baaf2a864c3099f5edbfddc3379a76181f6ccc | VeridiumID Admin Dashboard
Migration | f2386d1d3f73c71419e940d09fbb6ae6 | e29959e0c22383fa0c52de9874932f946e053e1f | VeridiumID migration tool
Websec | 688eaf8a5aa836cc243b35842aa366c8 | 8777cd022095bbc543146f3aa6242521d67e0efb | VeridiumID Websec
AdService | 1273e74328ad76abc56f88a494775baa | d90db681641eb796d6ca9508ddb2672828987475 | VeridiumID Directory Service component
DMZ | 03a727c5c8a016016d01d668e64b33ff | 09cbf1620acb4b5e083c32a51f289712b697a452 | VeridiumID DMZ service
Fido | faf55fd60715d83879a08de48e887145 | 8303ab5a5726ce0e003f0d355aa099f85daa395d | VeridiumID Fido service
OPA | 4766d8dc8870f04bb1a882eea1f8863b | a59f12d3eabe43ecd6c47a9fedc646a9b1d3ca4f | VeridiumID Open Policy Agent
Elasticsearch | da55ff345aa2a5257c1c82e5c49d0474 | f467ef6658984498bca6a8b4d973f0621fc7b541 | VeridiumID Elasticsearch
Kibana | 98e9a2abb11bef66943fdde70e32d60a | f1c23f84a5947a7207bdb0913c0578656c35c52f | VeridiumID Kibana
Zookeeper | b0c4103e94597fd4a04db1327c31fa15 | 12fcce731934bd871b62320625e2a3fe0758437e | VeridiumID Zookeeper
Cassandra | e01bf6d0062a18d9c479d58b8b757426 | 637d1e755862f8875ef7f1da9db8b1454dcea045 | VeridiumID Cassandra
Haproxy | 484ab56d012b5cc41933585219db0f7f | fc8e7821cfda50e8c457a887a786597926554024 | VeridiumID Haproxy
SelfServicePortal | 1544128dfd90ddd2c15757750bf8a2e5 | dc335c0abb52e0d8be4bc908071f1842166ef8d5 | VeridiumID Self Service Portal
Shibboleth | 5f5771b05d92275a0a9f08d71d2d9540 | 73781fd4410a54804cf977ab6e72f379ed2f4ee4 | VeridiumID Shibboleth Identity Provider
Tomcat | 433cf1eccf542d4f794b60c40507455c | 4f8e43408940f245e155f301d95a661bb7ec3602 | VeridiumID Tomcat
Setupagent | 33a72e9dce714dfb059bab9072d82b12 | fb8f9f5dac4fc76830fb3bacb3b15efd0aeb035c | VeridiumID Setupagent
Freeradius | 8ae9e178f5e250143d35efa80f4a9af4 | 53ae6a86de015063e8d5db8b76e0268aaacb1b64 | VeridiumID FreeRadius
4F | 84cd2bd62d614d7065160aaad84055b2 | 04880f272fde019843ab083f37cfb7e485071229 | VeridiumID 4F biometric library
VFace | 78b657040ef290a75d191ef8d246d381 | 2c790dde46f3ebd50e7f6923e36bc39f251047a7 | VeridiumID VFace biometric library
Update procedure RPM | e6a1f8074020668b0729a3711e866ed7 | d207fc99ac03abeb85108848d907440b9371a4cd | Update scripts

RHEL9 MD5/SHA1 of each package:

Package URL | MD5 | SHA1 | Description
WebsecAdmin | 1345bf8837e53fe8da63ea4fec04a3d6 | 2525f53091f026c950a93d89f1c57d184b0e3d6f | VeridiumID Admin Dashboard
Migration | f4f1813fea6ba386edb123e28005a406 | c6ba2100545c7af3d97984a601080867cccb9e44 | VeridiumID migration tool
Websec | 1616d9ea0e0072a58cbdeab90148027c | 354e6edc4baabca9126570e615632586e40e91b8 | VeridiumID Websec
AdService | eb90ae0339259318491a1253ed991794 | e2d391428b0633246e0291c6a92d8e006e99b38c | VeridiumID Directory Service component
DMZ | 5d21bde114b07371223f853e41c78775 | f3f1e1486bd29824c60eb6db3fded38469b46033 | VeridiumID DMZ service
Fido | fe9ee8fe6ed5df1f22fed83f89070a6c | 4655f758bc05a80abadce15295f14ba9c84e6291 | VeridiumID Fido service
OPA | f7300c9394322c04fdd2cab961c9241d | 4b6516ede18d433ca98d71f186ba269b66dbc012 | VeridiumID Open Policy Agent
Elasticsearch | 2b2d99422bdc7c0774eeaa411dcf681c | 439aa91ea38a06d5fc1c0ee088f5da2bec843f80 | VeridiumID Elasticsearch
Kibana | 794d7f925ceb2dd1f6513a3bd8734bad | 77d4c6f66beae5fd0ebbea88de7cf93763c60974 | VeridiumID Kibana
Zookeeper | 41fee58b0f1121b29d36328e67d65be8 | f18ed2b72a5b6f6e1ab983bf76faa4efceb35ccb | VeridiumID Zookeeper
Cassandra | 66c8f031e550bbebcb62c7b07b040329 | cd8528af14ee2212ce114961f75da8593300b575 | VeridiumID Cassandra
Haproxy | 70648ecd8382fd16553569c5ef4e2385 | 64ab1fb5b5b12d1cc8c5d860ae16c3ba8c726ef6 | VeridiumID Haproxy
SelfServicePortal | ba4748d5e6e0cc69bcf80395f0f037b4 | 654dcaa57934cad872189c159d858292cb726eb1 | VeridiumID Self Service Portal
Shibboleth | 68ca16b00f4868248813c812d838b0fb | a9aaaa48dbb30f931d264c1ff4c93640a9cf49b9 | VeridiumID Shibboleth Identity Provider
Tomcat | 2ab6a558d4daed57a8f9fe3822fa2975 | 707c6d7b79a716f602d8d7993f68cf42ef6ade1b | VeridiumID Tomcat
Setupagent | 5a772bdf34f3b5c55f34fbc7884e08e3 | 08234bdc40db493e59930c49aa80df12d079549c | VeridiumID Setupagent
Freeradius | 863982f332199295a3cc0b9b61c1451e | f0eefcaa668b220e30bcaafed12009d83e3805e1 | VeridiumID FreeRadius
4F | e6395a17460c007542b996466a4b42e3 | dee1ac20dca8edd0c3e0a307605741b67294be74 | VeridiumID 4F biometric library
VFace | e4b98058f53627b85aff9b59cab54f04 | 65572c181a1f3de9e92cde08ac2cbe6e297f37bd | VeridiumID VFace biometric library
Update procedure RPM | 482e801b6689132e265cbe4703e58a50 | eca3bde86a4985dc2e7037119f0e861335d93be3 | Update scripts
