Upgrade VeridiumID from 3.6/3.7/3.7.1 to v3.8.0

This document provides a step-by-step procedure to upgrade to VeridiumID 3.8.0.

It is recommended to take a snapshot of the servers before the update.

The procedure covers both update methods:

  • using a configured YUM repository

  • using local packages

The update involves a small service interruption, because the old version is not fully compatible with the new database schema version.

After the Cassandra migration (on the first node), the QR code will not be displayed by WEBAPP nodes that have not yet been updated. To minimize the impact, after the Cassandra migration restart all applications on the not-yet-updated servers by running the commands below: systemctl restart ver_tomcat, systemctl restart ver_websecadmin, systemctl restart ver_selfservice.
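For convenience, the same restart commands as a block, to be run on each not-yet-updated WEBAPP node right after the Cassandra migration:

CODE
## restart the VeridiumID applications on the not-yet-updated WEBAPP node
sudo systemctl restart ver_tomcat
sudo systemctl restart ver_websecadmin
sudo systemctl restart ver_selfservice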

A WEBAPP node is a server where websecadmin is installed; a PERSISTENCE node is a server where Cassandra is installed.
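If in doubt about which role a server has, a quick hedged check (based on the service name and install path used elsewhere in this guide):

CODE
## WEBAPP node: the websecadmin service is present
systemctl list-unit-files | grep ver_websecadmin
## PERSISTENCE node: the Cassandra installation directory exists
ls -d /opt/veridiumid/cassandra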

 

Summary:

1) Download packages

2) Pre-requirements

3) Start Update

4) Post update steps

5) Other references

 

1) Download packages

 

Package URL | MD5 | SHA1 | Description
Update Packages Archive RHEL8 | 51e8aac659afab61db1dae1dfc6b4b2e | 97213b14e7fefdb6cb891c7fa97d0f6f8bdfaf3b | VeridiumID Update packages archive containing all RPMs, for the local update procedure on RHEL8
Update Packages Archive RHEL9 | 7ea061d8eea9a2069ab4dd4ae804e713 | eae7374557b1af677ad7eda0f92aa80ec2f8b0b3 | VeridiumID Update packages archive containing all RPMs, for the local update procedure on RHEL9

Download the package on the server and unzip it.

CODE
## download the package on each server; the command below can be used. Please fill in the proxy IP, and the username and password provided by Veridium.
## it is recommended to execute these commands with the user that is going to do the installation.
## based on the OS version, download the necessary package:
## check the OS version by running:
cat /etc/redhat-release
## RHEL8, Rocky8
export https_proxy=PROXY_IP:PROXY_PORT
wget --user NEXUS_USER --password NEXUS_PASSWORD https://veridium-repo.veridium-dev.com/repository/VeridiumUtils/Veridium-3.8.0-update/veridiumid-update-packages-rhel8-12.0.52.zip
## RHEL9, Rocky9
export https_proxy=PROXY_IP:PROXY_PORT
wget --user NEXUS_USER --password NEXUS_PASSWORD https://veridium-repo.veridium-dev.com/repository/VeridiumUtils/Veridium-3.8.0-update/veridiumid-update-packages-rhel9-12.0.52.zip

Another option is to upload the update package to a local repository, based on the OS the client is using (RHEL8 or RHEL9).
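A minimal sketch of publishing the RPMs into a filesystem-backed local YUM repository; the repository path below is hypothetical, and createrepo_c must be available on the repository host:

CODE
## hypothetical repository location - replace with your own
REPO_DIR=/var/www/html/veridiumid-repo
unzip veridiumid-update-packages-rhel8-12.0.52.zip -d /tmp/veridiumid-update380
cp /tmp/veridiumid-update380/packages/*.rpm ${REPO_DIR}/
createrepo_c --update ${REPO_DIR}
## afterwards, on the VeridiumID servers, refresh the YUM metadata
sudo yum clean metadata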

2) Pre-requirements

2.1) (MANDATORY) User requirements

We recommend running the procedure as root or as any user with sudo rights.
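A quick, hedged way to confirm the current user actually has working sudo rights before starting:

CODE
sudo -v && echo "sudo rights OK"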

Python 3 must be installed. To check if you have a working Python 3 version run the following command:

CODE
python3 --version

If Python 3 is not installed, please see section 5.1 - How to install Python 3.

3) Start Update

Please execute all commands as root or with a user that has sudo privileges.

3.1) Update using local packages

Execute the commands below on all nodes, first on the WEBAPP nodes and then on the PERSISTENCE nodes. Please update the servers one by one, not in parallel.

CODE
TMP_DEST="/home/veridiumid/update380"
#### please choose the one that applies, based on your OS:
##RHEL8
unzip veridiumid-update-packages-rhel8-12.0.52.zip -d ${TMP_DEST}
##RHEL9
unzip veridiumid-update-packages-rhel9-12.0.52.zip -d ${TMP_DEST}

After this, update the application:

CODE
TMP_DEST="/home/veridiumid/update380"
sudo yum localinstall -y ${TMP_DEST}/packages/veridiumid_update_procedure-12.0.52-20250513.x86_64.rpm
sudo python3 /etc/veridiumid/update-procedure/current/preUpdateSteps.py --version 12.0.52 --rpm-path ${TMP_DEST}/packages/
sudo python3 /etc/veridiumid/update-procedure/current/startUpdate.py --version 12.0.52 --rpm-path ${TMP_DEST}/packages/
sudo bash /etc/veridiumid/scripts/check_services.sh
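As a hedged extra verification (the same check used in the repository-based flow in section 3.2), confirm the update-procedure package is installed:

CODE
sudo yum list installed veridiumid_update_procedure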

 

3.2) Update using a YUM repository

Starting with version 3.7.2, Java 17 is used. Please install this package before the update.

CODE
## please check JAVA version
java --version
## PLEASE INSTALL JAVA 17 from local repositories, if not already installed; it should be the OpenJDK distribution. Without this step the update will not be possible
sudo yum install java-17-openjdk -y
## Make sure that the old java version is still the default one, if not then configure it using the following command:
sudo update-alternatives --config java
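A hedged way to confirm the Java 17 package is present without touching the default alternative:

CODE
## the package should be listed; java --version should still report the old default
rpm -q java-17-openjdk
java --version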

Check if packages are visible in the repository. If the packages are not visible, please upload them into your repository, based on the OS you are using.

CODE
## check installed package
sudo yum list installed veridiumid_update_procedure
## check availability of the new package; if this package is not available, please fix the issue with the repository
sudo yum list available veridiumid_update_procedure-12.0.52-20250513

If the package is available, please execute the commands below on all nodes, first on the WEBAPP nodes and then on the PERSISTENCE nodes. Please update the servers one by one, not in parallel.

CODE
sudo yum clean metadata
sudo yum install -y veridiumid_update_procedure-12.0.52
sudo python3 /etc/veridiumid/update-procedure/current/preUpdateSteps.py --version 12.0.52 --use-repo
sudo python3 /etc/veridiumid/update-procedure/current/startUpdate.py --version 12.0.52 --use-repo
sudo bash /etc/veridiumid/scripts/check_services.sh

 

4) Post update steps

4.1) MUST: This procedure migrates all the data (devices, accounts) to Elasticsearch in order to have better reports.

CODE
##please run it on one PERSISTENCE node, regardless of how many datacenters.
sudo bash /opt/veridiumid/migration/bin/migrate_to_elk.sh
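As a hedged verification (reusing the elasticsearch_ops.sh call style from section 4.6), list the Veridium indices and confirm the migrated data is present:

CODE
sudo bash /opt/veridiumid/elasticsearch/bin/elasticsearch_ops.sh -x=GET -p="/_cat/indices/veridium.*?v"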

4.2) OPTIONAL: After updating all nodes, please update Cassandra from 4.0.9/4.1.4 to 5.0.2 on the persistence nodes. Please update the servers one by one, not in parallel. This procedure might cause downtime until it has been executed on all nodes. If Cassandra was already updated in a previous version, then no update is needed.

If the update is done with local packages:

CODE
/opt/veridiumid/cassandra/bin/nodetool describecluster
## if the version is 4.0.9 or 4.1.4, then the update should be executed; the proper version is 5.0.2
TMP_DEST="/home/veridiumid/update380"
sudo bash /etc/veridiumid/update-procedure/current/resources/scripts/372/update_cassandra.sh ${TMP_DEST}/packages/
##check status
sudo /opt/veridiumid/cassandra/bin/nodetool status
sudo /opt/veridiumid/cassandra/bin/nodetool describecluster

If the update is done with the YUM repository:

CODE
/opt/veridiumid/cassandra/bin/nodetool describecluster
## if the version is 4.0.9 or 4.1.4, then the update should be executed; the proper version is 5.0.2
sudo bash /etc/veridiumid/update-procedure/current/resources/scripts/372/update_cassandra.sh
##check status and wait till it starts before going to next node
sudo /opt/veridiumid/cassandra/bin/nodetool status
sudo /opt/veridiumid/cassandra/bin/nodetool describecluster

4.3) OPTIONAL: If the error "Error message: [es/index] failed: [mapper_parsing_exception] failed to parse field [authenticationDeviceOsPatch] of type [date] in document with id" appears in bops.log, the below procedure should be applied. This might appear only when updating from version 3.6.

CODE
index=veridium.sessions-$(date '+%Y-%m')
/opt/veridiumid/migration/bin/elk_ops.sh --reindex --index-name=${index} --dest-index=${index}-001
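A hedged follow-up check (assuming the eops helper referenced in section 4.6 is available; run it in the same shell session so that ${index} is still set) to confirm the destination index was created:

CODE
eops -x=GET -p="/_cat/indices/${index}*?v"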

4.4) OPTIONAL: Run this step only if Kafka is installed (this step needs to be executed only by clients that have the ILP product installed and upgraded to version 2.7.6).

Please run the following procedure on all persistence nodes, in parallel, first in DC1 and after that in DC2. Before switching to DC2, please verify that UBA is working correctly in DC1.

CODE
## check if kafka is installed
systemctl is-enabled uba-kafka
## if it is enabled, please run these commands:
## fix for noexec for tmp
if [ -d /opt/veridiumid/uba/kafka/tmp ]; then sed -i '16i export _JAVA_OPTIONS="-Djava.io.tmpdir=/opt/veridiumid/uba/kafka/tmp"' /opt/veridiumid/uba/kafka/bin/kafka-topics.sh; fi
if [ -d /opt/veridiumid/uba/kafka/tmp ]; then sed -i '16i export _JAVA_OPTIONS="-Djava.io.tmpdir=/opt/veridiumid/uba/kafka/tmp"' /opt/veridiumid/uba/kafka/bin/kafka-storage.sh; fi
##
sudo bash /etc/veridiumid/update-procedure/current/resources/scripts/372/decoupleKafkaFromZk.sh
## after this, please restart all ILP services on webapp nodes
uba_stop
uba_start
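As a hedged check before moving to the second datacenter, confirm the Kafka service came back up (same service name as checked at the top of this step):

CODE
systemctl is-active uba-kafka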

4.5) OPTIONAL: Create the Zookeeper cluster and update properties to allow it to run in read-only mode (for updates from versions older than 3.7.2).

If you don’t have ILP installed, then you can proceed with this step.

If ILP is installed, then you need to update ILP to version 2.7.6 and then execute step 4.4 before continuing with this step.

Before starting this configuration, make sure that you have connectivity on ports 2888 and 3888 between ALL persistence nodes.

To test the connectivity, run the following commands:

CODE
## on DC1
nc -zv IPNODEDC2 2888
nc -zv IPNODEDC2 3888

In case of a single datacenter, please run the following procedure on all persistence nodes, sequentially. This also applies to single-node and multi-node installations.

CODE
sudo bash /etc/veridiumid/update-procedure/current/resources/scripts/372/update_zookeeper_configuration.sh

In case of CDCR, run the following procedure to create one big cluster with nodes from both datacenters. The previous command should not be executed in case of CDCR.

CODE
## run this command on main/active datacenter on one node in webapp. This generates a file DC1.tar.gz
sudo bash /etc/veridiumid/scripts/veridiumid_cdcr.sh -g
## copy the DC1.tar.gz to all nodes - webapp and persistence in both datacenters.
## run this command on all persistence nodes in the primary datacenter - the script will create a large cluster containing the Zookeeper nodes in both datacenters
ARCH_PATH=/tmp/DC1.tar.gz
sudo bash /etc/veridiumid/scripts/veridiumid_cdcr.sh -z -a ${ARCH_PATH}
## run this command on all persistence nodes in the secondary datacenter - the script will create a large cluster containing the Zookeeper nodes in both datacenters and remove data from the second DC
ARCH_PATH=/tmp/DC1.tar.gz
sudo bash /etc/veridiumid/scripts/veridiumid_cdcr.sh -z -s -a ${ARCH_PATH}
## run this command on a single node in one datacenter - process the Cassandra connectors to include all the IPs and upload the modified Zookeeper configuration (can be done just once in the primary datacenter).
ARCH_PATH=/tmp/DC1.tar.gz
sudo bash /etc/veridiumid/scripts/veridiumid_cdcr.sh -j -a ${ARCH_PATH}
## run this command on all webapp nodes in the secondary datacenter - this changes the salt and password in DC2 to the ones taken from DC1.
ARCH_PATH=/tmp/DC1.tar.gz
sudo bash /etc/veridiumid/scripts/veridiumid_cdcr.sh -p -r -a ${ARCH_PATH}
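As a hedged verification that the Zookeeper ensemble formed correctly (this assumes the standard four-letter-word interface is reachable on the default client port 2181, which may differ in your deployment):

CODE
## run on a persistence node; "Mode: leader" or "Mode: follower" indicates cluster mode
echo srvr | nc localhost 2181 | grep Mode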

4.6) OPTIONAL: Update ELK stack (introduced in 3.8)

To update the Elasticsearch stack (Elasticsearch, Kibana, Filebeat), run the following commands on ALL NODES, starting with the persistence nodes.

The script first updates the number of shards from 0 to 1, so it might take around 5-10 minutes until the cluster becomes green. While the number of shards is being changed, the cluster might turn yellow.
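As a hedged way to follow the cluster state while the script runs (same elasticsearch_ops.sh call style as used below):

CODE
sudo bash /opt/veridiumid/elasticsearch/bin/elasticsearch_ops.sh -x=GET -p=/_cluster/health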

If the update is done using local packages:

CODE
#fix for a very specific case, when there is not enough space.
sed -i '452i app_path=$(dirname $(readlink /opt/veridiumid/elasticsearch));if [[ "${app_path}" == "" ]]; then error "Failed to get the correct application path. Application path value is empty."; fi' /etc/veridiumid/update-procedure/current/resources/scripts/380/update_elk_stack.sh 
sed -i '552i app_path=$(dirname $(readlink /opt/veridiumid/kibana));if [[ "${app_path}" == "" ]]; then error "Failed to get the correct application path. Application path value is empty."; fi' /etc/veridiumid/update-procedure/current/resources/scripts/380/update_elk_stack.sh
sed -i '638i app_path=$(dirname $(readlink /opt/veridiumid/filebeat));if [[ "${app_path}" == "" ]]; then error "Failed to get the correct application path. Application path value is empty."; fi' /etc/veridiumid/update-procedure/current/resources/scripts/380/update_elk_stack.sh
sed -i 's|mv ${ELASTIC_DIR}/data ${VERIDIUM_DIR}/elastic_data|rez=`mv ${ELASTIC_DIR}/data ${app_path}/elastic_data 2>\&1`;if [ $? -ne 0 ];then error "Failed to backup Elasticsearch data directory. Error message: ${rez}";fi|g' /etc/veridiumid/update-procedure/current/resources/scripts/380/update_elk_stack.sh
sed -i 's|mv ${KIBANA_DIR}/data ${VERIDIUM_DIR}/kibana_data|rez=`mv ${KIBANA_DIR}/data ${app_path}/kibana_data 2>\&1`;if [ $? -ne 0 ];then error "Failed to backup Kibana data directory. Error message: ${rez}";fi|g' /etc/veridiumid/update-procedure/current/resources/scripts/380/update_elk_stack.sh
sed -i 's|mv ${FILEBEAT_DIR}/bin/data ${VERIDIUM_DIR}/filebeat_data|rez=`mv ${FILEBEAT_DIR}/bin/data ${app_path}/filebeat_data 2>\&1`;if [ $? -ne 0 ];then error "Failed to backup Filebeat data directory. Error message: ${rez}";fi|g' /etc/veridiumid/update-procedure/current/resources/scripts/380/update_elk_stack.sh
sed -i 's|${VERIDIUM_DIR}/elastic_data|${app_path}/elastic_data|g' /etc/veridiumid/update-procedure/current/resources/scripts/380/update_elk_stack.sh
sed -i 's|${VERIDIUM_DIR}/filebeat_data|${app_path}/filebeat_data|g' /etc/veridiumid/update-procedure/current/resources/scripts/380/update_elk_stack.sh
sed -i 's|${VERIDIUM_DIR}/kibana_data|${app_path}/kibana_data|g' /etc/veridiumid/update-procedure/current/resources/scripts/380/update_elk_stack.sh
sed -i 's|configure_shard_allocation "enable"|#configure_shard_allocation "enable"|g' /etc/veridiumid/update-procedure/current/resources/scripts/380/update_elk_stack.sh
sed -i '406s/^/#/' /etc/veridiumid/update-procedure/current/resources/scripts/380/update_elk_stack.sh
##

TMP_DEST="/home/veridiumid/update380"
sudo bash /etc/veridiumid/update-procedure/current/resources/scripts/380/update_elk_stack.sh ${TMP_DEST}/packages/
## check if version is now 8.17.3
sudo /opt/veridiumid/elasticsearch/bin/elasticsearch --version

## After all nodes are updated to version 8.17.3, enable shard allocation
sudo bash /opt/veridiumid/elasticsearch/bin/elasticsearch_ops.sh -x=PUT -p=/_cluster/settings -d="{\"transient\":{\"cluster.routing.allocation.enable\":\"all\"}}"

## This needs to be executed on one webapp node
pass=`grep elasticsearch.password /opt/veridiumid/kibana/config/kibana.yml | awk -F':' '{print $2}' | tr -d ' "' | tr -d ' '`
curl -u elastic:$pass -X POST http://localhost:5601/websecadmin/rest/kibana-proxy/api/data_views/data_view -H "Content-Type: application/json" -H "kbn-xsrf: true" -d '{"data_view":{"title": "veridium*","name": "veridium3","timeFieldName": "@timestamp"}}' > /dev/null 2>&1

## useful commands: if the cluster is yellow, check the cluster status and shard allocation problems.
#eops -x=GET -p=/_cluster/allocation/explain
## 
#eops -x=PUT -p="/veridium.*/_settings" -d="{\"index\":{\"number_of_replicas\":0}}"
## comment execution of check_index_replicas

If the update is done using the YUM repository:

CODE
#fix for a very specific case, when there is not enough space.
sed -i '452i app_path=$(dirname $(readlink /opt/veridiumid/elasticsearch));if [[ "${app_path}" == "" ]]; then error "Failed to get the correct application path. Application path value is empty."; fi' /etc/veridiumid/update-procedure/current/resources/scripts/380/update_elk_stack.sh 
sed -i '552i app_path=$(dirname $(readlink /opt/veridiumid/kibana));if [[ "${app_path}" == "" ]]; then error "Failed to get the correct application path. Application path value is empty."; fi' /etc/veridiumid/update-procedure/current/resources/scripts/380/update_elk_stack.sh
sed -i '638i app_path=$(dirname $(readlink /opt/veridiumid/filebeat));if [[ "${app_path}" == "" ]]; then error "Failed to get the correct application path. Application path value is empty."; fi' /etc/veridiumid/update-procedure/current/resources/scripts/380/update_elk_stack.sh
sed -i 's|mv ${ELASTIC_DIR}/data ${VERIDIUM_DIR}/elastic_data|rez=`mv ${ELASTIC_DIR}/data ${app_path}/elastic_data 2>\&1`;if [ $? -ne 0 ];then error "Failed to backup Elasticsearch data directory. Error message: ${rez}";fi|g' /etc/veridiumid/update-procedure/current/resources/scripts/380/update_elk_stack.sh
sed -i 's|mv ${KIBANA_DIR}/data ${VERIDIUM_DIR}/kibana_data|rez=`mv ${KIBANA_DIR}/data ${app_path}/kibana_data 2>\&1`;if [ $? -ne 0 ];then error "Failed to backup Kibana data directory. Error message: ${rez}";fi|g' /etc/veridiumid/update-procedure/current/resources/scripts/380/update_elk_stack.sh
sed -i 's|mv ${FILEBEAT_DIR}/bin/data ${VERIDIUM_DIR}/filebeat_data|rez=`mv ${FILEBEAT_DIR}/bin/data ${app_path}/filebeat_data 2>\&1`;if [ $? -ne 0 ];then error "Failed to backup Filebeat data directory. Error message: ${rez}";fi|g' /etc/veridiumid/update-procedure/current/resources/scripts/380/update_elk_stack.sh
sed -i 's|${VERIDIUM_DIR}/elastic_data|${app_path}/elastic_data|g' /etc/veridiumid/update-procedure/current/resources/scripts/380/update_elk_stack.sh
sed -i 's|${VERIDIUM_DIR}/filebeat_data|${app_path}/filebeat_data|g' /etc/veridiumid/update-procedure/current/resources/scripts/380/update_elk_stack.sh
sed -i 's|${VERIDIUM_DIR}/kibana_data|${app_path}/kibana_data|g' /etc/veridiumid/update-procedure/current/resources/scripts/380/update_elk_stack.sh
sed -i 's|configure_shard_allocation "enable"|#configure_shard_allocation "enable"|g' /etc/veridiumid/update-procedure/current/resources/scripts/380/update_elk_stack.sh
sed -i '406s/^/#/' /etc/veridiumid/update-procedure/current/resources/scripts/380/update_elk_stack.sh
##

sudo bash /etc/veridiumid/update-procedure/current/resources/scripts/380/update_elk_stack.sh
## check if version is now 8.17.3
sudo /opt/veridiumid/elasticsearch/bin/elasticsearch --version

## After all nodes are updated to version 8.17.3, enable shard allocation
sudo bash /opt/veridiumid/elasticsearch/bin/elasticsearch_ops.sh -x=PUT -p=/_cluster/settings -d="{\"transient\":{\"cluster.routing.allocation.enable\":\"all\"}}"

## This needs to be executed on one webapp node
pass=`grep elasticsearch.password /opt/veridiumid/kibana/config/kibana.yml | awk -F':' '{print $2}' | tr -d ' "' | tr -d ' '`
curl -u elastic:$pass -X POST http://localhost:5601/websecadmin/rest/kibana-proxy/api/data_views/data_view -H "Content-Type: application/json" -H "kbn-xsrf: true" -d '{"data_view":{"title": "veridium*","name": "veridium3","timeFieldName": "@timestamp"}}' > /dev/null 2>&1

## useful commands: if the cluster is yellow, check the cluster status and shard allocation problems.
#eops -x=GET -p=/_cluster/allocation/explain
## 
#eops -x=PUT -p="/veridium.*/_settings" -d="{\"index\":{\"number_of_replicas\":0}}"
## comment execution of check_index_replicas

5) Other references

5.1) How to install Python 3

In order to run the update procedure, all nodes must have Python 3 installed.

To check whether Python 3 is present, and to install it if missing, use the following commands as root:

CODE
## on RHEL7/CentOS7, Python 3.7 should be used
python3 --version
##Python 3.7.8
yum -y install python3.7
## on RHEL8/RHEL9, Python 3.9 should be used
sudo yum -y install python39 python39-pip
##Python 3.9.18

 

Veridium REPO LINKS:

 

RHEL8 MD5 of each package:

Package URL | MD5 | SHA1 | Description
WebsecAdmin | 106c6303cd15818b1a6f19e097a61a03 | 2b63129b8b853e35e9eebafb164fee36d1b17ed3 | VeridiumID Admin Dashboard
Migration | 05fb61b7c9885033e5dc95a2452bea48 | 7aea02f11a4f1b5cbea102fb1e260d9d8599c726 | VeridiumID migration tool
Websec | 65616902292fa5e5f2532347c6548785 | 0b445b1b004acfe90371131cbfa4fca6f7d5f0ce | VeridiumID Websec
AdService | ce2cab7c2c3a7635ef2fc7ef888d8c43 | ba84582bbad6cd84649a11893c5c48d6ea210d55 | VeridiumID Directory Service component
DMZ | 0f0dcb09d5ddf933f25bcb81b3798788 | d0efd51ccb9d66a4243698c82ee5ee5cfe0061ca | VeridiumID DMZ service
Fido | 4630db4a7f238769d90c0d2746d895bc | 1f0fe90a12ff7a3b0cc80956ae30a1a0a5674484 | VeridiumID Fido service
OPA | 81d3bd869795c7d044049f9b8606d118 | a0d8f639f85ff7de90587a643702bf292d3ecebc | VeridiumID Open Policy Agent
Elasticsearch | e726ee85f0a682c3fe14b94b1b2d4360 | 8750499f2a214394c91552ed0e126aeb37a38fc1 | VeridiumID Elasticsearch
Kibana | f7094cf97fc988cc21b75ff92a91ba2f | be574f9282947c5cba341ded8cbd6a11b9255a80 | VeridiumID Kibana
Zookeeper | 1335cdf334899497962cda9c0ef61bbb | 132831f0875cd30f7aee5786cc6668ab52b71b37 | VeridiumID Zookeeper
Cassandra | b86d42fd892b542a6d1317c251061ee3 | e06bcd82c845ed0e7f915b5ba5a46c7968e25ede | VeridiumID Cassandra
Haproxy | dfb756f455a1fee1a0a33fe04d93f337 | f017764a107c03bb9ff2145436f99e419a1f343e | VeridiumID Haproxy
SelfServicePortal | f33dc5d67642002c98f5be5eb199b52f | 3010a6e5bd60f41e7a4798e3f65b82bd2c79ea3e | VeridiumID Self Service Portal
Shibboleth | 7e82a002f3996d96f41bf594caed603d | 75771924b24735cbe83c247718b955bcf44cd095 | VeridiumID Shibboleth Identity Provider
Tomcat | 64b5e029023e960c6fa9959b0c2698aa | 3851afd46039b14adada511e54c05067661c5263 | VeridiumID Tomcat
Freeradius | 67cd3d10e9e97d8920822e60754e83b8 | 132d0b9e05fde0adac34f608a151b42a942e7ea0 | VeridiumID FreeRadius
4F | 9668bee9f5d8c3a47ebe5878b45673dc | 83c813e471d1a8d8e81d90a7b47e01442688a58f | VeridiumID 4F biometric library
VFace | bcded0cdfac22abb359d2f32ae2a315e | 6e1b1d3259d7facb66619d959cc641cb6d9216af | VeridiumID VFace biometric library
Update procedure RPM | 89bef8261bdbdab77b72e9c6e67f2464 | e69c1b4f950e1e2328c33cefb6e198152f318995 | Update scripts

RHEL9 MD5 of each package:

Package URL | MD5 | SHA1 | Description
WebsecAdmin | e2340ac0353de8e0b9fed15bde29d297 | 49a4c43029e1b2ccc39aedb8e86fe4747e843eae | VeridiumID Admin Dashboard
Migration | 7621aae1f65895d30ed7a529fa49d469 | a1c9dfae0832d79eb511410f08beae511e65fd35 | VeridiumID migration tool
Websec | 58dfed2153552a448792300444667e34 | 2f6f96c9b223497ae457972c0ee4075467cdeea8 | VeridiumID Websec
AdService | ed0eeee01feef07b7b59835c09fe7c95 | 5375b7942994fda8ccd66da67b93f27691c311b1 | VeridiumID Directory Service component
DMZ | a0426eb991a6ce7f3fe088fd1312d34b | 4115e7d98855b8c2798b0e50a24911bc673e86c2 | VeridiumID DMZ service
Fido | a160fac017242c4cebf2b0f9556293b8 | ed2ffc7f0bd98b82d22437d30f46475e2d291407 | VeridiumID Fido service
OPA | d18f472e8e537d98fd391008a9035b22 | 69952ce4065ecbafaeed01e35b40c673932e9b05 | VeridiumID Open Policy Agent
Elasticsearch | b0c2b1c27899683776db3a6325fed142 | 22ef0e94c7128b36932a69c0a75bfd6a4ec0704c | VeridiumID Elasticsearch
Kibana | ee00fefbb70b93adfe4cc9ac6c0362c8 | 6bd2155900baf0066ae70e337a7f042dfd8487e4 | VeridiumID Kibana
Zookeeper | b49f3cc563d201e5c7eefd4b11674826 | d269d8dd0ee6b791273961be061cf91b4404ae55 | VeridiumID Zookeeper
Cassandra | be57da01492a2659927d1298a2c4e82b | 504647333764e05b586fe823bbd9bc4ec6a14770 | VeridiumID Cassandra
Haproxy | 11b90a8d4ed9053725cdcdb344ab0361 | ea7fb63da7f2a8ec9c44e949b7342786a861cc25 | VeridiumID Haproxy
SelfServicePortal | e40e748e94fd031d386557035fa78b14 | 49f11dece702afc6077ee647a6330c882690a6bd | VeridiumID Self Service Portal
Shibboleth | 1fd8869f48db54190e1fba100382a424 | c2a079589a211737fa6fcbc68d7a064e96babba1 | VeridiumID Shibboleth Identity Provider
Tomcat | 8f3542e4ed87983ec438b5c170cef328 | 8a5538056e0c157128df98ea907259066e67b0fe | VeridiumID Tomcat
Freeradius | 2b9d6fd146405f7c79a617573da52cf0 | b01e8abb542695796cddc9c5a5d925892b5d7af5 | VeridiumID FreeRadius
4F | 99a87b9495e48c3952bafbd8378c1866 | e34052ecf352154c38c27fc9d7c8f6f86508387b | VeridiumID 4F biometric library
VFace | 778aecdf21d3b6973c18d4b2e78e2c81 | 15c441cc340e759d6cae13f244c85336f38eb7a1 | VeridiumID VFace biometric library
Update procedure RPM | 1526e6735f61e201a45efed6bf40afe1 | edee2fbcd88a555d5bbf6a1213ad8b4844eeb8fb | Update scripts
