KB Article #164907

How to break a SecureTransport cluster before upgrading it to a newer version

Problem

Following the recent announcement that SecureTransport 4.9.x is reaching its End of Support, we highly recommend that all Axway customers upgrade their production environment(s) to one of the latest SecureTransport versions, such as 5.1 Service Pack 3 or 5.2. However, to upgrade a clustered environment, the cluster itself must first be broken.

Resolution

BREAKING OF THE CLUSTER:
========================

  1. before you start, run a manual sync from the primary node via the 'Admin GUI->Remote' button (this ensures that data is up to date and synchronized across all nodes)
  2. on all nodes - edit the $FILEDRIVEHOME/lib/admin/config/servers file and comment out (with '##') the host entries for all primary and secondary nodes
  3. on all nodes - open $FILEDRIVEHOME/conf/configuration.xml, locate the 'Cluster' element, change its mode attribute from "passive_legacy", "passive", or "active" to mode="disabled", and save the file
  4. stop one node using $FILEDRIVEHOME/bin/stop_all and upgrade it. Note: once a node is upgraded it starts automatically and comes up as a standalone server with no communication with the other nodes
  5. stop the next node using $FILEDRIVEHOME/bin/stop_all and upgrade it; the same note applies - it will restart automatically as a standalone server. Repeat for any remaining nodes
  6. once all nodes are successfully upgraded, proceed with rejoining the cluster (see the steps below)
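The file edits in steps 2 and 3 above can be sketched with sed. The sample file contents, paths, and hostnames below are hypothetical - verify the actual format of your 'servers' file and of the 'Cluster' element in configuration.xml before scripting against the real files (and back them up first):

```shell
# Demo of the two edits on sample files under /tmp/st-demo.
# The file contents here are hypothetical placeholders, not the
# real SecureTransport file formats - check yours before scripting.
mkdir -p /tmp/st-demo
printf 'primary.example.com\nsecondary1.example.com\n' > /tmp/st-demo/servers
printf '<Cluster mode="passive"/>\n' > /tmp/st-demo/configuration.xml

# Step 2: comment out every host entry with '##'
sed -i 's/^[^#]/##&/' /tmp/st-demo/servers

# Step 3: set the Cluster element's mode attribute to "disabled"
sed -i 's/\(<Cluster[^>]*mode="\)[^"]*/\1disabled/' /tmp/st-demo/configuration.xml

cat /tmp/st-demo/servers            # ##primary.example.com / ##secondary1.example.com
cat /tmp/st-demo/configuration.xml  # <Cluster mode="disabled"/>
```

To revert, the same substitutions are run in the opposite direction during the rejoin phase.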


JOINING BACK THE CLUSTER:
=========================

  1. assuming both primary and secondary nodes are successfully upgraded - on both primary and secondary nodes - edit the $FILEDRIVEHOME/lib/admin/config/servers file and remove the '##' comments in front of the host entries for all primary and secondary nodes
  2. on both primary and secondary nodes navigate to 'Admin GUI->Operations->Server Configuration', locate the 'Cluster.mode' parameter, set it according to your chosen setup ('passive_legacy', 'active', or 'passive') and save the value - NOTE: this step does not apply to Edge servers
  3. stop both primary and secondary nodes - $FILEDRIVEHOME/bin/stop_all
  4. on the primary node - check whether the 'sentinel_primary' file exists in $FILEDRIVEHOME/var/tmp and, if it is missing, create it manually (as an empty file)
  5. start primary node services and wait for 3-5 minutes - $FILEDRIVEHOME/bin/start_all
  6. start secondary node services and wait for 3-5 minutes - $FILEDRIVEHOME/bin/start_all
  7. open primary server 'Admin GUI->Cluster Management' page and click on the 'Synchronize All' button to synchronize the cluster
  8. done.
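The file-level part of the rejoin (steps 1 and 4 above) can be sketched the same way. Again, the sample paths and file contents are hypothetical - substitute your real $FILEDRIVEHOME paths and confirm the format of your 'servers' file first:

```shell
# Demo of the rejoin edits on sample files under /tmp/st-demo.
# Contents and paths are hypothetical placeholders.
mkdir -p /tmp/st-demo/var/tmp
printf '##primary.example.com\n##secondary1.example.com\n' > /tmp/st-demo/servers

# Step 1: remove the '##' comments from the host entries
sed -i 's/^##//' /tmp/st-demo/servers

# Step 4: on the primary node only, create the empty sentinel file
# if it does not already exist
[ -f /tmp/st-demo/var/tmp/sentinel_primary ] || : > /tmp/st-demo/var/tmp/sentinel_primary

cat /tmp/st-demo/servers  # primary.example.com / secondary1.example.com
```

The remaining steps (stopping and starting services, and 'Synchronize All' in the Admin GUI) must still be performed in the order listed above.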