KB Article #179295

Types of storage used with ST and their PROs and CONs

This article explains some of the basics of the different storage types that can be used with ST and their respective benefits and drawbacks. It also offers suggestions on how to improve performance, especially when the storage is used with high-volume traffic or post-processing by ST.


Table of contents

  1. Introduction
  2. NFS
  3. OCFS2
  4. GPFS
  5. Known issue when using NetApp and PGP decryption
  6. General recommendations when using a shared storage device
  7. Fine tuning configuration options in ST, related to shared storage devices
  8. General advisory

Introduction

In the old days, a disk had to be "owned" by a particular server, since it had to be physically attached to it. This is now known as DAS - Directly Attached Storage. The disk had to be formatted with a filesystem to be used by said computer.


NFS, the Network File System, takes the local disk resources of a computer and shares them over the network. NFS behaves like a filesystem, and its basic storage unit is the file. A dedicated fileserver accessed via NFS (or CIFS) is usually referred to as NAS, Network Attached Storage.


With the introduction of SANs (Storage Area Networks), the disk is not necessarily local to a particular computer; it can be shared among multiple machines via FibreChannel, iSCSI, FCoE, etc. However, a server using the shared disk will still assume the disk is local, and a normal filesystem will behave as if the disk were a DAS. This is not an issue if the disk is shared only for, say, High Availability, i.e. one server of a cluster uses the SAN disk and the other server uses it only if the primary goes down. All sorts of bad things can happen, however, if there are issues with the HA setup, for example a split-brain cluster, which may corrupt all your data.


If instead you want to have the SAN disk shared (like Oracle RAC), then your filesystem must be aware that the "disk" is shared by a cluster of computers, in order to correctly coordinate access and locking to the physical disk and keep the filesystem cache coherent among all members of the cluster.


NFS - Network File System

Network file systems are also used by Oracle as a storage option. These file systems are hosted on NAS (Network Attached Storage) devices/filers. Well-known providers of NAS products are NetApp and EMC.


For the mounting (client) machine, the NFS mounts are seen as remote devices, not local to the machine as with a SAN. Because NFS data is stored on a remote device accessed over the network, throughput may not be as good as with a SAN or a local device, so systems with a high number of transactions may not benefit from it. Oracle has a list of certified NFS configurations.

PROs of an NFS storage

  • Standard
  • Cross-platform
  • Easy to implement
  • NAS products are generally cheaper

CONs of an NFS storage

  • Poor performance
  • Single point of failure (single locking manager, even in HA)
  • I/O throughput can be slower and can be affected by other network traffic if the storage network is not isolated

Recommendations when using NFS storage with ST

We usually recommend applying a specific mount option to the NFS and CIFS shares to increase the consistency and stability of the I/O operations performed by SecureTransport on the storage. The option in question is:


sync


What it does: If the sync option is specified on a mount point, any system call that writes data to files on that mount point causes that data to be flushed to the server before the system call returns control to user space. In other words, there is less likelihood of corruption and less likelihood of overwrites by other users.


This provides greater data cache coherence among clients, but at a significant performance cost.


The recommended option decreases the overall performance of NFS/CIFS in favor of consistency between the client(s) and the NFS/CIFS server. As such, it should be considered a tradeoff.
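

For illustration only, a mount using the sync option might look like the following. The server name, export path, and mount point are placeholders, and the remaining options are common NFS mount options that should be adjusted for your environment:


  # Mount command (placeholders: nfs-server, /export/st_data, /mnt/st_data)
  mount -t nfs -o rw,sync,hard nfs-server:/export/st_data /mnt/st_data

  # Equivalent /etc/fstab entry
  nfs-server:/export/st_data  /mnt/st_data  nfs  rw,sync,hard  0  0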


OCFS2

OCFS2 (Oracle Cluster File System 2) was developed by Oracle, is used for RAC, and is production ready.

PROs of an OCFS2 storage

  • The file system was designed with Oracle clustering in mind
  • Eliminates the need to use RAW devices or other expensive clustered file systems
  • Very fast with both large and small datafiles on different nodes, with two performance models (mail, datafile)
  • With the advent of OCFS2, binaries, scripts, and configuration files (a shared Oracle home) can be stored in the file system, making the management of RAC easier
  • Works on both physical and virtual machines

CONs of an OCFS2 storage

  • Supported only through a contract with Oracle or SUSE (SLES)
  • No quota support
  • No on-line resize
  • Except for Linux, all of the clustered filesystem options are provided by a third-party vendor

Considerations when using OCFS2 and Axway Appliances (or SLES)

Before SLES 11.3, the way of mounting and using OCFS2 volumes was different from the method available in newer versions of the SLES operating system. The “old” way is no longer supported by Novell (the SLES vendor) and, even though it is described in the ST documentation before version 5.3.3, it should not be used with Axway Appliances.


All Axway Appliances run either SLES 11.3 or SLES 11.4 and are therefore affected by the above change in the OS; they must use the “new” method of configuring and using OCFS2 volumes via the SUSE Linux Enterprise High Availability Extension (SLEHA).


The SLEHA configuration is described in the following Axway KB article: KB 177014.


SLEHA provides the needed flexibility to adjust the timeouts and node fencing for each environment, which was lacking before. This makes the OCFS2 cluster much more stable compared to the "old way", and it is easier to maintain once the initial SLEHA setup is completed.
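

As a quick sanity check once the SLEHA-based setup from KB 177014 is in place, commands along the following lines (run as root on a cluster node) can help confirm that the cluster and the OCFS2 mounts are healthy. The device path is a placeholder for your actual OCFS2 volume:


  # Show the Pacemaker cluster status - all nodes and resources should be online/started
  crm status

  # List the cluster nodes that currently have the OCFS2 volume mounted (placeholder device)
  mounted.ocfs2 -f /dev/sdb1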


GPFS

GPFS is a high-performance clustered file system that can be deployed in shared-disk or shared-nothing distributed parallel modes. GPFS provides concurrent high-speed file access to applications executing on multiple nodes of a cluster and can be used on AIX 5L, Linux, and Windows platforms.


At its core, GPFS is a parallel disk file system. The parallel nature of GPFS guarantees that the entire file system is available to all nodes within a defined scope and the file system’s services can be safely applied to the same file system on multiple nodes simultaneously.


In addition to its parallel features, GPFS supports high availability and fault tolerance. The high availability nature of GPFS means that the file system will remain accessible to nodes even when a node in the file system dies. The fault-tolerant nature of GPFS means that file data will not be lost even if a disk in the file system fails.


GPFS FPO is specifically designed for Hadoop environments.
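

For reference, two standard GPFS administration commands can be used to verify the cluster configuration and the state of its nodes. This is only a basic health check and assumes the GPFS command path is already set on the node:


  # Display the GPFS cluster configuration (nodes, quorum, manager roles)
  mmlscluster

  # Show the GPFS daemon state of all nodes - the expected state is "active"
  mmgetstate -a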

PROs of GPFS storage

  • High Performance File System
  • Highly scalable file system
  • Simplified administration
  • General Parallel File System
  • No single point of failure, thanks to the distribution of metadata in an active/active configuration
  • POSIX-compliant file system
  • Policy-based data ingestion
  • Enterprise-class storage solution

CONs of GPFS storage

  • Command line only
  • Lots of knobs/levers to pull, which can be intimidating



Known issue when using NetApp and PGP decryption in ST

When using ST with a NetApp device, and doing PGP decryption in ST, an unknown error might be thrown by ST with an exit code -64. The following observations have been made:


NetApp Version     Mode            PGP Issue
OnTap 8.1.4 p7     7-Mode          Present
OnTap 8.2.1        Cluster Mode    Not present


ST returns exit code -64 when it tries to write to the NetApp "OnTap 8.1.4 p7 - 7-Mode" device. When the NetApp device in use is "OnTap 8.2.1 - Cluster Mode", the operations complete successfully.


General recommendations when using a shared storage device with ST

  • Use a high-performing shared storage device with a corresponding high-performing clustered file system. SAN with OCFS2 or GPFS is recommended.
  • Set up ST's Advanced Routing sandbox folder (AdvancedRouting.sandboxFolderLocation) to be local to the TM cluster node rather than hosted on the shared storage device.
  • Consider distributing the accounts' home folders across different physical storage devices when the shared storage is regularly overloaded.



Fine tuning configuration options in ST, related to shared storage devices

General performance-related configuration options in ST

The following performance improvements have been made in ST 5.3.3 patch 14:


  • Reduced disk I/O when checking if folders are shared folders
  • Reduced disk I/O when resolving symbolic links
  • Reduced database I/O when finalizing transfers
  • Optimized the Advanced Routing’s Decompress step
  • Overall reduction of disk I/O when working with STFS
  • Better disk I/O control for the TM. The control is achieved by the following configuration options (an illustrative example is shown after this list):
    • TransactionManager.fileIOBufferSizeInKB
    • TransactionManager.syncFileToDiskEveryKB
  • Event distribution in Large Enterprise Cluster has been improved
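

For illustration only, the TM disk I/O control options could be set as shown below. These name/value pairs are written out for readability, and the numeric values are hypothetical examples, not recommended defaults; they must be tuned per environment and storage device:


  TransactionManager.fileIOBufferSizeInKB = 512    (hypothetical value)
  TransactionManager.syncFileToDiskEveryKB = 4096  (hypothetical value)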


A comprehensive list of ST tuning options, not only storage-related ones, is available in the following Axway Knowledgebase article: KB 178443

Advanced Routing-related configuration options

New configuration options were introduced to enable fine tuning of resource allocation separately for events initiated by transfers and for Advanced Routing post-processing events. The following options control the new thread pool used for processing only Advanced Routing events:


  • ThreadPools.AdvancedRouting.maxThreads - Default value: 128
  • ThreadPools.AdvancedRouting.minThreads - Default value: 16
  • ThreadPools.AdvancedRouting.IdleTime - Default value: 60


A new server configuration option named AdvancedRouting.sandboxFolderLocation has been introduced to enable creation of the sandbox folder locally on each processing node, outside the users' home folders. In a cluster environment, this reduces the network file copy to once at the beginning of the route and once at the end. The default value is empty, which means that Advanced Routing will create its sandbox folder under the accounts' home folders (the default, legacy behavior).
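

As an illustration, a TM cluster node could combine the thread pool defaults listed above with a local sandbox folder. The path below is hypothetical; whatever location is chosen must exist and be writable by ST on each processing node:


  ThreadPools.AdvancedRouting.maxThreads = 128
  ThreadPools.AdvancedRouting.minThreads = 16
  ThreadPools.AdvancedRouting.IdleTime = 60
  AdvancedRouting.sandboxFolderLocation = /opt/axway/ar_sandbox    (hypothetical local path)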


General advisory

Axway always recommends implementing the latest SecureTransport version to minimize the possibility of problems occurring and to gain the full benefit of the fixes in that release. The latest version offers better stability and significant performance improvements compared to previous SecureTransport versions, especially for I/O-related operations.


You can check the ST Capacity Planning Guide for your version of the application, which provides information and general guidance that you can use to plan your ST production deployment:


ST Capacity Planning Guides