Horizontally Scaling for User Volume with Distributed Caching
When vertical scaling alone proves insufficient for scaling your InterSystems IRIS data platform to meet your workload’s requirements, you can consider distributed caching, an architecturally straightforward, application-transparent, low-cost approach to horizontal scaling.
Overview of Distributed Caching
The InterSystems IRIS distributed caching architecture scales horizontally for user volume by distributing both application logic and caching across a tier of application servers sitting in front of a data server, enabling partitioning of users across this tier. Each application server handles user requests and maintains its own database cache, which is automatically kept in sync with the data server, while the data server handles all data storage and management. Interrupted connections between application servers and data server are automatically recovered or reset, depending on the length of the outage.
Distributed caching allows each application server to maintain its own, independent working set of the data, which avoids the expensive necessity of having enough memory to contain the entire working set on a single server and lets you add inexpensive application servers to handle more users. Distributed caching can also help when an application is limited by available CPU capacity; again, capacity is increased by adding commodity application servers rather than obtaining an expensive processor for a single server.
This architecture is enabled by the use of the Enterprise Cache Protocol (ECP), a core component of InterSystems IRIS data platform, for communication between the application servers and the data server.
The distributed caching architecture and application server tier are entirely transparent to the user and to application code. You can easily convert an existing standalone InterSystems IRIS instance that is serving data into the data server of a cluster by adding application servers.
The following sections provide more details about distributed caching:
Distributed Caching Architecture
To better understand distributed caching architecture, review the following information about how data is stored and accessed by InterSystems IRIS:
InterSystems IRIS stores data in a file in the local operating system called a database. An InterSystems IRIS instance may (and usually does) have multiple databases.
InterSystems IRIS applications access data by means of a namespace, which provides a logical view of the data stored in one or more databases. A InterSystems IRIS instance may (and usually does) have multiple namespaces.
Each InterSystems IRIS instance maintains a database cache — a local shared memory buffer used to cache data retrieved from the databases, so that repeated instances of the same query can retrieve results from memory rather than storage, providing a very significant performance benefit.
The architecture of a distributed cache cluster is conceptually simple, using these elements in the following manner:
An InterSystems IRIS instance becomes an application server by adding another instance as a remote server, and then adding any or all of its databases as remote databases. This makes the second instance a data server for the first instance.
Local namespaces on the application server are mapped to remote databases on the data server in the same way they are mapped to local databases. The difference between local and remote databases is entirely transparent to an application querying a namespace on the application server.
The application server maintains its own database cache in the same manner as it would if using only local databases. ECP efficiently shares data, locks, and executable code among multiple InterSystems IRIS instances, as well as synchronizing the application server caches with the data server.
In practice, a distributed cache cluster of multiple application servers and a data server works as follows:
The data server continues to store, update, and serve the data. The data server also synchronizes and maintains the coherency of the application servers’ caches to ensure that users do not receive or keep stale data, and manages locks across the cluster.
Each query against the data is made in a namespace on one of the various application servers, each of which uses its own individual database cache to cache the results it receives; as a result, the total set of cached data is distributed across these individual caches. If there are multiple data servers, the application server automatically connects to the one storing the requested data. Each application server also monitors its data server connections and, if a connection is interrupted, attempts to recover it.
User requests can be distributed round-robin across the application servers by a load balancer, but the most effective approach takes full advantage of distributed caching by directing users with similar requests to the same application server, increasing cache efficiency. For example, a health care application might group clinical users who run one set of queries on one application server and front-desk staff running a different set on another. If the cluster handles multiple applications, each application's users can be directed to a separate application server. The illustrations that follow compare a single InterSystems IRIS instance to a cluster in which user connections are distributed in this manner. (Load balancing user requests can even be detrimental in some circumstances; for more information see Evaluate the Effects of Load Balancing User Requests.)
The number of application servers in a cluster can be increased (or reduced) without requiring other reconfiguration of the cluster or operational changes, so you can easily scale as user volume increases.
In a distributed cache cluster, the data server is responsible for the following:
Storing data in its local databases.
Synchronizing the application server database caches with the databases so the application servers do not see stale data.
Managing the distribution of locks across the network.
Monitoring the status of all application servers connections and taking action if a connection is interrupted for a specific amount of time (see ECP Recovery).
In a distributed cache cluster, each application server is responsible for the following:
Establishing connections to a specific data server whenever an application requests data that is stored on that server.
Maintaining, in its cache, data retrieved across the network.
Monitoring the status of all connections to the data server and taking action if a connection is interrupted for a specific amount of time (see ECP Recovery).
A distributed cache cluster can include more than one data server (although this is uncommon). An InterSystems IRIS instance can simultaneously act as both a data server and an application server, but cannot act as a data server for the data it receives as an application server.
ECP supports the distributed cache architecture by providing the following features:
Automatic, fail-safe operation. Once configured, ECP automatically establishes and maintains connections between application servers and data servers and attempts to recover from any disconnections (planned or unplanned) between application server and data server instances (see ECP Recovery). ECP can also preserve the state of a running application across a failover of the data server (see Distributed Caching and High Availability).
Along with keeping data available to applications, these features make a distributed cache cluster easier to manage; for example, it is possible to temporarily take a data server offline or fail over as part of planned maintenance without having to perform any operations on the application server instances.
Heterogeneous networking. InterSystems IRIS systems in a distributed cache cluster can run on different hardware and operating system platforms. ECP automatically manages any required data format conversions.
A robust transport layer based on TCP/IP. ECP uses the standard TCP/IP protocol for data transport, making it easy to configure and maintain.
Efficient use of network bandwidth. ECP takes full advantage of high-performance networking infrastructures.
ECP is designed to automatically recover from interruptions in connectivity between an application server and the data server. In the event of such an interruption, ECP executes a recovery protocol that differs depending on the nature of the failure and on the configured timeout intervals. The result is that the connection is either recovered, allowing the application processes to continue as though nothing had happened, or reset, forcing transaction rollback and rebuilding of the application processes.
For more information on ECP connections, see Monitoring Distributed Applications; for more information on ECP recovery, see ECP Recovery Protocol and ECP Recovery Process, Guarantees, and Limitations.
Distributed Caching and High Availability
While ECP recovery handles interrupted application server connections to the data server, the application servers in a distributed cache cluster are also designed to preserve the state of the running application across a failover of the data server. Depending on the nature of the application activity and the failover mechanism, some users may experience a pause until failover completes, but can then continue operating without interrupting their workflow.
Data servers can be mirrored for high availability in the same way as a stand-alone InterSystems IRIS instance, and application servers can be set to automatically redirect connections to the backup in the event of failover. (It is not necessary or even possible to mirror an application server, as it does not store any data.) For detailed information about the use of mirroring in a distributed cache cluster, see Configuring ECP Connections to a Mirror the “Configuring Mirroring” chapter in the High Availability Guide.
The other failover strategies detailed in the “Failover Strategies for High Availability” chapter of the High Availability Guide can also be used in a distributed cache cluster. Regardless of the failover strategy employed for the data server, the application servers reconnect and recover their states following a failover, allowing application processing to continue where it left off prior to the failure.
Deploying a Distributed Cache Cluster
An InterSystems IRIS distributed cache cluster consists of a data server instance providing data to one or more application servers instances, which in turn provide it to the application. InterSystems IRIS data platform offers several methods for automated deployment of distributed cache clusters. This instructions in this section are for manually deploying the cluster using the Management Portal. Information about securing the cluster after deployment is provided in Distributed Cache Cluster Security.
In general, the InterSystems IRIS instances in a distributed cache cluster can be of different versions, as long as none of the application servers are of a later version than the data server. For important requirements and limitations regarding version compatibility, see ECP InteroperabilityOpens in a new tab in InterSystems Supported Platforms.
While the data server and application server hosts can be of different operating systems and/or endianness, all InterSystems IRIS instances in a distributed cache cluster must use the same locale. For information about configuring locales, see Using the NLS Settings Page of the Management Portal in the System Administration Guide.
For an important discussion of performance planning, including memory management and scaling, CPU sizing and scaling, and other considerations, see System Resource Planning and Management.
The most typical distributed cache cluster configuration involves one InterSystems IRIS instance per host, and one cluster role per instance — that is, either data server or application server. When using one of the automated deployment methods described in the next section, this configuration is the only option. The provided procedure for using the Management Portal assumes this configuration as well.
HealthShare Health Connect does not support distributed caching.
Automated Deployment Methods for Clusters
In addition to the manual procedure outlined in this section, InterSystems IRIS Data platform provides several methods for automated deployment of distributed cache clusters that are fully operational following deployment.
Deploy a Distributed Cache Cluster Using InterSystems Cloud Manager (ICM)
With InterSystems Cloud Manager (ICM), you can automatically deploy any InterSystems IRIS data platform configuration, including distributed cache clusters with mirrored or unmirrored data server and any number of application servers, as well as web servers, a load balancer, and an arbiter if needed. By combining plain text declarative configuration files, a simple command line interface, and InterSystems IRIS deployment in containers, ICM provides you with a simple, intuitive way to provision cloud or virtual infrastructure and deploy the desired InterSystems IRIS architecture on that infrastructure, along with other services. ICM can also install from an InterSystems IRIS installation kit.
For complete ICM documentation, including details about deploying a distributed cache cluster in a public or private cloud or on existing infrastructure, including hardware, see the InterSystems Cloud Manager Guide. For a brief introduction to ICM that includes a hands-on exploration of deploying a sharded cluster, see First Look: InterSystems Cloud Manager.
Deploy a Distributed Cache Cluster Using the InterSystems Kubernetes Operator (IKO)
KubernetesOpens in a new tab is an open-source orchestration engine for automating deployment, scaling, and management of containerized workloads and services. You define the containerized services you want to deploy and the policies you want them to be governed by; Kubernetes transparently provides the needed resources in the most efficient way possible, repairs or restores the deployment when it deviates from spec, and scales automatically or on demand. The InterSystems Kubernetes Operator (IKO) extends the Kubernetes API with the IrisCluster custom resource, which can be deployed as an InterSystems IRIS sharded cluster, distributed cache cluster, or standalone instance, all optionally mirrored, on any Kubernetes platform.
The IKO isn’t required to deploy InterSystems IRIS under Kubernetes, but it greatly simplifies the process and adds InterSystems IRIS-specific cluster management capabilities to Kubernetes, enabling tasks like adding nodes to a cluster, which you would otherwise have to do manually by interacting directly with the instances.
For more information on using the IKO, see Using the InterSystems Kubernetes OperatorOpens in a new tab.
Deploy a Distributed Cache Cluster Using Configuration Merge
The configuration merge feature, available on Linux and UNIX® systems, lets you vary the configurations of InterSystems IRIS containers deployed from the same image, or local instances installed from the same kit, by simply applying the desired declarative configuration merge file to each instance in the deployment.
This merge file, which can also be applied when restarting an existing instance, updates an instance’s configuration parameter file (CPF), which contains most of its configuration settings; these settings are read from the CPF at every startup, including the first one after an instance is deployed. When you apply configuration merge during deployment, you in effect replace the default CPF provided with the instance with your own updated version.
Using configuration merge, you can deploy a distributed cache cluster by calling separate merge files for the data server and the application servers, deploying them sequentially.
ICM, described in the preceding section, incorporates the configuration merge feature.
For information about using configuration merge in general and to deploy a distributed cache cluster in particular, see Automating Configuration of InterSystems IRIS with Configuration MergeOpens in a new tab.
Deploy the Cluster Using the Management Portal
When deploying manually, you must first identify or provision the infrastructure that will host the cluster, then identify or deploy InterSystems IRIS instances on the hosts, and finally configure the deployed instances as a cluster. Once you have deployed or identified the InterSystems IRIS instances you intend to include, and arranged network access of sufficient bandwidth among their hosts, configuring a distributed cache cluster using the Management Portal involves the following steps:
You perform these steps on the ECP Settings page of the Management Portal (System Administration > Configuration > Connectivity > ECP Settings).
Preparing the Data Server
An InterSystems IRIS instance cannot actually operate as the data server in a distributed cache cluster until it is configured as such on the application servers. The procedure for preparing the instance to be a data server, however, includes one required action and two optional actions.
To prepare an instance to be a data server, navigate to the ECP Settings page by selecting System Administration on the Management Portal home page, then Configuration, then Connectivity, then ECP Settings, then do the following:
In the This System as an ECP Data Server box on the right, enable the ECP service by clicking the Enable link for the service. This opens an Edit Service dialog for %Service_ECP; select Service Enabled and click Save to enable the service. (If the service is already enabled, as indicated by the presence of a Disable link in the box, go on to the next step.)Note:
For a detailed explanation of InterSystems services, see Services.
If you want multiple application servers to be able to connect simultaneously to the data server, in the This System as an ECP Data Server box, change the Maximum number of application servers setting to the number of application servers you want to configure, then click Save and restart the instance. (If the number of simultaneous application server connections becomes greater than the number you enter for this setting, the data server instance automatically restarts.)Note:
The maximum number of application servers can also be set using the MaxServerConnOpens in a new tab setting in the configuration parameter file (CPF), including in a configuration merge fileOpens in a new tab on UNIX® and Linux platforms.
The Time interval for Troubled state settings determines one of three timeouts used manage recovery of interrupted connections between application servers and the data server; leave it at the default of 60 until you have some data about the cluster’s operation over time. For more information on the ECP recovery timeouts, see ECP Recovery Protocol.
To enable the use of TLS to secure connections from application servers, click the Set up SSL/TLS ‘%ECPServer’ link to create an ECP TLS configuration for the data server, then a ECP SSL/TLS support setting, as follows:
Required — An application server can connect only if Use SSL/TLS is selected for this data server.
Enabled — An application server can connect regardless of whether Use SSL/TLS is selected for this data server.
Disabled — An application server cannot connect if Use SSL/TLS is selected (default) for this data server.
As described in Distributed Cache Cluster Security, TLS is one of several options for securing ECP communications. However, enabling TLS may have a significant negative impact on performance. When a cluster’s application servers and data server are located in the same data center, which provides optimal performance, the physical security of the data center alone may provide sufficient security for the cluster.
For important information on enabling and using TLS in a distributed cache cluster, including authorization of secured application server connections on the data server, see Securing Application Server Connections to the Data Server with TLS.
ECP uses some of the database cache on the data server to store various control structures; you may need to increase the size of the database cache or caches to accommodate this. For more information, see Increase Data Server Database Caches for ECP Control Structures.
The data server is now ready to accept connections from valid application servers.
Configuring an Application Server
Configuring an InterSystems IRIS instance as an application server in a distributed cache cluster involves two steps:
Adding the data server instance as a data server on the application server instance.
Adding the desired databases on the data server as remote databases on the application server.
To add the data server to the application server, do the following:
As described for the data server in Preparing the Data Server, navigate to the ECP Settings page; leave the settings on the This System as an ECP Application Server side set to the defaults.
If the ECP SSL/TLS support setting for the data server you are adding is Enabled or Required, click the Set up SSL/TLS ‘%ECPClient’ link to create an ECP TLS configuration for the application server. (You can also do this in the ECP Data Server dialog in a later step.) For more information, see the Use SSL/TLS setting in the next step.
Click Data Servers to display the ECP Data Servers page and click Add Server. In the ECP Data Server dialog, enter the following information for the data server:
Server Name — A descriptive name identifying the data server. (This name is limited to 64 characters.)
Host DNS Name or IP Address — Specify the DNS name of the data server’s host or its IP address (in dotted-decimal format or, if IPv6 is enabled, in colon-separated format). If you use the DNS name, it resolves to an actual IP address each time the application server initiates a connection to that data server host. For more information, see the IPv6 Support section in the “Configuring InterSystems IRIS” chapter of the System Administration Guide.Important:
When adding a mirror primary as a data server (see the Mirror Connection setting), do not enter the virtual IP address (VIP) of the mirror, but rather the DNS name or IP address of the current primary failover member.
IP Port — The port number defaults to 1972, the default InterSystems IRIS superserver (IP) port; change it as necessary to the superserver port of the InterSystems IRIS instance on the data server.
Mirror Connection — Select this checkbox if the data server is the primary failover member in a mirror. (See Configuring Application Server Connections to a Mirror in the “Configuring Mirroring” chapter of the High Availability Guide for important information about configuring a mirror primary as a data server.)
Use SSL/TLS — Use this checkbox as follows:
If the ECP SSL/TLS support setting for the data server you are adding is Disabled, it does not matter whether you select this checkbox; TLS will not be not used to secure connections to the data server.
If the ECP SSL/TLS support setting for the data server you are adding is Enabled, select this checkbox to use TLS to secure connections to this data server; clear it to not use TLS.
If the ECP SSL/TLS support setting for the data server you are adding is Required, you must select this checkbox.
If the ECP SSL/TLS support setting for the data server you are adding is Enabled or Required and you have not yet created a TLS configuration for the application server, click the Set up SSL/TLS ‘%ECPClient’ link to do so. For more information on using TLS in a distributed cache cluster, , including authorization of secured application server connections on the data server, see Securing Application Server Connections to the Data Server with TLS.
Click Save. The data server appears in the data server list; you can remove or edit the data server definition, or change its status (see Monitoring Distributed Applications) using the available links. You can also view a list of all application servers connecting to a data server by going to the ECP Settings page on the data server and clicking the Application Servers button.
To add each desired database on the data server as a remote database on the application server, you must create a namespace on the application server and map it to that database, as follows:
Navigate to the Namespaces page by selecting System Administration on the Management Portal home page, then Configuration, then System Configuration, then Namespaces. Click Create New Namespace to display the New Namespace page.
Enter a name for the new namespace, which typically reflects the name of the remote database it is mapped to.
At The default database for Globals in this namespace is a, select Remote Database, then select Create New Database to open the Create Remote Database dialog. In this dialog,
Select the data server from the Remote Server drop-down.
Leave Remote directory set to Select directory from a list and select the data server database you want to map to the namespace using the Directory drop-down, which lists all of the database directories on the data server.
Enter a local name for the remote database; this typically reflects the name of the database on the data server, the local name of the data server as specified in the previous procedure, or both.
Click Finish to add the remote database and map it to the new namepsace.
At The default database for Routines in this namespace is a, select Remote Database, then select the database you just created from the drop-down.
The namespace does not need to be interoperability-enabled; to save time, you can clear the Enable namespace for interoperability productions checkbox.
Select Save. The new namespace now appears in the list on the Namespaces list.
Once you have added a data server database as a remote database on the application server, applications can query that database through the namespace it is mapped to on the application server.
Remember that even though a namespace on the application server is mapped to a database on the data server, changes to the namespace mapped to that database on the data server are unknown to the application server. (For information about mapping, see Global Mappings in the “Configuring InterSystems IRIS” chapter of the System Administration Guide.) For example, suppose the namespace DATA on the data server has the default globals database DATA; on the application server, the namespace REMOTEDATA is mapped to the same (remote) database, DATA. If you create a mapping in the DATA namespace on the data server mapping the global ^DATA2 to the DATA2 database, this mapping is not propagated to the application server. Therefore, if you do not add DATA2 as a remote database on the application server and create the same mapping in the REMOTEDATA namespace, queries the application server receives will not be able to read the ^DATA2 global.
Distributed Cache Cluster Security
All InterSystems instances in a distributed cache cluster need to be within the secured InterSystems IRIS perimeter (that is, within an externally secured environment). This is because ECP is a basic security service, rather than a resource-based service, so there is no way to regulate which users have access to it. (For more information on basic and resource-based services, see Available Services.)
However, the following security tools are available:
Securing application server connections to the data server with TLS
When databases are encrypted on the data servers, you should also encrypt the IRISTEMP database on all connected application servers. The same or different keys can be used. For more information on database encryption, see Encryption Guide.
Securing Application Server Connections to a Data Server with TLS
If TLS is enabled on a data server, you can use it to secure connections from an application server to that data server. This protection includes X.509 certificate-based encryption. For detailed information about TLS and its use with InterSystems products, see InterSystems TLS Guide.
When configuring or editing a data server or at any time thereafter (see Preparing the Data Server), you can select Enabled or Required as the ECP SSL/TLS support setting, rather than the default Disabled. These settings control the options for use of the Use SSL/TLS checkbox, which secures connections to a data server with TLS, when adding a data server to an application server (see Configuring an Application Server) or editing an existing data server. These settings have the following effect:
Disabled — The use of TLS for application server connections to this data server is disabled, even for an application server on which Use SSL/TLS is selected.
Enabled — The use of TLS for application server connections is enabled on the data server; TLS is used for connections from application servers on which Use SSL/TLS is selected, and is not used for connections from application servers on which Use SSL/TLS is not selected.
Required — The data server requires application server connections to use TLS; an application server can connect to the data server only if Use SSL/TLS is selected for the data server, in which case TLS is used for all connections.
There are three requirements for establishing a connection from an application server to a data server using TLS, as follows:
The data server must have TLS connections enabled for superserver clients. To do this, on the data server, navigate to the System-wide Security Parameters page (System Administration > Security > System Security > System-wide Security Parameters) and select Enabled for the Superserver SSL/TLS support setting.
Both instances must have an ECP TLS configuration.
For this reason, both sides of the ECP Settings page (System Administration > Configuration > Connectivity > ECP Settings) — This System as an ECP Application Server and This System as an ECP Data Server — include a Set Up SSL/TLS link, which you can use to create the appropriate ECP TLS configuration for the instance. To do so, follow this procedure:
On the ECP Settings page, click Set up SSL/TLS ‘%ECPClient’ link on the application server side or the Set up SSL/TLS ‘%ECPServer’ link on the data server side.
Complete the fields on the form in the Edit SSL/TLS Configurations for ECP dialog, These are analogous to those on the New SSL/TLS Configuration page, as described in Create or Edit a TLS Configuration. However, there are no Configuration Name, Description, or Enabled fields; also, for the private key password, this page allows you to enter or replace one (Enter new password), specify that none is to be used (Clear password), or leave an existing one as it is (Leave as is).
Fields on this page are:
File containing trusted Certificate Authority X.509 certificate(s)
The path and name of a file that contains the X.509 certificate(s) in PEM format of the Certificate Authority (CA) or Certificate Authorities that this configuration trusts. You can specify either an absolute path or a path relative to the install-dir/mgr/ directory. For detailed information about X.509 certificates and their generation and use, see InterSystems TLS Guide.Note:
This file must include the certificate(s) that can be used to verify the X.509 certificates belonging to other mirror members. If the file includes multiple certificates, they must be in the correct order, as described in Establishing the Required Certificate Chain, with the current instance’s certificate first.
File containing this configuration's X.509 certificate
The full location of the configuration’s own X.509 certificate(s), in PEM format. This can be specified as either an absolute or a relative path.Note:
The certificate’s distinguished name (DN) must appear in the certificate’s subject field.
File containing associated private key
The full location of the configuration’s private key file, specified as either an absolute or relative path.
Private key type
The algorithm used to generate the private key, where valid options are DSA and RSA.
Private key password
Select Enter new password when you are creating an ECP TLS configuration, so you can enter and confirm the password for the private key associated with the certificate.
Protocols
Those communications protocols that the configuration considers valid; TLSv1.1 and TLSv1.2 are enabled by default.
Enabled ciphersuites
The set of ciphersuites used to protect communications between the client and the server. Typically you can leave this at the default setting.
Once you complete the form, click Save.
An application server must be authorized on a data server before it can connect using TLS.
The first time an application server attempts to connect to a data server using TLS, its SSL (TLS) computer name (the Subject Distinguished Name from its X.509 certificate) and the IP address of its host are displayed in a list of pending ECP application servers to be authorized or rejected on the data server’s Application Servers page (System Administration > Configuration > Connectivity > ECP Settings > Application Servers). Use the Authorize and Reject links to take action on requests in the list. (If there are no pending requests, the list does not display.)
If one or more application servers have been authorized to connect using TLS, their SSL (TLS) computer names are displayed in a list of authorized SSL computer names for ECP application servers on the Application Servers page. You can use the Delete link to cancel the authorization. (If there are no authorized application servers, the list does not display.)
Restricting Incoming Access to a Data Server
By default, any InterSystems IRIS instance on which this instance is configured as a data server (as described in the previous section) can connect to it. However, you can restrict which instances can act as application servers for the data server by specifying the hosts from which incoming connections are allowed; if you do this, hosts not explicitly listed cannot connect to the data server. Do this by performing the following steps on the data server:
On the Services page (from the portal home page, select Security and then Services), click %Service_ECP. The Edit Service dialog displays.
By default, the Allowed Incoming Connections box is empty, which means any application server can connect to this instance if the ECP service is enabled. Click Add and enter a single IP address (such as 192.0.2.55) or fully-qualified domain name (such as mycomputer.myorg.com), or a range of IP addresses (for example, 192.0.2–3.* or 192.0.*.*). Once there are one or more entries on the list and you click Save in the Edit Service dialog, only the hosts specified by those entries can connect.
You can return to this list at any time and use the Delete link to remove a host from the list, or the Edit link to specify the roles associated with the host, as described in Controlling Access to Databases with Roles and Privileges.
Controlling Access to Databases with Roles and Privileges
InterSystems uses a security model in which assets, including databases, are assigned to resources, and resources are assigned permissions, such as read and write. A combination of a resource and a permission is called a privilege. Privileges are assigned to roles, to which users can belong. In this way, roles are used to control user access to resources. For information about this model, see Authorization: Controlling User Access.
To be granted access to a database on the data server, the role held by the user initiating the process on the application server and the role set for the ECP connection on the data server must both include permissions for the same resource representing that database. For example, if a user belongs to a role on an application server that grants the privilege of read permission for a particular database resource, and the role set for the ECP connection on the data server also includes this privilege, the user can read data from that database on the data server.
By default, InterSystems IRIS grants ECP connections on the data server the %All privilege when the data server runs on behalf of an application server. This means that whatever privileges the user on the application server has are matched on the data server, and access is therefore controlled only on the application server. For example, a user on the application server who has privileges only for the %DB_USER resource but not the %DB_IRISLIB resource can access data in the USER database on the data server, but attempting to access the IRISLIB database on the data server results in a <PROTECT> error. If a different user on the application server has privileges for the %DB_IRISLIB resource, the IRISLIB database is available to that user.
InterSystems recommends the use of an LDAP server to implement centralized security, including user roles and privileges, across the application servers of a distributed cache cluster. For information about using LDAP with InterSystems IRIS, see LDAP Guide.
However, you can also restrict the roles available to ECP connections on the data server based on the application server host. For example, on the data server you can specify that when interacting with a specific application server, the only available role is %DB_USER. In this case, users on the application server granted the %DB_USER role can access the USER database on the data server, but no users on the application server can access any other database on the data server regardless of the roles they are granted.
InterSystems strongly recommends that you secure the cluster by specifying available roles for all application servers in the cluster, rather than allowing the data server to continue to grant the %All privilege to all ECP connections.
The following are exceptions to this behavior:
InterSystems IRIS always grants ECP connections the %DB_IRISSYS role, since the data server requires Read access to the IRISSYS database to run. This means that a user on an application server with %DB_IRISSYS can access the IRISSYS database on the data server.
To prevent a user on the application server from having access to the IRISSYS database on the data server, there are two options:
Do not grant the user privileges for the %DB_IRISSYS resource.
On the data server, change the name of the resource for the IRISSYS database to something other than %DB_IRISSYS, making sure that the user on the application server has no privileges for that resource.
If the data server has any public resources, they are available to any user on the ECP application server, regardless of either the roles held on the application server or the roles configured for the ECP connection.
To specify the available roles for ECP connections from a specific application server on the data server, do the following:
Go to the Services page (from the portal home page, select Security and then Services) and click %Service_ECP to display the Edit Service dialog.
Click the Edit link for the application server host you want to restrict to display the Select Roles area.
To specify roles for the host, select roles from those listed under Available and click the right arrow to add them to the Selected list.
To remove roles from the Selected list, select them and then click the left arrow.
To add all roles to the Selected list, click the double right arrow; to remove all roles from the Selected list, click the double left arrow.
Click Save to associate the roles with the IP address.
By default, a listed host holds the %All role, but if you specify one or more other roles, these roles are the only roles that the connection holds. Therefore, a connection from a host or IP range with the %Operator role has only the privileges associated with that role, while a connection from a host with no associated roles (and therefore %All) has all privileges.
Changes to the roles available to application server hosts and to the public permissions on resources on the data server require a restart of InterSystems IRIS before taking effect.
Security-Related Error Reporting
The behavior of security-related error reporting with ECP varies depending on whether the check fails on the application server or the data server and the type of operation:
If the check fails on the application server, there is an immediate <PROTECT> error.
For synchronous operations on the data server, there is an immediate <PROTECT> error.
For asynchronous operations on the data server, there is a possibly delayed <NETWORK DATA UPDATE FAILED> error. This includes Set operations.
Monitoring Distributed Cache Applications
A running distributed cache cluster consists of a data server instance — a data provider — connected to one or more application server systems — data consumers. Between each application server and the data server, there is an ECP connection — a TCP/IP connection that ECP uses to send data and commands.
You can monitor the status of the servers and connections in a distributed cache cluster on the ECP Settings page (System Administration > Configuration > Connectivity > ECP Settings).
The ECP Settings page has two subsections:
This System as an ECP Data Server displays settings for the data server as well as the status of the ECP service.
This System as an ECP Application Server displays settings for the application server.
The following sections describe status information for connections:
ECP Connection Information
Click the Data Servers button on the ECP Settings page (System Administration > Configuration > Connectivity > ECP Settings) to display the ECP Data Servers page, which lists the current data server connections on the application server. The ECP Application Servers page, which you can display by clicking the Application Servers button on the ECP Settings page, contains a list of the current application server connections on the data server.
Data Server Connections
The ECP Data Servers page displays the following information for each data server connection:
The logical name of the data server system on this connection, as entered when the server was added to the application server configuration.
The host name of the data server system, as entered when the server was added to the application server configuration.
The IP port number used to connect to the data server.
The current status of this connection. Connection states are described in the ECP Connection States section.
If the current status of this connection is Not Connected or Disabled, you can edit the port and host information of the data server.
From each data server row you can change the status of an existing ECP connection with that data server; see the ECP Connection Operations section for more information.
You can delete the data server information from the application server.
Application Server Connections
Click ECP Application Servers on the ECP Settings page (System Administration > Configuration > Connectivity > ECP Settings) to view the ECP Application Servers page with a list of application server connections on this data server:
The logical name of the application server on this connection.
The current status of this connection. Connection states are described in the ECP Connection States section.
The host name or IP address of the application server.
The port number used to connect to the application server.
ECP Connection States
In an operating cluster, an ECP connection can be in one of the following states:
Not Connected: The connection is defined but has not been used yet.
Connection in Progress: The connection is in the process of establishing itself. This is a transitional state that lasts only until the connection is established.
Normal: The connection is operating normally and has been used recently.
Trouble: The connection has encountered a problem. If possible, the connection automatically corrects itself.
Disabled: The connection has been manually disabled by a system administrator. Any application making use of this connection receives a <NETWORK> error.
The following sections describe each connection state as it relates to application servers or the data server:
Application Server Connection States
The following sections describe the application server side of each of the connection states:
An application server-side ECP connection starts out in the Not Connected state. In this state, there are no ECP daemons for the connection. If an application server process makes a network request, daemons are created for the connection and the connection enters the Connection in Progress state.
In the Connection in Progress state, a network daemon exists for the connection and actively tries to establish a connection to the data server; when the connection is established, it enters the Normal state. While the connection is in the Connection in Progress state, the user process must wait for up to 20 seconds for it to be established. If the connection is not established within that time, the user process receives a <NETWORK> error.
The application server ECP daemon attempts to create a new connection to the data server in the background. If no connection is established within 20 minutes, the connection returns to the Not Connected state and the daemon for the connection goes away.
After a connection completes, it enters the Normal (data transfer) state. In this state, the application server-side daemons exist and actively send requests and receive answers across the network. The connection stays in the Normal state until the connection becomes unworkable or until the application server or the data server requests a shutdown of the connection.
If the connection from the application server to the data server encounters problems, the application server ECP connection enters the Trouble state. In this state, application server ECP daemons exist and actively try to restore the connection. An underlying TCP connection may or may not still exist. The recovery method is similar whether the underlying TCP connection is reset and must be recreated or merely stops working temporarily.
During the application server Time to wait for recovery timeout (default of 20 minutes), the application server attempts to reconnect to the data server to perform ECP connection recovery. During this interval, existing network requests are preserved, but the originating application server-side user process blocks new network requests, waiting for the connection to resume. If the connection returns within the Time to wait for recovery timeout, it returns to the Normal state and the blocked network requests proceed.
For example, if a data server goes offline, any application server connected to it has its state set to Trouble until the data server becomes available. If the problem is corrected gracefully, a connection’s state reverts to Normal; otherwise, if the trouble state is not recovered, it reverts to Not Connected.
Applications continue running until they require network access. All locally cached data is available to the application while the server is not responding.
Transitional recovery states are part of the Trouble state. If there is no current TCP connection to the data server, and a new connection is established, the application server and data server engage in a recovery protocol which flushes the application server cache, recovers transactions and locks, and returns to the Normal state.
Similarly, if the data server shuts down, either gracefully or as a result of a crash, and then restarts, it enters a short period (approximately 30 seconds) during which it allows application servers to reconnect and recover their existing sessions. Once again, the application server and the data server engage in the recovery protocol.
If connection recovery is not complete within the Time to wait for recovery timeout, the application server gives up on connection recovery. Specifically, the application server returns errors to all pending network requests and changes the connection state to Not Connected. If it has not already done so, the data server rolls back all the transactions and releases all the locks from this application server the next time this application server connects to the data server.
If the recovery is successful, the connection returns to the Normal state and the blocked network requests proceed.
An ECP connection is marked Disabled if an administrator declares that it is disabled. In this state, no daemons exist and any network requests that would use that connection immediately receive <NETWORK> errors.
Data Server Connection States
The following sections describe the data server side of each of the connection states:
When an ECP server instance starts up, all incoming ECP connections are in an initial “unassigned” Free state and are available for connections from any application server that is listed in the connection access control list. If a connection from an application server previously existed and has since gone away, but does not require any recovery steps, the connection is placed in the “idle” Free state. The only difference between these two states is that in the idle state, this connection block is already assigned to a particular application server, rather than being available for any application server that passes the access control list.
In the data server Normal state, the application server connection is normal. At any point in the processing of incoming connections, whenever the application server disconnects from the data server (except as part of the data server’s own shutdown sequence), the data server rolls back any pending transactions and releases any incoming locks from that application server, and places the application server connection in the “idle” Free state.
If the application server is not responding, the data server shows a Trouble state. If the data server crashes or shuts down, it remembers the connections that were active at the time of the crash or shutdown. After restarting, the data server waits for a brief time (usually 30 seconds) for application servers to reclaim their sessions (locks and open transactions). If an application server does not complete recovery during this awaiting recovery interval, all pending work on that connection is rolled back and the connection is placed in the “idle” state.
The data server connection is in a recovery state for a very short time while the application server is reclaiming its session. The data server keeps the application server connection in the Trouble state for the Time interval for Troubled state timeout (60 seconds by default), waiting for it to reclaim the connection; if it does not, the data server releases the application server's resources (rolls back all open transactions and releases locks) and then sets the state to Free.
ECP Connection Operations
On the ECP Data Servers page (System Administration > Configuration > Connectivity > ECP Settings, then click the Data Servers button) on an application server, you can change the status of the ECP connection. In each data server row, click Change Status to display the connection information and select the appropriate action from the following choices:
Set the state of this connection to Disabled. This releases any locks held for the application server, rolls back any open transactions involving this connection, and purges cached blocks from the data server. If this is an active connection, the change in status sends an error to all applications waiting for network replies from the data server.
Set the state of this connection to Normal.
Set the state of this connection to Not Connected. As with changing the state to Disabled, this releases any locks held for the application server, rolls back any open transactions involving this connection, and purges cached blocks from the data server. If this is an active connection, the change in status sends an error to all applications waiting for network replies from the data server.
Developing Distributed Cache Applications
This chapter discusses application development and design issues to consider if you would like to deploy your application on a distributed cache cluster, either as an option or as its primary configuration.
With InterSystems IRIS, the decision to deploy an application as a distributed system is primarily a runtime configuration issue (see Deploying a Distributed Cache Cluster). Using InterSystems IRIS configuration tools, map the logical names of your data (globals) and application logic (routines) to physical storage on the appropriate system.
This chapter discusses the following topics:
ECP Recovery Protocol
ECP is designed to automatically recover from interruptions in connectivity between an application server and the data server. In the event of such an interruption, ECP executes a recovery protocol that differs depending on the nature of the failure. The result is that the connection is either recovered, allowing the application processes to continue as though nothing had happened, or reset, forcing transaction rollback and rebuilding of the application processes. The main principles are as follows:
When the connection between an application server and data server is interrupted, the application server attempts to reestablish its connection with the data server, repeatedly if necessary, at an interval determined by the Time between reconnections setting (5 seconds by default).
When the interruption is brief, the connection is recovered.
If the connection is reestablished within the data server’s configured Time interval for Troubled state timeout period (60 seconds by default), the data server restores all locks and open transactions to the state they were in prior to the interruption.
If the interruption is longer, the data server resets the connection, so that it cannot be recovered when the interruption ends.
If the connection is not reestablished within the Time interval for Troubled state, the data server unilaterally resets the connection, allowing it to roll back transactions and release locks from the unresponsive application server so as not to block functioning application servers. When connectivity is restored, the connection is disabled from the application server point of view; all processes waiting for the data server on the interrupted connection receive a <NETWORK> error and enter a rollback-only condition. The next request received by the application server establishes a new connection to the data server.
If the interruption is very long, the application server also resets the connection.
If the connection is not reestablished within the application server’s longer Time to wait for recovery timeout period (20 minutes by default), the application server unilaterally resets the connection; all processes waiting for the data server on the interrupted connection receive a <NETWORK> error and enter a rollback-only condition. The next request received by the application server establishes a new connection to the data server, if possible.
The ECP timeout settings are shown in the following table. Each is configurable on the ECP Settings page of the Management Portal (System Administration > Configuration > Connectivity > ECP Settings), or in the ECP section of the configuration parameter file (CPF); for more information, see ECP in the Configuration Parameter File Reference.
Time between reconnections (CPF: ClientReconnectInterval): default 5 seconds, range 1–60 seconds. The interval at which an application server attempts to reconnect to the data server.
Time interval for Troubled state (CPF: ServerTroubleDuration): default 60 seconds, range 20–65535 seconds. The length of time for which the data server waits for contact from the application server before resetting an interrupted connection.
Time to wait for recovery (CPF: ClientReconnectDuration): default 1200 seconds (20 minutes), range 10–65535 seconds. The length of time for which an application server continues attempting to reconnect to the data server before resetting an interrupted connection.
The default values are intended to do the following:
Avoid tying up data server resources for a long time waiting for an unresponsive application server to become available, when those resources could be used for other application servers.
Give an application server — which has nothing else to do when the data server is not available — the ability to wait out an extended connection interruption for much longer by trying to reconnect at frequent intervals.
ECP relies on the TCP physical connection to detect the health of the instance at the other end without using too much of its capacity. On most platforms, you can adjust the TCP connection failure and detection behavior at the system level.
When an application server connection becomes inactive, the data server maintains an active daemon waiting for new requests to arrive on the connection, or for a new connection to be requested by the application server. If the old connection returns, it can immediately resume operation without recovery. When the underlying heartbeat mechanism indicates that the application server is completely unavailable due to a system or network failure, the underlying TCP connection is quickly reset. Thus, an extended period without a response from an application server generally indicates some kind of problem on the application server that caused it to stop functioning, but without interfering with its connections.
If the underlying TCP connection is reset, the data server puts the connection in an “awaiting reconnection” state in which there is no active ECP daemon on the data server. A new pair of data server daemons is created when the next incoming connection is requested by the application server.
Collectively, the nonresponsive state and the awaiting reconnection state are known as the data server Trouble state. The recovery required in both cases is very similar.
If the data server fails or shuts down, it remembers the connections that were active at the time of the crash or shutdown. After restarting, the data server has a short window (usually 30 seconds) during which it places these interrupted connections in the awaiting reconnection state. In this state, the application server and data server can cooperate together to recover all the transaction and lock states as well as all the pending Set and Kill transactions from the moment of the data server shutdown.
During the recovery of an ECP-configured instance, InterSystems IRIS guarantees a number of recoverable semantics, and also specifies limitations to these guarantees. ECP Recovery Process, Guarantees, and Limitations describes these in detail, as well as providing additional details about the recovery process.
By default, ECP automatically manages the connection between an application server and a data server. When an ECP-configured instance starts up, all connections between application servers and data servers are in the Not Connected state (that is, the connection is defined, but not yet established). As soon as an application server makes a request (for data or code) that requires a connection to the data server, the connection is automatically established and the state changes to Normal. The network connection between the application server and data server is kept open indefinitely.
In some applications, you may wish to close open ECP connections. For example, suppose you have a system, configured as an application server, that periodically (a few times a day) needs to fetch data stored on a data server system, but does not need to keep the network connection with the data server open afterwards. In this case, the application server system can issue a call to the SYS.ECP.ChangeToNotConnected() method to force the state of the ECP connection to Not Connected.
 Do OperationThatUsesECP()
 Do SYS.ECP.ChangeToNotConnected("ConnectionName")
The ChangeToNotConnected method does the following:
Completes sending any data modifications to the data server and waits for acknowledgment from the data server.
Removes any locks on the data server that were opened by the application server.
Rolls back the data server side of any open transactions. The application server side of the transaction goes into a “rollback only” condition.
Completes pending requests with a <NETWORK> error.
Flushes all cached blocks.
After completion of the state change to Not Connected, the next request for data from the data server automatically reestablishes the connection.
See Data Server Connections for information about changing data server connection status from the Management Portal.
Performance and Programming Considerations
To achieve the highest performance and reliability from distributed cache cluster-based applications, you should be aware of the following issues:
Increase Data Server Database Caches for ECP Control Structures
Do Not Use Multiple ECP Channels
InterSystems strongly discourages establishing multiple duplicate ECP channels between an application server and a data server to try to increase bandwidth. You run the dangerous risk of having locks and updates for a single logical transaction arrive out-of-sync on the data server, which may result in data inconsistency.
Increase Data Server Database Caches for ECP Control Structures
In addition to buffering the blocks that are served over ECP, data servers use global buffers to store various ECP control structures. There are several factors that go into determining how much memory these structures might require, but the most significant is a function of the aggregate sizes of the clients' caches. To roughly approximate the requirements, so you can adjust the data server’s database caches if needed, use the following guidelines:
8 KB: 50 MB plus 1% of the sum of the sizes of all of the application servers’ 8 KB database caches.
16 KB (if enabled): 0.5% of the sum of the sizes of all of the application servers’ 16 KB database caches.
32 KB (if enabled): 0.25% of the sum of the sizes of all of the application servers’ 32 KB database caches.
64 KB (if enabled): 0.125% of the sum of the sizes of all of the application servers’ 64 KB database caches.
For example, if the 16 KB block size is enabled in addition to the default 8 KB block size, and there are six application servers, each with an 8 KB database cache of 2 GB and a 16 KB database cache of 4 GB, you should adjust the data server’s 8 KB database cache to ensure that 173 MB (50 MB + [12 GB * .01]) is available for control structures, and the 16 KB cache to ensure that 123 MB (24 GB * .005) is available for control structures (rounding up in both cases).
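As a rough sketch, the guidelines above can be expressed as a calculation; the following ObjectScript is illustrative only (the variable names are not part of any InterSystems API):

```objectscript
 ; Approximate the data server's ECP control-structure reserve, in MB,
 ; assuming six application servers, each with a 2 GB 8 KB cache and a
 ; 4 GB 16 KB cache (the example configuration above).
 set sum8 = 6 * 2 * 1024              ; aggregate 8 KB caches: 12288 MB (12 GB)
 set sum16 = 6 * 4 * 1024             ; aggregate 16 KB caches: 24576 MB (24 GB)
 set reserve8 = 50 + (sum8 * 0.01)    ; 50 MB + 1% = 172.88, round up to 173 MB
 set reserve16 = sum16 * 0.005        ; 0.5% = 122.88, round up to 123 MB
 write "8 KB reserve: ", reserve8, " MB", !
 write "16 KB reserve: ", reserve16, " MB", !
```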
For information about allocating memory to database caches, see Memory and Startup Settings in the “Configuring InterSystems IRIS” chapter of the System Administration Guide.
Evaluate the Effects of Load Balancing User Requests
Connecting users to application servers in a round-robin or load balancing scheme may diminish the benefit of caching on the application server. This is particularly likely if users work in functional groups that have a tendency to read the same data. As these users are spread among application servers, each application server may end up requesting exactly the same data from the data server, which not only diminishes the efficiency of distributed caching by using multiple caches for the same data, but can also lead to increased block invalidation as blocks are modified on one application server and refreshed across other application servers. This assessment is somewhat subjective; someone very familiar with the application's characteristics should consider whether this condition applies.
Restrict Transactions to a Single Data Server
Restrict updates within a single transaction to either a single remote data server or the local server. When a transaction includes updates to more than one server (including the local server) and the TCommit cannot complete successfully, some servers that are part of the transaction may have committed the updates while others may have rolled them back. For details, see Commit Guarantee in “ECP Recovery Guarantees and Limitations”.
Updates to IRISTEMP are not considered part of the transaction for the purpose of rollback, and, as such, are not included in this restriction.
Locate Temporary Globals on the Application Server
Temporary (scratch) globals should be local to the application server, assuming they do not contain data that needs to be globally shared. Often, temporary globals are highly active and write-intensive. If temporary globals are located on the data server, this may penalize other application servers sharing the ECP connection.
Avoid Repeated References to Undefined Globals
Repeated references to a global that is not defined (for example, $Data(^x(1)) where ^x is not defined) always require a network operation to test whether the global is defined on the data server.
By contrast, repeated references to undefined nodes within a defined global (for example, $Data(^x(1)) where any other node in ^x is defined) do not require a network operation once the relevant portion of the global (^x) is in the application server cache.
This behavior differs significantly from that of a non-networked application. With local data, repeated references to the undefined global are highly optimized to avoid unnecessary work. Designers porting an application to a networked environment may wish to review the use of globals that are sometimes defined and sometimes not. Often it is sufficient to make sure that some other node of the global is always defined.
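The difference can be illustrated with a toy cache model (a simplification for illustration only; it does not reflect actual ECP internals):

```python
# Toy model of why $Data on a completely undefined global always costs a
# network round trip, while undefined nodes inside a cached, defined global
# resolve locally on the application server.
class AppServerCache:
    def __init__(self, data_server):
        self.data_server = data_server   # dict: global name -> {node: value}
        self.cached = {}                 # globals whose blocks are cached
        self.round_trips = 0

    def data(self, glob, node):
        """Rough analogue of $Data: does this node exist?"""
        if glob in self.cached:
            # Relevant blocks are cached; undefined nodes resolve locally.
            return node in self.cached[glob]
        self.round_trips += 1            # must ask the data server
        if glob in self.data_server:
            self.cached[glob] = self.data_server[glob]
            return node in self.cached[glob]
        return False                     # undefined global: nothing to cache

server = {"^x": {1: "a"}}
app = AppServerCache(server)
app.data("^x", 2); app.data("^x", 3)    # one round trip, then cached
app.data("^y", 1); app.data("^y", 1)    # ^y undefined: a round trip every time
print(app.round_trips)
```

Defining some other node of ^y would let the model (like ECP) cache the global and stop paying for the repeated checks.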
Avoid the Use of Stream Fields
A stream field within a query results in a read lock, which requires a connection to the data server. For this reason, such queries do not benefit from the database cache, which means their performance does not improve on the second and subsequent invocations.
Use the $Increment Function for Application Counters
A common operation in online transaction processing systems is generating a series of unique values for use as record numbers or the like. In a typical relational application, this is done by defining a table that contains a “next available” counter value. When the application needs a new identifier, it locks the row containing the counter, increments the counter value, and releases the lock. Even on a single-server system, this becomes a concurrency bottleneck: application processes spend more and more time waiting for the lock on this common counter to be released. In a networked environment, the bottleneck becomes even more severe.
InterSystems IRIS addresses this by providing the $Increment function, which automatically increments a counter value (stored in a global) without any need of application-level locking. Concurrency for $Increment is built into the InterSystems IRIS database engine as well as ECP, making it very efficient for use in single-server as well as in distributed applications.
Applications built using the default structures provided by InterSystems IRIS objects (or SQL) automatically use $Increment to allocate object identifier values. $Increment is a synchronous operation involving journal synchronization when executed over ECP. For this reason, $Increment over ECP is a relatively slow operation, especially compared to others which may or may not already have data cached (in either the application server database cache or the data server database cache). The impact of this may be even greater in a mirrored environment due to network latency between the failover members. For this reason, it may be useful to redesign an application to replace $Increment with the $Sequence function, which automatically assigns batches of new values to each process on each application server, involving the data server only when a new batch of values is needed. (This approach cannot be used, however, when consecutive application counter values are required.) $Sequence can also be used in combination with $Increment.
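The trade-off between a per-value shared counter and batch allocation can be sketched as follows (an illustrative model with hypothetical class names; the real $Increment and $Sequence functions are built into the platform and are not shown here):

```python
# Model of $Increment-style allocation (one synchronous data server operation
# per ID) versus $Sequence-style batch allocation (one data server operation
# per batch of IDs, with the batch consumed locally).
class DataServerCounter:
    """Stands in for a counter global on the data server."""
    def __init__(self):
        self.value = 0
        self.server_ops = 0

    def increment(self, by=1):
        self.server_ops += 1      # each call is a round trip to the data server
        self.value += by
        return self.value

class BatchAllocator:
    """Hands out IDs from a locally cached batch, like $Sequence."""
    def __init__(self, counter, batch_size=20):
        self.counter, self.batch_size = counter, batch_size
        self.next_id = self.limit = 0

    def next(self):
        if self.next_id >= self.limit:
            # Batch exhausted: one round trip reserves the next batch.
            self.limit = self.counter.increment(self.batch_size)
            self.next_id = self.limit - self.batch_size
        self.next_id += 1
        return self.next_id

shared = DataServerCounter()
alloc = BatchAllocator(shared, batch_size=20)
ids = [alloc.next() for _ in range(50)]
print(len(set(ids)), shared.server_ops)
```

Fifty unique IDs cost only three data server operations instead of fifty; the corresponding caveat, as noted above, is that IDs handed out across multiple processes or application servers are unique but not consecutive.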
ECP-Related Errors
There are several runtime errors that can occur on a system using ECP. An ECP-related error may occur immediately after a command is executed or, for commands that are asynchronous in nature, such as Kill, a short time after the command completes.
A <NETWORK> error indicates an error that could not be handled by the normal ECP recovery mechanism.
In an application, it is always acceptable to halt a process or roll back any pending work whenever a <NETWORK> error is received. Some <NETWORK> errors are essentially fatal error conditions. Others indicate a temporary condition that might clear up soon; however, the expected programming practice is to always roll back any pending work in response to a <NETWORK> error and start the current transaction over from the beginning.
A <NETWORK> error on a get-type request such as $Data or $Order can often be retried manually rather than simply rolling back the transaction immediately. ECP tries to avoid giving a <NETWORK> error that would lose data, but gives an error more freely for requests that are read-only.
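This policy can be sketched as a retry wrapper (hypothetical helper names, shown only to illustrate the recommended practice for read-only requests):

```python
# Sketch of the error-handling policy described above: get-type requests may
# be retried after a <NETWORK> error, up to a small number of attempts.
class NetworkError(Exception):
    """Stands in for a <NETWORK> error raised by a remote request."""

def with_retry(read_op, attempts=3):
    """Retry a read-only request a few times before giving up."""
    for attempt in range(attempts):
        try:
            return read_op()
        except NetworkError:
            if attempt == attempts - 1:
                raise               # condition did not clear; surface the error

calls = {"n": 0}
def flaky_read():
    """Simulates a read that fails twice, then succeeds."""
    calls["n"] += 1
    if calls["n"] < 3:
        raise NetworkError("<NETWORK>")
    return "value"

print(with_retry(flaky_read))       # succeeds on the third attempt
```

Update requests inside a transaction should not be retried this way; per the guidance above, the safe response to a <NETWORK> error during a transaction is to roll back any pending work and restart the transaction from the beginning.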
Rollback Only Condition
The application-side rollback-only condition occurs when the data server detects a network failure during a transaction initiated by the application server; the application server then enters a state in which all network requests to that data server are rejected with errors until the transaction is rolled back.
ECP Recovery Process, Guarantees, and Limitations
The ECP recovery protocol is summarized in ECP Recovery Protocol. This section describes ECP recovery in detail, including its guarantees and limitations.
The simplest case of ECP recovery is a temporary network interruption that is long enough to be noticed, but short enough that the underlying TCP connection stays active during the outage. During the outage, the application server notices that the connection is nonresponsive and blocks new network requests for that connection. Once the connection resumes, processes that were blocked are able to send their pending requests.
If the underlying TCP connection is reset, the data server waits for a reconnection for the Time interval for Troubled state setting (one minute by default). If the application server does not succeed in reconnecting during that interval, the data server resets its connection, rolls back its open transactions, and releases its locks. Any subsequent connection from that application server is converted into a request for a brand new connection and the application server is notified that its connection is reset.
The application server keeps a queue of locks to remove and transactions to roll back once the connection is reestablished. By keeping this queue, processes on the application server can always halt, whether or not the data server on which they have pending transactions and locks is currently available. Before it completes the release of locks, ECP recovery completes any pending Set and Kill operations that had been queued for the data server before the network outage was detected.
Any time a data server learns that an application server has reset its own connection (due to application server restart, for example), even if it is still within the Time interval for Troubled state, the data server resets the connection immediately, rolling back transactions and releasing locks on behalf of that application server. Since the application server’s state was reset, there is no longer any state to be maintained by the data server on its behalf.
The final case is when the data server shuts down, either gracefully or as a result of a crash. The application server maintains the application state and tries to reconnect to the data server for the Time to wait for recovery setting (20 minutes by default). The data server remembers the application server connections that were active at the time of the crash or shutdown; after restarting, it waits up to thirty seconds for those application servers to reconnect and recover their connections. Recovery involves several steps on the data server, some of which involve the data server journal file in very significant ways. The result of these steps is that:
The data server’s view of the current active transactions from each application server has been restored from the data server’s journal file.
The data server’s view of the current active Lock operations from each application server has been restored, by having the application server upload those locks to the data server.
The application server and the data server agree on exactly which requests from the application server can be ignored (because it is certain they completed before the crash) and which ones should be replayed. Therefore, the last recovery step is to simply let the pending network requests complete, but only those network requests that are safe to replay.
Finally, the application server delivers to the data server any pending unlock or rollback indications that it saved from jobs that halted while the data server was restarting. All guarantees are maintained, even in the face of sudden and unanticipated data server crashes, as long as the integrity of the storage devices (for the database, WIJ, and journal files) is maintained.
During the recovery of an ECP-configured system, InterSystems IRIS guarantees a number of recoverable semantics which are described in detail in ECP Recovery Guarantees. Limitations to these guarantees are described in detail in the ECP Recovery Limitations section of the aforementioned appendix.
ECP Recovery Guarantees
During the recovery of an ECP-configured system, InterSystems IRIS guarantees the following recoverable semantics:
In the description of each guarantee the first paragraph describes a specific condition. Subsequent paragraphs describe the data guarantee applicable to that particular situation.
In these descriptions, Process A, Process B and so on refer to processes attempting to update globals on a data server. These processes may originate on the same or different application servers, or on the data server itself; in some cases the origins of the processes are specified, in others they are not germane.
In-order Updates Guarantee
Process A updates two data elements sequentially, first global ^x and next global ^y, where ^x and ^y are located on the same data server.
If Process B sees the change to ^y, it also sees the change to ^x. This guarantee applies whether or not Process A and Process B are on the same application server as long as the two data items are on the same data server and the data server remains up.
Process B’s ability to view the data modified by Process A does not ensure that Set operations from Process B are restored after the Set operations from Process A. Only a Lock or a $Increment operation can ensure proper ordering of competing Set and Kill operations from two different processes during cluster failover or cluster recovery.
See the Loose Ordering in Cluster Failover or Restore limitation regarding the order in which competing Set and Kill operations from separate processes are applied during cluster dejournaling and cluster failover.
This guarantee does not apply if the data server crashes, even if ^x and ^y are journaled. See the Dirty Data Reads in ECP Without Locking limitation for a case in which processes that fit this description can see dirty data that never becomes durable before the data server crash.
ECP Lock Guarantee
Process B on DataServer S acquires a lock on global ^x, which was once locked by Process A.
Process B can see all updates on DataServer S done by Process A (while holding a lock on ^x). Also, if Process C sees the updates done by Process B on DataServer S (while holding a lock on ^x), Process C is guaranteed to also see the updates done by Process A on DataServer S (while holding a lock on ^x).
Serializability is guaranteed whether or not Process A, Process B, and Process C are located on the same application server or on DataServer S itself, as long as DataServer S stays up throughout.
The lock and the data it protects must reside on the same data server.
Clusters Lock Guarantee
Process B on a cluster member acquires a lock on global ^x in a clustered database; a lock once held by Process A.
Process B sees all updates to any clustered database done by Process A (while holding a lock on ^x).
Additionally, if Process C on a cluster member sees the updates on a clustered database made by Process B (while holding a lock on ^x), Process C also sees the updates made by Process A on any clustered database (while holding a lock on ^x).
Serializability is guaranteed whether or not Process A, Process B, and Process C are located on the same cluster member, and whether or not any cluster member crashes.
See the Dirty Data Reads When Cluster Member Crashes limitation regarding transactions on one cluster member seeing dirty data from a transaction on a cluster member that crashes.
ECP Transaction Rollback Guarantee
Process A executes a TStart command, followed by a series of updates, and either halts before issuing a TCommit, or executes a TRollback before executing a TCommit.
All the updates made by Process A as part of the transaction are rolled back in the reverse order in which they originally occurred.
See the rollback-related limitations: Conflicting, Non-Locked Change Breaks Rollback, Journal Discontinuity Breaks Rollback, and Asynchronous TCommit Converts to Rollback for more information.
ECP Transaction Commit Guarantee
Process A makes a series of updates on DataServer S and halts after starting the execution of a TCommit.
On each data server that is part of the transaction, the data modifications on that server are either committed or rolled back. If the process that executes the TCommit has the Perform Synchronous Commit property turned on (SynchCommit=1 in the configuration file) and the TCommit operation returns without errors, the transaction is guaranteed to have durably committed on all the data servers that are part of the transaction.
If the transaction includes updates to more than one server (including the local server) and the TCommit cannot complete successfully, some servers that are part of the transaction may have committed the updates while others may have rolled them back.
Transactions and Locks Guarantee
Process A executes a TStart for Transaction T, locks global ^x on DataServer S, and unlocks ^x (unlock does not specify the “immediate unlock” lock type).
InterSystems IRIS guarantees that the lock on ^x is not released until the transaction has been either committed or rolled back. No other process can acquire a lock on ^x until Transaction T either commits or rolls back on DataServer S.
Once Transaction T commits on DataServer S, Process B that acquires a lock on ^x sees changes on DataServer S made by Process A during Transaction T. Any other process that sees changes on DataServer S made by Process B (while holding a lock on ^x) sees changes on DataServer S made by Process A (while executing Transaction T). Conversely, if Transaction T rolled back on DataServer S, a Process B that acquires a lock on ^x, sees none of the changes made by Process A on DataServer S.
See the Conflicting, Non-Locked Change Breaks Rollback limitation for more information.
ECP Rollback Only Guarantee
Process A on AppServer C makes changes on DataServer S that are part of a Transaction T, and DataServer S unilaterally rolls back those changes (which can happen in certain network outages or data server outages).
All subsequent network requests to DataServer S by Process A are rejected with <NETWORK> errors until Process A explicitly executes a TRollback command.
Additionally, if any process on AppServer C completes a network request to DataServer S between the rollback on DataServer S and the TCommit of Transaction T (AppServer C finds out about the rollback-only condition before the TCommit), Transaction T is guaranteed to roll back on all data servers that are part of Transaction T.
ECP Transaction Recovery Guarantee
A data server crashes in the middle of an application server transaction, restarts, and completes recovery within the application server recovery timeout interval.
The transaction can be completed normally without violating any of the described guarantees. The data server does not perform any data operations that violate the ordering constraints defined by lock semantics. The only exception is the $Increment function (see the ECP and Clusters $Increment Limitation section for more information). Any transactions that cannot be recovered are rolled back in a way that preserves lock semantics.
InterSystems IRIS expects, but does not guarantee, that in the absence of continuing faults (whether in the network, the data server, or the application server hardware or software), all or most of the transactions pending on a data server at the time of a data server outage are recovered.
ECP Lock Recovery Guarantee
DataServer S has an unplanned shutdown, restarts, and completes recovery within the recovery interval.
The ECP Lock Guarantee still applies as long as all the modified data is journaled. If data is not being journaled, updates made to the data server before the crash can disappear without notice to the application server. InterSystems IRIS no longer guarantees that a process that acquires the lock sees all the updates that were made earlier by other processes while holding the lock.
If DataServer S shuts down gracefully, restarts, and completes recovery within the recovery interval, the ECP Lock Guarantee still applies whether or not data is being journaled.
Updates that are part of a transaction are always journaled; the ECP Transaction Recovery Guarantee applies in a stronger form. Other updates may or may not be journaled, depending on whether or not the destination global in the destination database is marked for journaling.
$Increment Ordering Guarantee
The $Increment function induces a loose ordering on a series of Set and Kill operations from separate processes, even if those operations are not protected by a lock.
For example: Process A performs some Set and Kill operations on DataServer S and performs a $Increment operation to a global ^x on DataServer S. Process B performs a subsequent $Increment to the same global ^x. Any process, including Process B, that sees the result of Process B incrementing ^x, sees all changes on DataServer S that Process A made before incrementing ^x.
See the ECP and Clusters $Increment Limitation section for more information.
ECP Sync Method Guarantee
Process A updates a global located on Data Server S, and issues a $system.ECP.Sync() call to S. Process B then issues a $system.ECP.Sync() to S. Process B can see all updates performed by Process A on Data Server S prior to its $system.ECP.Sync() call.
$system.ECP.Sync() is relevant only for processes running on an application server. If either Process A or Process B is running on DataServer S itself, that process does not need to issue $system.ECP.Sync(). If both are running on DataServer S, neither needs it; in that case the guarantee is simply the statement that global updates are immediately visible to processes running on the same server.
$system.ECP.Sync() does not guarantee durability; see the Dirty Data Reads in ECP Without Locking limitation.
ECP Recovery Limitations
During the recovery of an ECP-configured system, there are the following limitations to the InterSystems IRIS guarantees:
ECP and Clusters $Increment Limitation
If a data server crashes while the application server has a $Increment request outstanding to the data server and the global is journaled, InterSystems IRIS attempts to recover the $Increment results from the journal; it does not re-increment the reference.
ECP Cache Liveness Limitation
In the absence of continuing faults, application servers observe data that is no more than a few seconds out of date, but this is not guaranteed. Specifically, if an ECP connection to the data server becomes nonfunctional (network problems, data server shutdown, data server backup operation, and so on), the user process may observe data that is arbitrarily stale, up to an application server connection-timeout value. To ensure that data is not stale, use the Lock command around the data-fetch operation, or use $system.ECP.Sync. Any network request that makes a round trip to the data server updates the contents of the application server ECP network cache.
ECP Routine Revalidation Limitation
If an application server downloads routines from a data server and the data server restarts (planned or unplanned), the routines downloaded from the data server are marked as if they had been edited.
Additionally, if the connection to the data server suffers a network outage (neither application server nor data server shuts down), the routines downloaded from the data server are marked as if they had been edited. In some cases, this behavior causes spurious <EDITED> errors as well as <ERRTRAP> errors.
Conflicting, Non-Locked Change Breaks Rollback
In InterSystems IRIS, the Lock command is only advisory. If Process A starts a transaction that is updating global ^x under protection of a lock on global ^y, and another process modifies ^x without the protection of a lock on ^y, the rollback of ^x does not work.
On the rollback of Set and Kill operations, if the current value of the data item is what the operation set it to, the value is reset to what it was before the operation. If the current value is different from what the specific Set or Kill operation set it to, the current value is left unchanged.
If a data item is sometimes modified inside a transaction, and sometimes modified outside of a transaction and outside the protection of a Lock command, rollback is not guaranteed to work. To be effective, locks must be used everywhere a data item is modified.
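The rollback rule described above can be modeled as follows (a toy model, not actual journal-restore code):

```python
# Toy model of the rollback rule: a Set is undone only if the data item still
# holds the value that Set wrote; otherwise a conflicting, non-locked change
# has intervened and the current value is left unchanged.
def roll_back_set(db, key, old_value, written_value):
    if db.get(key) == written_value:
        if old_value is None:
            db.pop(key, None)     # the Set created the node; remove it
        else:
            db[key] = old_value   # restore the pre-Set value
    # else: another process changed it without the lock; leave it alone

db = {"^x": 1}
db["^x"] = 2                      # transactional Set, old value 1
roll_back_set(db, "^x", 1, 2)     # clean rollback restores 1

db["^y"] = 5                      # transactional Set, node did not exist before
db["^y"] = 9                      # conflicting non-locked change by another process
roll_back_set(db, "^y", None, 5)  # current value differs from 5, so 9 is kept
print(db)
```

The second case is exactly the broken rollback the text warns about: because the conflicting writer did not take the protecting lock, the transactional change to ^y cannot be undone.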
Journal Discontinuity Breaks Rollback
Rollback depends on the reliability and completeness of the journal. If something interrupts the continuity of the journal data, rollbacks do not succeed past the discontinuity. InterSystems IRIS silently ignores this type of transaction rollback.
A journal discontinuity can be caused by executing ^JRNSTOP while InterSystems IRIS is running, by deleting the Write Image Journal (WIJ) file after an InterSystems IRIS shutdown and before restart, or by an I/O error during journaling on a system that is not set to freeze the system on journal errors.
ECP Can Miss Error After Recovery
A Set or Kill operation completes on a data server, but receives an error. The data server crashes after completing that packet, but before delivering that packet to the application server system.
ECP recovery does not replay this packet, and the application server never learns of the error; as a result, Set or Kill operations are missing on the data server without the application server’s knowledge.
Partial Set or Kill Leads to Journal Mismatch
There are certain cases where a Set or Kill operation can be journaled successfully, but receive an error before actually modifying the database. Given the particular way rollback of a data item is defined, this should not ever break transaction rollback; but the state of a database after a journal restore may not match the state of that database before the restore.
Loose Ordering in Cluster Failover or Restore
Cluster dejournaling is loosely ordered. The journal files from the separate cluster members are only synchronized wherever a lock, a $Increment, or a journal marker event occurs. This affects the database state after either a cluster failover or a cluster crash where the entire cluster must be brought down and restored. The database may be restored to a state that is different from the state just before the crash. The $Increment Ordering Guarantee places additional constraints on how different the restored database can be from its original form before the crash.
Process B’s ability to view the data modified by Process A does not ensure that Set operations from Process B are restored after the Set operations from Process A. Only a Lock or a $Increment operation can ensure proper ordering of competing Set and Kill operations from two different processes during cluster failover or cluster recovery.
Dirty Data Reads When Cluster Member Crashes
Cluster Member A completes updates in Transaction T1 and commits that transaction, but in non-synchronous transaction commit mode. Transaction T2 on a different cluster member, Member B, acquires the locks once owned by Transaction T1. Cluster Member A crashes before all the information from Transaction T1 is written to disk.
Transaction T1 is rolled back as part of cluster failover. However, Transaction T2 on Member B could have seen data from Transaction T1 that later was rolled back as part of cluster failover, despite following the rules of the locking protocol. Additionally, if Transaction T2 has modified some of the same data items as Transaction T1, the rollback of Transaction T1 may fail because only some of the transaction data has rolled back.
A workaround is to use synchronous commit mode for transactions on cluster Member A. When using synchronous commit mode, Transaction T1 is durable on disk before its locks are released, so Transaction T1 is not rolled back once the application sees that it is complete.
Dirty Data Reads in ECP Without Locking
If an incoming ECP transaction reads data without locking, it may see data that is not yet durable on disk and that may disappear if the data server crashes. A connection can see such nondurable data only when it was set by other ECP connections or by the local data server system itself; it can never see nondurable data set by the connection itself. There is no possibility of seeing nondurable data when locking is used both in the process reading the data and the process writing the data. This is a violation of the In-order Updates Guarantee, and there is no easy workaround other than to use locking.
Asynchronous TCommit Converts to Rollback
If the data server side of a transaction receives an asynchronous error condition, such as a <FILEFULL>, while updating a database, and the application server does not see that error until the TCommit, the transaction is automatically rolled back on the data server. However, while TCommit operations are usually asynchronous, rollbacks are synchronous, because a rollback changes blocks that the application server must be notified of before the application server process surrenders any locks.
The data server and the database are fine, but on the application server, if the locks are traded to another process, that process may temporarily see data that is about to be rolled back. In practice, however, the application server rarely does anything that causes asynchronous errors.