
ICM Reference

Important:

As of release 2023.3 of InterSystems IRIS, InterSystems Cloud Manager (ICM) is deprecated; it will be removed from future versions.

This chapter provides detailed information about various aspects of ICM and its uses.

ICM Commands and Options

The first table that follows lists the commands that can be executed on the ICM command line. Each of the commands is covered in detail in the “Using ICM” chapter.

The second table lists the options that can be used with commands. Command-line options serve two purposes, as follows:

  • To provide required or optional arguments to commands. For example, to list the run state of only the InterSystems IRIS containers in your deployment, you could use this command:

    icm ps -container iris
    

    To execute a command to open a shell inside the container deployed on node ANDY-DM-TEST, you could use this command:

    icm exec -command bash -machine ANDY-DM-TEST -interactive
    
  • To override a field’s default or configuration file value, in one of two ways:

    • The -image, -namespace, and -iscPassword options can be used to override the values of the DockerImage, Namespace, and ISCPassword fields, respectively, in any command, including icm provision; see the example following this list.

    • Following the provisioning phase, the -overrides option can be used to override the values of one or more fields for the current command only. For example, assume your defaults file includes the following fields:

      "DockerUsername": "prodriguez",
      "DockerPassword": "xxxxxxx",
      "DockerRegistry": "https://containers.intersystems.com",
      "DockerImage": "containers.intersystems.com/intersystems/iris:2022.1.0.223.0",
      

      When executing the icm provision command, you could override the DockerImage field using the -image option, but you could not use an image from a different registry, because there is no way to override the registry location and credentials. With the icm upgrade command, however, you can specify an image from a different registry by using the -overrides option to override all three fields, for example:

      icm upgrade -overrides '{"DockerUsername":"mwyszynska","DockerPassword":"xxxxxx",
        "DockerRegistry":"docker.io"}' -image docker.io/acme/iris:latest-em
      
      Note:

      When -overrides is used with the icm run, icm install, or icm upgrade command to specify field values that are intended to persist, those fields should also be updated in the instances.json file so they are not reverted during a subsequent reprovisioning operation. Following the icm upgrade command above, for example, the DockerImage, DockerRegistry, DockerUsername, and DockerPassword fields should be updated in the instances file. (The -image, -namespace, and -iscPassword options do this automatically.)
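
For example, to override the Namespace field for a single deployment, you could pass the -namespace option to the icm run command (the namespace name shown is illustrative):

    icm run -namespace DEVCLUSTER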

Both tables include links to relevant text.

Note:

The command table does not list every option that can be used with each command, and the option table does not list every command that can include each option.

ICM Commands
Command Description Important Options

provision

Provisions host nodes

n/a

inventory

Lists provisioned host nodes

-machine, -role, -json, -options

unprovision

Destroys host nodes

-stateDir, -cleanUp, -force

merge

Merges infrastructure provisioned in separate regions or provider platforms into a new definitions file for multiregion or multiprovider deployment

-options, -localPath

ssh

Executes an operating system command on one or more host nodes

-command, -machine, -role

scp

Copies a local file to one or more host nodes

-localPath, -remotePath, -machine, -role

run

Deploys a container on host nodes

-image, -container, -namespace, -options, -iscPassword, -command, -machine, -role, -overrides

ps

Displays run states of containers deployed on host nodes

-container, -json

stop

Stops containers on one or more host nodes

-container, -machine, -role

start

Starts containers on one or more host nodes

-container, -machine, -role

pull

Downloads an image to one or more host nodes

-image, -container, -machine, -role

rm

Deletes containers from one or more host nodes

-container, -machine, -role

upgrade

Replaces containers on one or more host nodes

-image, -container, -machine, -role, -overrides

exec

Executes an operating system command in one or more containers

-container, -command, -interactive, -options, -machine, -role

session

Opens an interactive session for an InterSystems IRIS instance in a container or executes an InterSystems IRIS ObjectScript snippet on one or more instances

-namespace, -command, -interactive, -options, -machine, -role

cp

Copies a local file to one or more containers

-localPath, -remotePath, -machine, -role

sql

Executes a SQL statement on one or more InterSystems IRIS instances

-namespace, -command, -machine, -role

install

Installs InterSystems IRIS instances from a kit in containerless mode

-machine, -role, -overrides

uninstall

Uninstalls InterSystems IRIS instances installed from a kit in containerless mode

-machine, -role

docker

Executes a Docker command on one or more host nodes

-container, -machine, -role
ICM Command-Line Options
Option Description Default Described in
-help Display command usage information and ICM version   ---
-version Display ICM version   ---
-verbose Show execution detail false (can be used with any command)
-force Don't confirm before unprovisioning false Unprovision the Infrastructure
-cleanUp Delete state directory after unprovisioning false Unprovision the Infrastructure
-machine regexp Machine name pattern match used to specify the node or nodes for which the command is run (all) icm inventory, icm ssh, icm run, icm exec, icm session
-role role Role of the InterSystems IRIS instance or instances for which a command is run, for example DATA or AM (all) icm inventory, icm ssh, icm run, icm exec, icm session
-namespace namespace Namespace to create on deployed InterSystems IRIS instances and set as default execution namespace for the session and sql commands IRISCLUSTER The Definitions File, icm run, icm session, icm sql
-image image Docker image to deploy; must include repository name DockerImage value in definitions file icm run, icm upgrade
-overrides '{"field":"value",...}' Field value(s) to override for this command. none ICM Commands and Options
-options options Docker options to include in the command none icm inventory, icm run, icm exec, icm session, Deploying Across Multiple Regions or Providers, Using ICM with Custom and Third-Party Containers
-container name Name of the container icm ps command: (all); other commands: iris icm run, icm ps
-command cmd Command or query to execute none icm ssh, icm run, icm exec, icm session, icm sql
-interactive Redirect input/output to console for the exec and ssh commands false icm ssh, icm exec, icm sql

-localPath path File or directory path on a node’s local file system (icm cp) or within the ICM container (icm scp) none icm cp, icm scp, Containerless Deployment, Remote Script Invocation
-remotePath path File or directory path within a container (icm cp) or on a node’s local file system (icm scp) /home/SSHUser (value of SSHUser field) icm cp, icm scp, Containerless Deployment, Remote Script Invocation
-iscPassword password Password for deployed InterSystems IRIS instances iscPassword value in configuration file icm run
-json Enable JSON response mode false Using JSON Mode
Important:

Use of the -verbose option, which is intended for debugging purposes only, may expose the value of iscPassword and other sensitive information, such as DockerPassword. When you use this option, you must either use the -force option as well or confirm that you want to use verbose mode before continuing.

ICM Configuration Parameters

These tables describe the fields you can include in the configuration files (see Configuration, State and Log Files in the “Essential ICM Elements” chapter and Define the Deployment in the “Using ICM” chapter) to provide ICM with the information it needs to execute provisioning and deployment tasks and management commands. To look up a parameter by name, use the alphabetical list, which includes links to the tables containing the parameter definitions.

General Parameters

The fields in the following table are used with all cloud providers, and some are used with vSphere and Preexisting deployments as well.

The two rightmost columns indicate whether each parameter is required in every deployment or optional, and whether it must be included (when used) in either defaults.json or definitions.json, is recommended for one file or the other, or can be used in either. For example,

  • A single deployment is always on a single selected provisioning platform (even if subsequently merged with another to create a multiprovider deployment), therefore the Provider parameter is required and must be in the defaults file.

  • Each node type must be specified but a deployment can include multiple node types, thus the Role parameter is required in each definition in the definitions file.

  • Because each node that runs InterSystems IRIS must have a license, but other nodes don’t need one, the LicenseKey setting is required and generally appears in the appropriate definitions in the definitions file.

  • At least one container must be deployed on each node in the deployment, but a single container may be deployed on all the nodes (for instance iris/iris-arm64 across a sharded cluster consisting of DATA nodes only) or different containers on different node types (iris/iris-arm64 on DM and AM, webgateway on WS, arbiter on AR in a distributed cache cluster). For this reason the DockerImage parameter is required and can appear in the defaults file, the definitions file, or both (to specify a default image but override it for one or more node types); see the sketch following this list.

  • Like the image to be deployed, the size of the OS volume can be specified for all nodes in the defaults file, for one or more node types in the definitions file, or in both, but because it has a default it is optional.
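
To make the defaults/definitions split concrete, the following sketch shows a default image in defaults.json overridden for one node type in definitions.json; the label, license key file name, and image names are illustrative rather than prescribed values.

    defaults.json (fragment):

        "Provider": "AWS",
        "Label": "ANDY",
        "Tag": "TEST",
        "DockerImage": "containers.intersystems.com/intersystems/iris:2022.1.0.223.0"

    definitions.json (fragment):

        [
            {"Role": "DATA", "Count": "2", "LicenseKey": "sharding-iris.key"},
            {"Role": "AR", "Count": "1",
             "DockerImage": "containers.intersystems.com/intersystems/arbiter:2022.1.0.223.0"}
        ]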

Note:

If no default is listed for a parameter, it does not have one.

Parameter Definition Use is ... Config file
Provider Platform to provision infrastructure on; see Provisioning Platforms. required defaults

Label

Tag

Fields in naming scheme for provisioned cloud nodes: Label-Role-Tag-NNNN, for example ANDY-DATA-TEST-0001; should indicate ownership and purpose, to avoid conflicting with others. Multiple deployments should not share the same Label and Tag. Cannot contain dashes. required defaults
LicenseDir Location of InterSystems IRIS license keys staged in the ICM container and individually specified by the LicenseKey field (below); see InterSystems IRIS Licensing for ICM. required defaults
LicenseKey License key for the InterSystems IRIS instance on one or more provisioned DATA, COMPUTE, DM, AM, DS, or QS nodes, staged within the ICM container in the location specified by the LicenseDir field (above). In a configuration containing only DM and AM nodes, a standard license can be used; for all others (that is, sharded clusters), a sharding-enabled license is required. required definitions recommended
Region

(Azure equivalent: Location)

Geographical region of the provider’s compute resources in which infrastructure is provisioned. For information on deploying a single configuration in more than one region, see Deploying Across Multiple Regions or Providers. Provider-specific information is available in each provider’s documentation. required defaults
Zone Availability zone within the specified region (see above) in which to locate a node or nodes to be provisioned. For information on deploying a single configuration in more than one zone, see Deploying Across Multiple Zones. Provider-specific information is available in each provider’s documentation. required defaults

ZoneMap

When deploying across multiple zones (see Deploying Across Multiple Zones), specifies which nodes are deployed in which zones. Default: 0,1,2,...,255.

optional definitions
Mirror If true, InterSystems IRIS instances on DATA, DM, and DS nodes are deployed as mirrors; see Mirrored Configuration Requirements. Default: false. optional defaults
MirrorMap Determines mirror member types of mirrored DATA, DS, and DM nodes, enabling deployment of DR async mirror members; see Rules for Mirroring. Default: primary,backup; the term async can be added one or more times to this, for example primary,backup,async,async. optional definitions
ISCPassword Password that will be set for the predefined user accounts on the InterSystems IRIS instances on one or more provisioned nodes. Corresponding command-line option: -iscPassword. If both parameter and option are omitted, ICM prompts for the password. For more information see The icm run Command. optional defaults
Namespace Namespace to be created on deployed InterSystems IRIS instances. This namespace is the default namespace for the icm session and icm sql commands, and can also be specified or overridden by the command-line option -namespace. Default: IRISCLUSTER. optional defaults
DockerImage Docker image to be deployed by the icm run command. Must include the repository name (see Repositories in the Docker documentation). Can be specified for all nodes in defaults.json and optionally overridden for specific node definitions in definitions.json. Can also be specified or overridden using the command-line option -image. required
DockerRegistry DNS name of the server hosting the Docker repository storing the image specified by DockerImage (see About Registry in the Docker documentation). If not included, ICM uses Docker’s public registry at docker.com. For information about the InterSystems Container Registry (ICR), see Downloading the ICM Image in the “Using ICM” chapter. required defaults
DockerUsername Username to use along with DockerPassword (below) for logging in to the Docker repository specified in DockerImage (above) on the registry specified by DockerRegistry (above). Not required for public repositories. If not included and the repository specified by DockerImage is private, login fails. required defaults
DockerPassword Password to use along with DockerUsername (above) for logging in to the Docker registry. Not required for public repositories. If this field is not included and the repository specified by DockerImage is private, ICM prompts you (with masked input) for a password. (If the value of this field contains special characters such as $, |, (, and ), they must be escaped with two \ characters; for example, the password abc$def must be specified as abc\\$def.) required defaults
DockerVersion Version of Docker installed on provisioned nodes. The version in each /Samples/.../defaults.json is generally correct for the platform; however, if your organization uses a different version of Docker, you may want that version installed on the nodes instead.
Important:

Container images from InterSystems comply with the Open Container Initiative (OCI) specification, and are built using the Docker Enterprise Edition engine, which fully supports the OCI standard and allows the images to be certified and featured in the Docker Hub registry.

InterSystems images are built and tested using the widely popular Ubuntu container operating system, and are therefore supported on any OCI-compliant runtime engine on Linux-based operating systems, both on premises and in public clouds.

optional defaults

DockerURL

URL of the Docker Enterprise Edition repository associated with your subscription or trial; when provided, triggers installation of Docker Enterprise Edition on provisioned nodes, instead of Docker Community Edition. For more information about Docker EE, see Docker Enterprise in the Docker documentation.

optional defaults
DockerInit If set to false, the Docker --init option is not passed to deployed containers; by default it is passed to all containers other than InterSystems IRIS containers. (The --init option is never passed to InterSystems IRIS containers.) Default: true. optional defaults
Overlay Determines the Docker overlay network type; normally "weave", but may be set to "host" for development, performance, or debug purposes, or when deploying on a preexisting cluster. Default: weave (host when deploying on a preexisting cluster). For more information see Use overlay networks in the Docker documentation and How the Weave Net Docker Network Plugins Work in the Weave documentation. optional defaults
DockerStorageDriver Determines the storage driver used by Docker (see Docker storage drivers in the Docker documentation). Values include overlay2 (the default) and btrfs. If set to overlay2, FileSystem (see below) must be set to xfs; if set to btrfs, FileSystem must be set to btrfs. optional defaults

FileSystem

Type of file system to use for persistent volumes on provisioned nodes. Valid values are xfs and btrfs. Default: xfs. If DockerStorageDriver (above) is set to overlay2, FileSystem must be set to xfs; if DockerStorageDriver is btrfs, FileSystem must be btrfs.

optional defaults recommended

OSVolumeSize

Size (in GB) of the OS volume for a node or nodes in the deployment. Default: 32. May be limited by or ignored in favor of settings specific to the applicable parameters specifying machine image or template, instance type, or OS volume type parameters (see Provider-Specific Parameters).

optional  

DataVolumeSize

WIJVolumeSize

Journal1VolumeSize

Journal2VolumeSize

Size (in GB) of the corresponding persistent storage volume to create for iris containers. For example, DataVolumeSize determines the size of the data volume. Default: 10, although DataVolumeSize must be at least 60 for Tencent deployments. May be limited by the applicable volume type parameter (see Provider-Specific Parameters). Each volume also has a corresponding device name parameter (for example, DataDeviceName; see Device Name Parameters) and mount point parameter (for example, DataMountPoint; see immediately below and Storage Volumes Mounted by ICM). optional  

DataMountPoint

WIJMountPoint

Journal1MountPoint

Journal2MountPoint

The location within iris containers at which the corresponding persistent volume is mounted. For example, DataMountPoint determines the location for the data volume. For more information, see Storage Volumes Mounted by ICM. Defaults: /irissys/{ data | wij | journal1j | journal2j }. Each volume also has a corresponding device name parameter (for example, DataDeviceName; see Device Name Parameters) and size parameter (for example, DataVolumeSize; see above).

optional  
Containerless If true, enables containerless mode, in which InterSystems IRIS is deployed from an installation kit rather than a container; see the appendix Containerless Deployment. Default: false. optional defaults
Role Role of the node or nodes to be provisioned by a given entry in the definitions file, for example DM or DATA; see ICM Node Types. required definitions
Count Number of nodes to provision from a given entry in the definitions file. Default: 1. required definitions
StartCount Numbering start for a particular node definition in the definitions file. For example, if the DS node definition includes "StartCount": "3", the first DS node provisioned is named Label-DS-Tag-0003. optional definitions
LoadBalancer If true in definitions of node type DATA, COMPUTE, AM, or WS, a predefined load balancer is automatically provisioned on providers AWS, GCP, Azure, and Tencent (see Predefined Load Balancer). If true in definitions of node type CN or VM, a generic load balancer is added if other parameters are included in the definition (see Generic Load Balancer). Default: false. optional definitions

AlternativeServers

Remote server selection algorithm for definitions of type WS (see Node Type: Web Server). Valid values are LoadBalancing and FailOver. Default: LoadBalancing.

optional definitions

ApplicationPath

Application path to create for definitions of type WS. Do not include a trailing slash.

optional definitions

IAMImage

InterSystems API Manager (IAM) image; no default.

optional definitions

PostgresImage

Postgres image (optional IAM component); default: postgres:11.6.

optional definitions

PrometheusImage

Prometheus image (System Alerting and Monitoring [SAM] component); default: prom/prometheus:v2.17.1.

optional definitions

AlertmanagerImage

Alertmanager image (SAM component); default: prom/alertmanager:v0.20.0.

optional definitions

GrafanaImage

Grafana image (SAM component); default: grafana/grafana:6.7.1.

optional definitions

NginxImage

Nginx image (SAM component); default: nginx:1.17.9-alpine.

optional definitions
UserCPF Configuration merge file to be used to customize the CPFs of InterSystems IRIS instances during deployment (see Deploying with Customized InterSystems IRIS Configurations). optional
SystemMode String to be shown in the masthead of the Management Portal of the InterSystems IRIS instances on one or more provisioned nodes. Certain values (LIVE, TEST, FAILOVER, DEVELOPMENT) trigger additional changes in appearance. Default: blank. This setting can also be specified by adding [Startup]/SystemMode to the configuration merge file (see previous entry). optional
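
For instance, a minimal configuration merge file referenced by UserCPF might contain nothing more than the following (the value shown is illustrative):

    [Startup]
    SystemMode=TEST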

Security-related Parameters

The parameters in the following table are used to provide access and identify required files and information so that ICM can communicate securely with the provisioned nodes and deployed containers. They are all required, in the defaults file only.

Parameter Definition
   
Provider-specific credentials and account parameters; to see detailed instructions for obtaining the files and values, click the provider link
  • Provider-Specific – AWS

    Credentials: Path to a file containing the public/private keypair for an AWS account.

  • Provider-Specific – GCP

    Credentials: Path to a JSON file containing the service account key for a GCP account.

    Project: GCP project ID.

  • Provider-Specific – Azure

    SubscriptionId: A unique alphanumeric string that identifies a Microsoft Azure subscription.

    TenantId: A unique alphanumeric string that identifies the Azure Active Directory directory in which an application was created.

    UseMSI: If true, authenticates using a Managed Service Identity in place of ClientId and ClientSecret; default is false.

    ClientId, ClientSecret: Credentials identifying and providing access to an Azure application (if UseMSI is false).

  • Provider-Specific – Tencent

    SecretID, SecretKey: Unique alphanumeric strings that identify and provide access to a Tencent Cloud account.

  • Provider-Specific – vSphere

    VSphereUser, VSpherePassword: Credentials for vSphere operations.

SSHUser Nonroot account with sudo access used by ICM for access to provisioned nodes. Root of SSHUser’s home directory can be specified using the Home field. Required value is provider-specific, as follows:
  • AWS — As per AMI (see AMI parameter in AWS Parameters); usually ubuntu for Ubuntu images

  • GCP — At user's discretion

  • Azure — At user's discretion

  • Tencent — As per image (see ImageId parameter in Tencent Parameters)

  • vSphere — As per VM template (see Template parameter in vSphere Parameters)

  • Preexisting — See SSH in the appendix “Deploying on a Preexisting Cluster”

SSHPassword Initial password for the user specified by SSHUser. Required for marketplace Docker images and deployments of type vSphere, Azure, and PreExisting. This password is used only during provisioning, at the conclusion of which password logins are disabled.
SSHOnly If true, ICM does not attempt SSH password logins during provisioning, for providers vSphere and PreExisting only. Because this prevents ICM from logging in using a password, it requires that you stage your public SSH key (as specified by the SSHPublicKey field, below) on each node. Default: false.
SSHPublicKey Path within the ICM container of the public key of the SSH public/private key pair; required for all deployments. For provider AWS, must be in SSH2 format, for example:

---- BEGIN SSH2 PUBLIC KEY ----
AAAAB3NzaC1yc2EAAAABJQAAAQEAoa0
---- END SSH2 PUBLIC KEY ----

For other providers, must be in OpenSSH format, for example:

ssh-rsa AAAAB3NzaC1yc2EAAAABJQAAAQEAoa0
SSHPrivateKey Path within the ICM container of the private key of the SSH public/private key pair; required for all deployments. Must be in RSA format, for example:

-----BEGIN RSA PRIVATE KEY-----
MIIEogIBAAKCAQEAoa0ex+JKzC2Nka1
-----END RSA PRIVATE KEY-----
TLSKeyDir Directory within the ICM container containing TLS keys used to establish secure connections to Docker, InterSystems Web Gateway, JDBC, and mirrored InterSystems IRIS databases, as follows:
  • ca.pem

  • cert.pem

  • key.pem

  • keycert.pem

  • server-cert.pem

  • server-key.pem

  • keystore.p12

  • truststore.jks

  • SSLConfig.properties

SSLConfig Path within the ICM container to a TLS configuration file used to establish secure JDBC connections. Default: If this parameter is not provided, ICM looks for a configuration file in /TLSKeyDir/SSLConfig.properties (see previous entry).
PrivateSubnet If true, ICM deploys on an existing private subnet, or creates and deploys on a new private subnet, for use with a bastion host; see Deploying on a Private Network.
WeavePassword Password used to encrypt traffic over Weave Net; enable encryption by setting to a value other than null in the defaults file. Default: null.
net_vpc_cidr CIDR of the existing private network to deploy on; see Deploy Within an Existing Private Network.
net_subnet_cidr CIDR of an ICM node’s subnet within an existing private network.
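
As an illustrative sketch (the user name and paths below are placeholders, not required values), these fields typically appear together in the defaults file, for example:

    "SSHUser": "ubuntu",
    "SSHPublicKey": "/Samples/ssh/mykey.pub",
    "SSHPrivateKey": "/Samples/ssh/mykey",
    "TLSKeyDir": "/Samples/tls/"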

Port and Protocol Parameters

Typically, the defaults for these parameters are sufficient. For information about two use cases in which you may need to specify some of these parameters, see Ports in the appendix “Using ICM with Custom and Third-Party Containers” and Ports in the appendix “Deploying on a Preexisting Cluster”.

Parameter Definition
ForwardPort Port to be forwarded by a given load balancer (both 'from' and 'to'). Defaults:
  • AM, DM, DATA, COMPUTE: SuperServerPort,WebServerPort (below)

  • WS: WebGatewayPort (below)

  • VM/CN: user-defined; must be included for a generic load balancer to be deployed

The value can be a comma-separated list of ports, as long as all use the same ForwardProtocol (below).

ForwardProtocol Protocol to be forwarded by a given load balancer. Value TCP is valid for all providers; additional protocols available on a per-provider basis.
  • DATA, COMPUTE, DM, AM: TCP

  • WS: TCP

  • VM/CN: user-defined; parameter must be included to deploy a generic load balancer

HealthCheckPort Port used to verify health of instances in the target pool. Defaults:
  • AM, DM, DATA, COMPUTE: WebServerPort (below)

  • WS: 80

  • VM/CN: user-defined; parameter must be included to deploy a generic load balancer

HealthCheckProtocol Protocol used to verify health of instances in the target pool. Defaults:
  • AM, DM, DATA, COMPUTE: HTTP

  • WS: TCP

  • VM/CN: user-defined; parameter must be included to deploy a generic load balancer

HealthCheckPath Path used to verify health of instances in the target pool. Defaults:
  • Nonmirrored DM/DATA, AM, COMPUTE: /csp/user/cache_status.cxw

  • Mirrored DM, DATA: /csp/user/mirror_status.cxw

  • WS: N/A (path not used for TCP health checks)

  • VM/CN: user-defined for HTTP health checks; parameter must be included to deploy a generic load balancer

ISCAgentPort * Port used by InterSystems IRIS ISC Agent. Default: 2188.
SuperServerPort Port used by InterSystems IRIS Superserver. Default: 1972.
WebServerPort Port used by InterSystems IRIS Web Server/Management Portal. Default: 52773. Also used by the InterSystems Web Gateway instance on a WS node deployed in nonroot containerless mode.

WebGatewayPort

Port used by InterSystems IRIS Web Gateway. Default: 80 (webgateway, webgateway-nginx), 52773 (webgateway-lockeddown).

LicenseServerPort *

Port used by InterSystems IRIS License Server. Default: 4002.

* If ICM is in container mode (Containerless is false or absent) and Overlay is set to weave (see General Parameters), this port is closed in the node’s firewall.
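
To show how the VM/CN entries above fit together, the following definitions file fragment sketches a generic load balancer on a VM node definition; the port and path values are illustrative, not defaults:

    {
        "Role": "VM",
        "Count": "2",
        "LoadBalancer": "true",
        "ForwardProtocol": "TCP",
        "ForwardPort": "443",
        "HealthCheckProtocol": "HTTP",
        "HealthCheckPort": "8080",
        "HealthCheckPath": "/health"
    }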

CPF Parameters

When using a configuration merge file specified by the UserCPF parameter to customize the CPF of one or more InterSystems IRIS instances during deployment, as described in Deploying with Customized InterSystems IRIS Configurations, you cannot include certain CPF settings, because ICM needs to read their values before it adds them to the CPF at a later stage. You should therefore customize these settings by specifying the following parameters (described in General Parameters and Port and Protocol Parameters) in your configuration files:

Parameter CPF Setting
WIJMountPoint [config]/wijdir
Journal1MountPoint [Journal]/CurrentDirectory
Journal2MountPoint [Journal]/AlternateDirectory
SuperServerPort [Startup]/DefaultPort
WebServerPort [Startup]/WebServerPort
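
For example, rather than placing [Startup]/DefaultPort or [Journal]/CurrentDirectory in the merge file, you would set the corresponding ICM fields in your configuration file; a sketch with illustrative values:

    "SuperServerPort": "51972",
    "Journal1MountPoint": "/irissys/journals/primary"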

Note:

The value of the ICM LicenseServerPort field is taken from the [LicenseServers] block of the CPF, bound to the name of the configured license server (see InterSystems IRIS Licensing for ICM).

Provider-Specific Parameters

The tables in this section list parameters used by ICM that are specific to the various cloud providers. Some of these parameters are used with more than one provider; for example, the InstanceType, ElasticIP, and VPCId parameters can be used in both AWS and Tencent deployments. Some provider-specific parameters have different names but the same purpose, for example AMI and InstanceType for AWS, Image and MachineType for GCP, and ImageId and InstanceType for Tencent; Azure instead uses four parameters (PublisherName, Offer, Sku, and Version) to describe the machine image and Size to describe the instance type.

Like the General Parameters table, the tables in this section indicate whether each parameter is required in every deployment or optional, and whether it must be included (when used) in either defaults.json or definitions.json, is recommended for one file or the other, or can be used in either. For examples of each type, see General Parameters.

Note:

For information about parameters used only for PreExisting deployments, see Definitions File for PreExisting in the appendix “Deploying on a Preexisting Cluster”.

Selecting Machine Images

Cloud providers operate data centers in various regions of the world, so one of the important things to customize for your deployment is the region in which your cluster will be deployed (see the Region parameter in General Parameters). Another choice is which virtual machine images to use for the host nodes in your cluster (parameters vary by provider). Although the sample configuration files define valid regions and machine images for all cloud providers, you will generally want to change the region to match your own location. Because machine images are often specific to a region, the two must be selected together.
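
As a sketch of that pairing for an AWS deployment, the region, zone, and machine image might appear together in the defaults file as follows (the AMI and instance type are the examples used in the AWS parameter table below; the region and zone values are illustrative):

    "Region": "us-east-1",
    "Zone": "us-east-1a",
    "AMI": "ami-a540a5e1",
    "InstanceType": "m4.large"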

Container images from InterSystems comply with the Open Container Initiative (OCI) specification, and are built using the Docker Enterprise Edition engine, which fully supports the OCI standard and allows the images to be certified and featured in the Docker Hub registry. InterSystems images are built and tested using the widely popular Ubuntu container operating system, and ICM therefore supports their deployment on any OCI-compliant runtime engine on Linux-based operating systems, both on premises and in public clouds.

Provider-Specific Parameter Tables

Provider-Specific Parameters – AWS
Parameter Definition Use is ... Config file
Credentials

Path to a file containing the public/private keypair for an AWS account. To download, after logging in to the AWS management console, open Managing Access Keys for IAM Users in the AWS documentation and follow the procedure for managing access keys in the AWS console.

required defaults
AMI

AMI (machine image) to use as platform and OS template for nodes to be provisioned; see Amazon Machine Images (AMI) in the AWS documentation. Example: ami-a540a5e1. To list public AMIs available, in the EC2 Console, select AMIs in the navigation pane and filter for Public AMIs.

required  
InstanceType Instance type to use as compute resources template for nodes to be provisioned on AWS and Tencent; see Amazon EC2 Instance Types in the AWS documentation. Example: m4.large. (Some instance types may not be compatible with some AMIs.) required
ElasticIP Enables the Elastic IP feature on AWS and Tencent to preserve IP address and domain name across host node restart (see Host Node Restart and Recovery). Default: false. optional defaults

VPCId

Existing Virtual Private Cloud (VPC) to be used in the deployment on AWS and Tencent, instead of allocating a new one; the specified VPC is not deallocated during unprovision. If not specified when PrivateSubnet (see Security-related Parameters) is true, a new VPC is allocated for the deployment and deallocated during unprovision. For more information, see Deploying Within an Existing Private Network.

Note:

Internal parameter net_subnet_cidr must be provided if the VPC is not created in the default address space 10.0.%d.0/24; for example, for a VPC in the range 172.17.0.0/24, you would need to specify net_subnet_cidr as 172.17.%d.0/24.

optional defaults

SubnetIds

When deploying on an existing private subnet on AWS or Tencent, comma-separated list of subnet IDs, one for each element specified by the Zone parameter (see General Parameters).

optional defaults
RouteTableId When deploying on an existing private subnet, the route table to use for access to the ICM host; if provided, ICM uses this instead of allocating its own (and does not deallocate during unprovision). No default. optional defaults
InternetGatewayId When deploying on an existing private subnet, the Internet gateway to use for access to the ICM host; if provided, ICM uses this instead of allocating its own (and does not deallocate during unprovision). No default. optional defaults
OSVolumeType Determines disk type of the OS volume for a node or nodes in the deployment, which in turn determines the maximum value for the OSVolumeSize parameter (see General Parameters), which sets the size of the OS volume. See Amazon EBS Volume Types in the AWS documentation. Tencent uses the same parameter name. Default: standard. optional

DataVolumeType

WIJVolumeType

Journal1VolumeType

Journal2VolumeType

Determines disk type of the corresponding persistent storage volume for iris containers (see Storage Volumes Mounted by ICM), which in turn determines the maximum size of the volume. For example, DataVolumeType determines the maximum value for the DataVolumeSize parameter (see General Parameters), which determines the size of the data volume. See Amazon EBS Volume Types in the AWS documentation. Tencent uses the same parameter name. Default: standard. optional

OSVolumeIOPS

Determines IOPS count for the OS volume for a node or nodes in the deployment; see I/O Characteristics and Monitoring in the AWS documentation. Default: 0. optional

PlacementGroups

A comma-separated list of placement groups to create (see Placement groups in the AWS documentation). If blank or omitted, no placement groups are created. Default: none.

optional  

PlacementStrategy

Strategy for placing instances in the groups specified by PlacementGroups. Valid values are cluster, partition, and spread. Default: cluster.

optional  

PlacementMap

Specifies the mapping between the values of PlacementGroups and the nodes within a given definition. Instances will be assigned in the order in which they occur in PlacementGroups (with wraparound). Default: 0,1,2,3,...,256.

optional  
PlacementPartitionCount The number of partitions to create in the placement group. Has no effect unless PlacementStrategy is set to partition. Default: 2. optional
PlacementSpreadLevel Places a group of instances on distinct hardware. Has no effect unless PlacementStrategy is set to spread. Valid values are rack and host. Default: none. optional

DataVolumeIOPS

WIJVolumeIOPS

Journal1VolumeIOPS

Journal2VolumeIOPS

Determines IOPS count for the corresponding persistent storage volume for iris containers (see Storage Volumes Mounted by ICM). For example, DataVolumeIOPS determines the IOPS count for the data volume. See I/O Characteristics and Monitoring in the AWS documentation. Must be nonzero when the corresponding volume type (see the immediately preceding entry) is io1. Default: 0.

optional  

LoadBalancerInternal

When set to True, creates a load balancer of type "internal"; otherwise the load balancer type is "external". Default: False.

optional definitions
Provider-Specific Parameters – GCP
Parameter Definition Use is ... Config file
Credentials

Path to a JSON file containing the service account key for a GCP account. To download, after logging in to the GCP console and selecting a project, open Creating and managing service account keys in the GCP documentation and follow the procedure for creating service account keys in the GCP console.

required defaults
Project GCP project ID; see Creating and Managing Projects in the GCP documentation. required defaults
Image Source machine image to use as platform and OS template for provisioned nodes; see Images in the GCP documentation. Example: ubuntu-os-cloud/ubuntu-1804-bionic-v20190911. required
MachineType Machine type to use as compute resources template for nodes to be provisioned; see Machine types in the GCP documentation. Example: n1-standard-1. required
RegionMap

When deploying across multiple regions (see Deploying Across Multiple Regions on GCP), specifies which nodes are deployed in which regions. Default: 0,1,2,...,255.

optional definitions
Network

Existing Virtual Private Cloud (VPC) to be used in the deployment, instead of allocating a new one; the specified VPC is not deallocated during unprovision. If not specified when PrivateSubnet (see Security-related Parameters) is true, a new VPC is allocated for the deployment and deallocated during unprovision. For more information, see Deploying Within an Existing Private Network.

optional defaults
Subnet Existing private subnet to be used in the deployment, instead of allocating a new one; not deallocated during unprovision. For multiregion deployments (see Deploying Across Multiple Regions on GCP), value must be a comma-separated list, one for each region specified. If not specified when PrivateSubnet (see Security-related Parameters) is true, a new subnet is allocated for the deployment and deallocated during unprovision. For more information, see Deploying Within an Existing Private Network. optional defaults
OSVolumeType Determines disk type for the OS volume for a node or nodes in the deployment; see Storage Options in the GCP documentation. Default: pd-standard. optional

DockerVolumeType

Determines disk type for the block storage device used for the Docker thin pool on a node or nodes in the deployment; see Storage Options in the GCP documentation. Default: pd-standard. optional

DataVolumeType

WIJVolumeType

Journal1VolumeType

Journal2VolumeType

Determines disk type for the corresponding persistent storage volume for iris containers (see Storage Volumes Mounted by ICM). For example, DataVolumeType determines the disk type for the data volume. See Storage Options in the GCP documentation. Default: pd-standard. optional
Provider-Specific Parameters – Azure
Parameter Definition Use is ... Config file
SubscriptionId A unique alphanumeric string that identifies a Microsoft Azure subscription; to display, on the Azure portal select Subscriptions or type “subscriptions” into the search box, and use the Subscription ID displayed for SubscriptionId. required defaults
TenantId A unique alphanumeric string that identifies the Azure Active Directory directory in which an application was created; to display, on the Azure portal select Azure Active Directory in the nav pane and then Properties on the nav pane for that page, and use the Directory ID displayed for TenantId. required defaults

UseMSI

If true, authenticates using a Managed Service Identity in place of ClientId and ClientSecret; see What is managed identities for Azure resources? in the Azure documentation. Requires that ICM be run from a machine in Azure. required defaults

ClientId

ClientSecret

Credentials identifying and providing access to an Azure application (if UseMSI is false); see the Azure documentation for creating them.

required defaults
Location Region in which to provision a node or nodes; see the Region parameter in General Parameters. required defaults
LocationMap

When deploying across multiple regions (see Deploying Across Multiple Regions on Azure), specifies which nodes are deployed in which regions. Default: 0,1,2,...,255.

optional definitions
PublisherName Entity providing a given Azure machine image to use as platform and OS template for provisioned nodes. Example: OpenLogic. required  
Offer Operating system of a given Azure machine image. Example: UbuntuServer. required  
Sku Major version of the operating system of a given Azure machine image. Example: 7.2. required  
Version Build version of a given Azure machine image. Example: 7.2.20170105. required  

CustomImage

Image to be used to create the OS disk, in place of the Azure machine image described by the PublisherName, Offer, Sku, and Version fields. Value is an Azure URI of the form:

/subscriptions/subscription/resourceGroups/resource_group/providers/Microsoft.Compute/images/image_name

optional  
Size Machine size to use as compute resources template for nodes to be provisioned; see Sizes for virtual machines in Azure in the Azure documentation. Example: Standard_DS1. required

ResourceGroupName

Existing resource group to be used in the deployment, instead of allocating a new one; the specified group is not deallocated during unprovision. If not specified when PrivateSubnet (see Security-related Parameters) is true, a new resource group is allocated for the deployment and deallocated during unprovision. For more information, see Deploying Within an Existing Private Network.

optional defaults

VirtualNetworkName

Existing virtual network to be used in the deployment, instead of allocating a new one; not deallocated during unprovision. For multiregion deployments (see Deploying Across Multiple Regions on Azure), value must be a comma-separated list, one for each region specified. If not specified when PrivateSubnet (see Security-related Parameters) is true, a new virtual network is allocated for the deployment and deallocated during unprovision. For more information, see Deploying Within an Existing Private Network.

Note:

The net_subnet_cidr parameter (see Security-related Parameters) must be provided if the network is not created in the default address space 10.0.%d.0/24.

optional defaults

SubnetName

Name of an existing subnet to be used in the deployment, instead of allocating a new one; not deallocated during unprovision. For multiregion deployments (see Deploying Across Multiple Regions on Azure), value must be a comma-separated list, one for each region specified. If not specified when PrivateSubnet (see Security-related Parameters) is true, a new subnet is allocated for the deployment and deallocated during unprovision.

Note:

When provisioning on a private network, unique SubnetName and net_subnet_cidr parameters must be provided for each entry in the definitions file (but ResourceGroupName and VirtualNetworkName remain in the defaults file). This includes the bastion host definition when deploying a bastion host (see Deploy on a Private Network Through a Bastion Host).

optional definitions

AccountTier

Storage account performance tier (see Azure storage account overview in the Azure documentation); either HDD (Standard) or SSD (Premium).

optional  

AccountReplicationType

Storage account replication type: locally-redundant storage (LRS), geo-redundant storage (GRS), zone-redundant storage (ZRS), or read access geo-redundant storage (RAGRS).

optional  
Provider-Specific Parameters – Tencent
Parameter Definition Use is ... Config file

SecretID

SecretKey

Unique alphanumeric strings that identify and provide access to a Tencent Cloud account. To download, open Signature in the Tencent Cloud documentation and follow the procedure in “Applying for Security Credentials”.

required defaults

ImageId

Machine image to use as platform and OS template for provisioned nodes; see Image Overview in the Tencent documentation. Example: img-pi0ii46r.

required (see below)  

OSName

If ImageId (above) is not provided, ICM searches for an image matching this field. Note that this field supports regexp. Default: ubuntu.

required (see above)  

InstanceFamily

Instance family from which to select instance type; if InstanceType (below) is not provided, ICM searches for an instance type matching InstanceFamily, CPUCoreCount, and MemorySize (below). Default: S3. required (see below)  

InstanceType

Instance type to use as compute resources template for nodes to be provisioned on AWS and Tencent; see Instance Types in the Tencent documentation. Example: S2.MEDIUM4.

required (see above)  
ElasticIP Enables the Elastic IP feature on AWS and Tencent to preserve IP address and domain name across host node restart (see Host Node Restart and Recovery). Default: false. optional defaults

VPCId

Existing Virtual Private Cloud (VPC) to be used in the deployment on AWS and Tencent, instead of allocating a new one; the specified VPC is not deallocated during unprovision. If not specified when PrivateSubnet (see Security-related Parameters) is true, a new VPC is allocated for the deployment and deallocated during unprovision. For more information, see Deploying Within an Existing Private Network.

Note:

Internal parameter net_subnet_cidr must be provided if the VPC is not created in the default address space 10.0.%d.0/24; for example, for a VPC in the range 172.17.0.0/24, you would need to specify net_subnet_cidr as 172.17.%d.0/24.

optional defaults

SubnetIds

When deploying on an existing private subnet on AWS or Tencent, comma-separated list of subnet IDs, one for each element specified by the Zone parameter (see General Parameters).

optional defaults

CPUCoreCount

CPU core to match when selecting instance type; if InstanceType (above) is not provided, ICM searches for an instance type matching InstanceFamily, CPUCoreCount, and MemorySize (above). Default: 2. optional  

MemorySize

Memory size to match when selecting instance type; if InstanceType (above) is not provided, ICM searches for an instance type matching InstanceFamily, CPUCoreCount, and MemorySize (above). Default: 4 GB. optional  

OSVolumeType

Determines disk type for the OS volume for a node or nodes in the deployment; see Data Types: DataDisk in the Tencent documentation. AWS uses the same parameter name. Default: CLOUD_BASIC. optional

DockerVolumeType

Determines disk type for the block storage device used for the Docker thin pool on a node or nodes in the deployment; see Data Types: DataDisk in the Tencent documentation. AWS uses the same parameter name. Default: CLOUD_BASIC. optional

DataVolumeType

WIJVolumeType

Journal1VolumeType

Journal2VolumeType

Determines disk type for the corresponding persistent storage volume for iris containers (see Storage Volumes Mounted by ICM). For example, DataVolumeType determines the disk type for the data volume. AWS uses the same parameter names. See Data Types: DataDisk in the Tencent documentation. Default: CLOUD_BASIC. optional
Provider-Specific Parameters – vSphere
Parameter Definition Use is ... Config file
Server Name of the vCenter server. Example: tbdvcenter.internal.acme.com. required defaults
Datacenter Name of the datacenter. required defaults
DatastoreCluster

Collection of datastores where virtual machine files will be stored; see Creating a Datastore Cluster in the VMware documentation. Example: DatastoreCluster1.

required defaults
DataStore If provided, specifies one datastore in the datastore cluster in which to store virtual machine files. Example: Datastore1 optional defaults
ComputeCluster Cluster of hosts used to manage compute resources, DRS, and HA. Example: ComputeCluster1 required defaults

VSphereUser

VSpherePassword

Credentials for vSphere operations; see About vSphere Authentication in the VMware documentation. required defaults
DNSServers List of DNS servers for the virtual network. Example: 172.16.96.1,172.17.15.53 required defaults
DNSSuffixes List of name resolution suffixes for the virtual network adapter. Example: internal.acme.com required defaults
Domain FQDN for a node or nodes to be provisioned. Example: internal.acme.com required defaults
NetworkInterface Label to assign to a network interface. Example: VM Network optional defaults

ResourcePool

Name of a vSphere resource pool; see Managing Resource Pools in the VMware documentation. Example: ResourcePool1.

optional defaults
Template Virtual machine master copy (machine image) to use as platform and OS template for nodes to be provisioned. Example: ubuntu1804lts required  
VCPU Number of CPUs in a node or nodes to be provisioned. Example: 2. optional  
Memory Amount of memory (in MB) in a node or nodes to be provisioned. Example: 4096. optional  

GuestID

Guest ID for the operating system type. See Enum - VirtualMachineGuestOsIdentifier on the VMware support website. Default: other3xLinux64Guest.

optional  

WaitForGuestNetTimeout

Time (in minutes) to wait for an available IP address on a virtual machine. Default: 5.

optional  

ShutdownWaitTimeout

Time (in minutes) to wait for graceful guest shutdown when making necessary updates to a virtual machine. Default: 3.

optional  

MigrateWaitTimeout

Time (in minutes) to wait for virtual machine migration to complete. Default: 10.

optional  

CloneTimeout

Time (in minutes) to wait for virtual machine cloning to complete. Default: 30.

optional  

CustomizeTimeout

Time (in minutes) that Terraform waits for customization to complete. Default: 10.

optional  

DiskPolicy

Disk provisioning policy for the deployment (see About Virtual Disk Provisioning Policies in the VMware documentation). Values are:

  • thin — Thin Provision

  • lazy — Thick Provision Lazy Zeroed

  • eagerZeroedThick — Thick Provision Eager Zeroed

Default: lazy.

optional  

SDRSEnabled

If specified, determines whether Storage DRS (see Enable and Disable Storage DRS in the VMware documentation) is enabled for a virtual machine; otherwise, use current datastore cluster settings. Default: Current datastore cluster settings.

optional  

SDRSAutomationLevel

If specified, determines Storage DRS automation level for a virtual machine; otherwise, use current datastore cluster settings. Values are automated or manual. Default: Current datastore cluster settings.

optional  

SDRSIntraVMAffinity

If provided, determines Intra-VM affinity setting for a virtual machine (see Override VMDK Affinity Rules in the VMware documentation); otherwise, use current datastore cluster settings. Values include:

  • true — All disks for this virtual machine will be kept on the same datastore.

  • false — Storage DRS may locate individual disks on different datastores if it helps satisfy cluster requirements.

Default: Current datastore cluster settings.

optional  

SCSIControllerCount

Number of SCSI controllers for a given host node; must be between 1 and 4. The OS volume is always placed on the first SCSI controller. vSphere may not be able to create more SCSI controllers than were present in the template specified by the Template field.

Default: 1

optional  

DockerVolumeSCSIController

SCSI controller on which to place the Docker volume. Must be between 1 and 4 and may not exceed SCSIControllerCount.

Default: 1

optional  

DataVolumeSCSIController

WIJVolumeSCSIController

Journal1VolumeSCSIController

Journal2VolumeSCSIController

SCSI controller on which to place the corresponding volume in iris containers; for example, DataVolumeSCSIController determines the controller for data volume. Must be between 1 and 4 and may not exceed SCSIControllerCount.

Default: 1

optional  
Note:

The requirements for the VMware vSphere template specified by the Template property are similar to those described in Host Node Requirements in the appendix “Deploying on a Preexisting Cluster” (for example, passwordless sudo access).

To address the needs of the many users who rely on VMware vSphere, this release of ICM supports vSphere. Depending on your particular vSphere configuration and underlying hardware platform, using ICM to provision virtual machines may entail additional extensions and adjustments not covered in this guide, especially for larger and more complex deployments, and may not be suitable for production use. Full support is expected in a later release.

Device Name Parameters

The parameters listed in the following table specify the device files under /dev that represent the persistent volumes created by ICM for use by InterSystems IRIS. For information about these persistent volumes and a table of provider- and OS-specific default values for these parameters, see Storage Volumes Mounted by ICM. For PreExisting deployments, see Storage Volumes in the “Deploying on a Preexisting Cluster” appendix.

Parameter Persistent Volume For
DataDeviceName Databases
WIJDeviceName WIJ directory
Journal1DeviceName Primary journal directory
Journal2DeviceName Alternate journal directory

Alphabetical List of User Parameters

The following table lists all of the parameters discussed in the preceding tables in this section in alphabetical order, with links to the table(s) containing their definition.

Parameter Table(s) for definition

AccountReplicationType

Provider-Specific – Azure

AccountTier

Provider-Specific – Azure

AlternativeServers

General

AMI

Provider-Specific – AWS

ApplicationPath

General

ClientId

Provider-Specific – Azure, Security

ClientSecret

Provider-Specific – Azure, Security

CloneTimeout

Provider-Specific – vSphere

ComputeCluster

Provider-Specific – vSphere

Count

General

CPUCoreCount

Provider-Specific – Tencent

Credentials

Provider-Specific – AWS, Provider-Specific – GCP, Security

CustomizeTimeout

Provider-Specific – vSphere

Datacenter

Provider-Specific – vSphere

DataDeviceName

Device Name

DataMountPoint

General

Datastore

Provider-Specific – vSphere

DatastoreCluster

Provider-Specific – vSphere

DataVolumeIOPS

Provider-Specific – AWS

DataVolumeSCSIController

Provider-Specific – vSphere

DataVolumeSize

General

DataVolumeType

Provider-Specific – AWS, Provider-Specific – GCP, Provider-Specific – Tencent

DiskPolicy

Provider-Specific – vSphere

DNSName

PreExisting

DNSServers

Provider-Specific – vSphere

DNSSuffixes

Provider-Specific – vSphere

DockerImage

General

DockerInit

General

DockerPassword

General

DockerRegistry

General

DockerStorageDriver

General

DockerURL

General

DockerUsername

General

DockerVersion

General

DockerVolumeIOPS

Provider-Specific – AWS

DockerVolumeSCSIController

Provider-Specific – vSphere

DockerVolumeSize

General

DockerVolumeType

Provider-Specific – AWS, Provider-Specific – GCP, Provider-Specific – Tencent

Domain

Provider-Specific – vSphere

ElasticIP

Provider-Specific – AWS, Provider-Specific – Tencent

FileSystem

General

GuestID

Provider-Specific – vSphere

Image

Provider-Specific – GCP

ImageId

Provider-Specific – Tencent

InstanceFamily

Provider-Specific – Tencent

InstanceType

Provider-Specific – AWS, Provider-Specific – Tencent

InternetGatewayId

Provider-Specific – AWS

IPAddress

PreExisting

ISCPassword

General

Journal1DeviceName

Device Name

Journal1MountPoint

General, CPF

Journal1VolumeIOPS

Provider-Specific – AWS

Journal1VolumeSCSIController

Provider-Specific – vSphere

Journal1VolumeSize

General

Journal1VolumeType

Provider-Specific – AWS, Provider-Specific – GCP, Provider-Specific – Tencent

Journal2DeviceName

Device Name

Journal2MountPoint

General, CPF

Journal2VolumeIOPS

Provider-Specific – AWS

Journal2VolumeSCSIController

Provider-Specific – vSphere

Journal2VolumeSize

General

Journal2VolumeType

Provider-Specific – AWS, Provider-Specific – GCP, Provider-Specific – Tencent

Label

General

LicenseDir

General

LicenseKey

General

LicenseServerPort

Port, CPF

LoadBalancer

General

LoadBalancerInternal

Provider-Specific – AWS

Location

Provider-Specific – Azure

LocationMap

Provider-Specific – Azure

MachineType

Provider-Specific – GCP

Memory

Provider-Specific – vSphere

MemorySize

Provider-Specific – Tencent

MigrateWaitTimeout

Provider-Specific – vSphere

Mirror

General

MirrorMap

General

Namespace

General

NetworkInterface

Provider-Specific – vSphere

OSName

Provider-Specific – Tencent

OSVolumeIOPS

Provider-Specific – AWS

OSVolumeSize

General

OSVolumeType

Provider-Specific – AWS, Provider-Specific – GCP, Provider-Specific – Tencent

Overlay

General

PlacementGroups

Provider-Specific – AWS

PlacementStrategy

Provider-Specific – AWS

PlacementMap

Provider-Specific – AWS
PlacementPartitionCount Provider-Specific – AWS
PlacementSpreadLevel Provider-Specific – AWS

Project

Provider-Specific – GCP

Provider

General

ProxyImage

General

Region

General

RegionMap Provider-Specific – GCP

ResourceGroupName

Provider-Specific – Azure

ResourcePool

Provider-Specific – vSphere

Role

General

RouteTableId

Provider-Specific – AWS

SCSIControllerCount

Provider-Specific – vSphere

SDRSAutomationLevel

Provider-Specific – vSphere

SDRSEnabled

Provider-Specific – vSphere

SDRSIntraVMAffinity

Provider-Specific – vSphere

SecretID

Provider-Specific – Tencent, Security

SecretKey

Provider-Specific – Tencent, Security

Server

Provider-Specific – vSphere

ShutdownWaitTimeout

Provider-Specific – vSphere

Size

Provider-Specific – Azure

SSHOnly

Security

SSHPassword

Security

SSHPrivateKey

Security

SSHPublicKey

Security

SSHUser

Security

SSLConfig

Security

StartCount

General

SubnetName

Provider-Specific – Azure

SubnetIds

Provider-Specific – AWS, Provider-Specific – Tencent

SubscriptionId

Security

SuperServerPort

Port, CPF

SystemMode

General

Tag

General

Template

Provider-Specific – vSphere

TenantId

Security

TLSKeyDir

Security

UseMSI

Provider-Specific – Azure, Security

UserCPF

General

VCPU

Provider-Specific – vSphere

VirtualNetworkName

Provider-Specific – Azure

VPCId

Provider-Specific – AWS, Provider-Specific – Tencent

VspherePassword

Provider-Specific – vSphere, Security

VsphereUser

Provider-Specific – vSphere, Security

WaitForGuestNetTimeout

Provider-Specific – vSphere

WeavePassword Security

WebGatewayPort

Port

WebServerPort

Port, CPF

WIJDeviceName

Device Name

WIJMountPoint

General, CPF

WIJVolumeIOPS

Provider-Specific – AWS

WIJVolumeSCSIController

Provider-Specific – vSphere

WIJVolumeSize

General

WIJVolumeType

Provider-Specific – AWS, Provider-Specific – GCP, Provider-Specific – Tencent

Zone

General

ZoneMap

General

ICM Node Types

This section describes the types of nodes that can be provisioned and deployed by ICM and their possible roles in the deployed InterSystems IRIS configuration. A provisioned node’s type is determined by the Role field.

The following table summarizes the detailed node type descriptions that follow.

ICM Node Types
Node Type Configuration Role(s) InterSystems Image to Deploy
DATA Sharded cluster data node iris (InterSystems IRIS instance)
COMPUTE Sharded cluster compute node iris (InterSystems IRIS instance)
DM Distributed cache cluster data server; stand-alone InterSystems IRIS instance; [namespace-level architecture: shard master data server] iris (InterSystems IRIS instance)
DS [namespace-level architecture: shard data server] iris (InterSystems IRIS instance)
QS [namespace-level architecture: shard query server] iris (InterSystems IRIS instance)
AM Distributed cache cluster application server iris (InterSystems IRIS instance)
AR Mirror arbiter arbiter (InterSystems IRIS mirror arbiter)
WS Web server webgateway (InterSystems Web Gateway)
SAM System Alerting and Monitoring (SAM) node sam (InterSystems System Alerting and Monitoring application)
LB Load balancer n/a
VM Virtual machine n/a
CN Custom and third-party container node n/a
BH Bastion host n/a
Important:

The InterSystems images shown in the preceding table are required on the corresponding node types, and cannot be deployed on nodes to which they do not correspond. If the wrong InterSystems image is specified for a node by the DockerImage field or the -image option of the icm run command — for example, if the iris image is specified for an AR (arbiter) node, or any InterSystems image for a CN node — deployment fails, with an appropriate message from ICM. For a detailed discussion of the deployment of InterSystems images, see The icm run Command in the “Using ICM” chapter.

Note:

The above table includes sharded cluster roles for the namespace-level sharding architecture, as documented in previous versions of this guide. These roles (DM, DS, QS) remain available for use in ICM but cannot be combined with DATA or COMPUTE nodes in the same deployment.

Role DATA: Sharded Cluster Data Node

As described in Overview of InterSystems IRIS Sharding and other sections of the “Horizontally Scaling for Data Volume with Sharding” chapter of the Scalability Guide, a typical sharded cluster consists only of data nodes, across which the sharded data is partitioned, and therefore requires only a DATA node definition in the definitions.json file. If DATA nodes are defined, the deployment must be a sharded cluster, and the only other node type that can be defined with them is COMPUTE.

DATA nodes can be mirrored if provisioned in a number matching the MirrorMap setting in their definition, as described in Rules for Mirroring. The DATA nodes in a cluster must be either all mirrored or all nonmirrored.

The only distinction between data nodes in a sharded cluster is that the first node configured (known as node 1) stores all of the nonsharded data, metadata, and code for the cluster in addition to its share of the sharded data. The difference in storage requirements, however, is typically very small. Because all data, metadata, and code is visible on any node in the cluster, application connections can be load balanced across all of the nodes to take greatest advantage of parallel query processing and partitioned caching. A load balancer may be assigned to DATA nodes; see Role LB: Load Balancer.
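For example, a basic four-node sharded cluster might be sketched in the definitions.json file as follows (an illustrative fragment only; the license key filename and instance type are assumptions to be adjusted for your provider and workload):

[
    {
        "Role": "DATA",
        "Count": "4",
        "LicenseKey": "sharding-iris.key",
        "InstanceType": "m4.xlarge"
    }
]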

Role COMPUTE: Sharded Cluster Compute Node

For advanced use cases in which extremely low query latencies are required, potentially at odds with a constant influx of data, compute nodes can be added to a sharded cluster to provide a transparent caching layer for servicing queries, separating the query and data ingestion workloads and improving the performance of both. (For more information see Deploy Compute Nodes for Workload Separation and Increased Query Throughput in the Scalability Guide.)

Adding compute nodes yields significant performance improvement only when there is at least one compute node per data node, so you should define at least as many COMPUTE nodes as DATA nodes; if the number of DATA nodes in the definitions file is greater than the number of COMPUTE nodes, ICM issues a warning. Configuring multiple compute nodes per data node can further improve the cluster’s query throughput, and the recommended best practice when doing so is to configure the same number of compute nodes for each data node, so ICM distributes the defined COMPUTE nodes as evenly as possible across the DATA nodes.

Because COMPUTE nodes support query execution only and do not store any data, their instance type and other settings can be tailored to suit those needs, for example by emphasizing memory and CPU and keeping storage to the bare minimum. Because they do not store data, COMPUTE nodes cannot be mirrored.

A load balancer may be assigned to COMPUTE nodes; see Role LB: Load Balancer.
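For example, COMPUTE nodes might be added to the DATA node definition sketched earlier with an instance type that favors memory and CPU over storage (an illustrative fragment; the instance type is an assumption):

{
    "Role": "COMPUTE",
    "Count": "4",
    "LicenseKey": "sharding-iris.key",
    "InstanceType": "r5.xlarge"
}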

Role DM: Distributed Cache Cluster Data Server, Standalone Instance, Shard Master Data Server

If multiple nodes of role AM and a DM node (nonmirrored or mirrored) are specified, they are deployed as an InterSystems IRIS distributed cache cluster, with the former serving as application servers and the latter as a data server.

A node of role DM (nonmirrored or mirrored) deployed by itself becomes a standalone InterSystems IRIS instance.

If a DM node (mirrored or nonmirrored), DS nodes (mirrored or nonmirrored), and (optionally) QS nodes are specified, they are deployed as a namespace-level sharded cluster.
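For example, a namespace-level sharded cluster with a mirrored shard master, two mirrored data shards, and two shard query servers might be sketched in the definitions.json file as follows (illustrative only; assumes the Mirror field is set to true in the defaults file):

[
    {
        "Role": "DM",
        "Count": "2"
    },
    {
        "Role": "DS",
        "Count": "4"
    },
    {
        "Role": "QS",
        "Count": "2"
    }
]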

Role DS: Shard Data Server

Under the namespace-level architecture, a data shard stores one horizontal partition of each sharded table loaded into a sharded cluster. A node hosting a data shard is called a shard data server. A cluster can have anywhere from two to over 200 shard data servers. Shard data servers can be mirrored by deploying an even number and specifying mirroring.

Role QS: Shard Query Server

Under the namespace-level architecture, shard query servers provide query access to the data shards to which they are assigned, minimizing interference between query and data ingestion workloads and increasing the bandwidth of a sharded cluster for high volume multiuser query workloads. If shard query servers are deployed, they are assigned round-robin to the deployed shard data servers. Shard query servers automatically redirect application connections when a mirrored shard data server fails over.

If QS nodes are defined but DS nodes are not, ICM responds with an error like the following:

Shard Query Server (role 'QS') requires at least one Shard Data Server (role 'DS')

Role AM: Distributed Cache Cluster Application Server

If multiple nodes of role AM and a DM node are specified, they are deployed as an InterSystems IRIS distributed cache cluster, with the former serving as application servers and the latter as a data server. When the data server is mirrored, application connection redirection following failover is automatic.

A load balancer may be assigned to AM nodes; see Role LB: Load Balancer.

Role AR: Mirror Arbiter

When DATA nodes (sharded cluster DATA nodes), a DM node (distributed cache cluster data server, stand-alone InterSystems IRIS instance, or namespace-level shard master data server), or DS nodes (namespace-level shard data servers) are mirrored, deployment of an arbiter node to facilitate automatic failover is highly recommended. One arbiter node is sufficient for all of the mirrors in a cluster; multiple arbiters are not supported and are ignored by ICM, as are arbiter nodes in a nonmirrored cluster.

The AR node does not contain an InterSystems IRIS instance; instead, it uses a different image to run an ISCAgent container. This arbiter image must be specified using the DockerImage field in the definitions file entry for the AR node; for more information, see The icm run Command.
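For example, the AR node definition might look like the following (the image tag shown is illustrative):

{
    "Role": "AR",
    "Count": "1",
    "DockerImage": "intersystems/arbiter:latest-em"
}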

For more information about the arbiter, see the “Mirroring” chapter of the High Availability Guide.

Role WS: Web Server

A deployment may contain any number of web servers. Each web server node contains an InterSystems Web Gateway installation along with an Apache web server. ICM populates the remote server list in the InterSystems Web Gateway as follows:

  • If DATA and COMPUTE nodes are deployed (node-level sharded cluster), all of the DATA and COMPUTE nodes, or all of the DATA nodes if no COMPUTE nodes are deployed.

  • If AM nodes are deployed (distributed cache cluster), all of the AM nodes.

  • Otherwise, the DM node (standalone instance or namespace-level sharded cluster).

    Note:

    If deploying a namespace-level sharded cluster with a web server tier, you can manually deploy a custom or third-party load balancer to distribute connections across the DS and (if they exist) QS nodes of the cluster (as recommended in Deploying the Namespace-level Architecture in the Scalability Guide) and manually edit the Web Gateway configurations to populate the remote server lists with the load balancer address (for more information, see Mirrored Configurations, Failover, and Load Balancing in the Web Gateway Guide).

For mirrored DATA and DM nodes, a mirror-aware connection is created, and application connection redirection following failover is automatic. Communication between the web server and the remote servers is configured to run in TLS mode.

A load balancer may be assigned to WS nodes; see Role LB: Load Balancer.

The WS node does not contain an InterSystems IRIS instance; instead, it uses a different image to run a Web Gateway container. As described in The icm run Command, the webgateway image can be specified by including the DockerImage field in the WS node definition in the definitions.json file, for example:

{
    "Role": "WS",
    "Count": "3",
    "DockerImage": "intersystems/webgateway:latest-em",
    "ApplicationPath": "/acme",
    "AlternativeServers": "LoadBalancing"
}

If the ApplicationPath field is provided, its value is used to create an application path for each instance of the Web Gateway. The default server for this application path is assigned round-robin across Web Gateway instances, with the remaining remote servers making up the alternative server pool. For example, if the preceding sample WS node definition were part of a distributed cache cluster with three AM nodes, the assignments would be like the following:

Instance           Default Server     Alternative Servers
Acme-WS-TEST-0001  Acme-AM-TEST-0001  Acme-AM-TEST-0002, Acme-AM-TEST-0003
Acme-WS-TEST-0002  Acme-AM-TEST-0002  Acme-AM-TEST-0001, Acme-AM-TEST-0003
Acme-WS-TEST-0003  Acme-AM-TEST-0003  Acme-AM-TEST-0001, Acme-AM-TEST-0002

The AlternativeServers field determines how the Web Gateway distributes requests to its target server pool. Valid values are LoadBalancing (the default) and FailOver. This field has no effect if the ApplicationPath field is not specified.

For information about using the InterSystems Web Gateway, see the Web Gateway Guide.

Role SAM: System Alerting and Monitoring Node

Defining a SAM node adds the System Alerting and Monitoring (SAM) cluster monitoring solution to a deployment. For information about adding SAM, see Monitoring in ICM; for complete information about SAM, see the System Alerting and Monitoring Guide.
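For example, a SAM node might be sketched in the definitions file as follows (the image name and tag are assumptions; substitute the sam image available in your registry):

{
    "Role": "SAM",
    "Count": "1",
    "DockerImage": "intersystems/sam:latest"
}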

Role LB: Load Balancer

ICM automatically provisions a predefined load balancer node when the provisioning platform is AWS, GCP, Azure, or Tencent, and the definition of nodes of type DATA, COMPUTE, DM, AM, or WS in the definitions file sets LoadBalancer to true. For a generic load balancer for VM or CN nodes, additional parameters must be provided.

Predefined Load Balancer

For nodes of role LB, ICM configures the ports and protocols to be forwarded as well as the corresponding health checks. Queries can be executed against the deployed load balancer the same way one would against a data node in a sharded cluster or a distributed cache cluster application server.

To add a load balancer to the definition of DATA, COMPUTE, DM, AM, or WS nodes, add the LoadBalancer field, for example:

{
    "Role": "AM",
    "Count": "2",
    "LoadBalancer": "true"
}

The following example illustrates the nodes that would be created and deployed given this definition:

$ icm inventory
Machine           IP Address    DNS Name                              Region   Zone
-------           ----------    --------                              ------   ----
ANDY-AM-TEST-0001 54.214.230.24 ec2-54-214-230-24.amazonaws.com       us-west1 c
ANDY-AM-TEST-0002 54.214.230.25 ec2-54-214-230-25.amazonaws.com       us-west1 c
ANDY-LB-TEST-0000 (virtual AM)  ANDY-AM-TEST-1546467861.amazonaws.com us-west1 c

Queries against this cluster can be executed against the load balancer the same way they would be against the AM nodes.

Predefined load balancers for mirrored DATA and DM nodes are mirror-aware and always direct traffic to the current primary.

The LoadBalancer field can be added to more than one definition in a deployment; for example a distributed cache cluster can contain AM nodes receiving load-balanced connections from a WS tier that receives load-balanced application connections.

Currently, a single automatically provisioned load balancer cannot serve multiple node types (for example, both DATA and COMPUTE nodes), so each requires its own load balancer. This does not preclude the user from manually deploying a custom or third-party load balancer to serve the desired roles. Another useful approach is to provision a load balancer for WS nodes, which can then distribute application connections across multiple node types as described in Role WS: Web Server.

Note:

When provisioning on AWS, you can specify a load balancer of type “internal” by setting LoadBalancerInternal to True in the definition in which LoadBalancer is set to True.

Generic Load Balancer

A load balancer can be added to VM (virtual machine) and CN (container) nodes by providing the following additional keys:

  • ForwardProtocol

  • ForwardPort

  • HealthCheckProtocol

  • HealthCheckPath

  • HealthCheckPort

The following is an example:

{
    "Role": "VM",
    "Count": "2",
    "LoadBalancer": "true",
    "ForwardProtocol": "tcp",
    "ForwardPort": "443",
    "HealthCheckProtocol": "http",
    "HealthCheckPath": "/csp/status.cxw",
    "HealthCheckPort": "8080"
}

ForwardPort can be a comma-separated list of ports to forward, with the condition that all of the forwarded ports share the same ForwardProtocol.
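For example, to forward both HTTP and HTTPS traffic over TCP, the relevant fields might look like this (an illustrative fragment):

"ForwardProtocol": "tcp",
"ForwardPort": "80,443"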

More information about these keys can be found in the Ports and Protocol Parameters table in “ICM Configuration Parameters”.

Load Balancer Notes

Load balancers on different cloud providers may behave differently; be sure to acquaint yourself with load balancer details on the platforms you provision on. In particular:

  • Some cloud providers create a DNS name for the load balancer that resolves to multiple IP addresses; for this reason, the value in the DNS Name column should be used. If a numeric IP address appears in the DNS Name column, it simply means that the given cloud provider assigns a unique IP address to their load balancer, but doesn't give it a DNS name.

  • Because the DNS name may not indicate to which resources a given load balancer applies, the IP Address column is used for this purpose.

  • Cloud providers may differ in how they respond when all members of the target pool fail their health check: on GCP, we observed the request being forwarded to a random target (whose underlying service may or may not be available), whereas on AWS we observed the load balancer reject the request.

For providers VMware vSphere and PreExisting, you may wish to deploy a custom or third-party load balancer.

Avoid provisioning a load balancer for mirrored DATA nodes on Tencent; load balancers provisioned on Tencent are not currently able to determine which side of a mirrored DATA node is primary, which could result in errors performing read/write operations through the load balancer. 

Role VM: Virtual Machine Node

A cluster may contain any number of virtual machine nodes. A virtual machine node provides a means of allocating host nodes which do not have a predefined role within an InterSystems IRIS cluster. Docker is not installed on these nodes, though users are free to deploy whatever custom or third-party software (including Docker) they wish.

The following commands are supported on the virtual machine node:

A load balancer may be assigned to VM nodes; see Role LB: Load Balancer.

Role CN: Container Node

A cluster may contain any number of container nodes. A container node is a general purpose node with Docker installed.

You can add InterSystems API Manager (IAM) to any deployment by defining a CN node and including the IAM and IAMImage fields; for more information, see Deploying InterSystems API Manager (IAM) in the “ICM Reference” chapter. You can also deploy any custom and third-party containers you wish on a CN node; iris (InterSystems IRIS) containers will not be deployed if specified. All ICM commands are supported for container nodes, but most will be filtered out unless they use the -container option to specify a container other than iris, or either the -role or -machine option to limit the command to CN nodes (see ICM Commands and Options).

A load balancer may be assigned to CN nodes; see Role LB: Load Balancer. CN nodes cannot be deployed in containerless mode.
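For example, a CN node intended to host IAM might be sketched as follows (a sketch only; the field values and image reference are assumptions based on the fields named above, and you would specify the IAM image you have access to):

{
    "Role": "CN",
    "Count": "1",
    "IAM": "true",
    "IAMImage": "intersystems/iam:latest"
}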

Role BH: Bastion Host

You may want to deploy a configuration that offers no public network access. If you have an existing private network, you can launch ICM on a node on that network and deploy within it. If you do not have such a network, you can have ICM configure a private subnet and deploy your configuration on it. Since ICM is not running within that private subnet, however, it needs a means of access to provision, deploy, and manage the configuration. The BH node serves this purpose.

A bastion host is a host node that belongs to both the private subnet configured by ICM and the public network, and can broker communication between them. To use one, you define a single BH node in your definitions file and set PrivateSubnet to true in your defaults file. For more information, see Deploying on a Private Network.
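For example, the following fragments sketch this arrangement (illustrative only). In the defaults file:

"PrivateSubnet": "true"

And in the definitions file:

{
    "Role": "BH",
    "Count": "1"
}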

ICM Cluster Topology and Mirroring

ICM validates the node definitions in the definitions file to ensure they meet certain requirements; there are additional rules for mirrored configurations. Bear in mind that this validation does not include preventing configurations that are not functionally optimal, for example a single AM node, a single WS node, five DATA nodes with just one COMPUTE node or vice-versa, and so on.

In both nonmirrored and mirrored configurations,

  • In a sharded cluster, COMPUTE nodes are assigned to DATA nodes (and QS nodes to DS nodes) in round-robin fashion.

  • If both AM and WS nodes are included, AM nodes are bound to the DM and WS nodes to the AM nodes; if just AM nodes or just WS nodes are included, they are all bound to the DM.

This section contains the following subsections:

Rules for Mirroring

The data nodes in a sharded cluster must be either all mirrored or all nonmirrored. This requirement is reflected in the following ICM topology validation rules.

When the Mirror field is set to false in the defaults file (the default), mirroring is never configured, and provisioning fails if more than one DM node is specified in the definitions file.

When the Mirror field is set to true, mirroring is configured where possible, and the mirror roles of the DATA, DS, or DM nodes (primary, backup, or DR async) are determined by the value of the MirrorMap field (see General Parameters) in the node definition, as follows:

  • If MirrorMap is not included in the relevant node definition, the nodes are configured as mirror failover pairs using the default MirrorMap value, primary,backup:

    • If an even number of DATA or DS nodes is defined, they are all configured as failover pairs; for example, specifying six DATA nodes deploys three data node mirrors containing failover pairs and no DR asyncs. If an odd number of DATA or DS nodes is defined, provisioning fails.

    • If two DM nodes are defined, they are configured as a failover pair; if any other number is defined, provisioning fails.

  • If MirrorMap is included in the node definition, the nodes are configured according to its value, as follows:

    • The number of DATA or DS nodes must be a multiple of the number of roles specified in the MirrorMap value or fewer. For example, suppose the MirrorMap value is primary,backup,async, as shown:

      "Role": "DATA",
      "Count": "",
      "MirrorMap": "primary,backup,async"
      
      

      In this case, DATA or DS nodes would be configured as follows:

      Value of Count Result
      3 or multiples of 3 One or more mirrors containing a failover pair and a DR async
      2 A single mirror containing a failover pair
      1, 4 or more but not multiples of 3 Provisioning fails
    • The number of DM nodes must be the same as the number of roles specified in the MirrorMap value or fewer; if a single DM node is specified, provisioning fails.

  • If more than one AR (arbiter) node is specified, provisioning fails. (While a best practice, use of an arbiter is optional, so an AR node need not be included in a mirrored configuration.)

All asyncs deployed by ICM are DR asyncs; reporting asyncs are not supported. Up to 14 asyncs can be included in a mirror. For information on mirror members and possible configurations, see Mirror Components in the High Availability Guide.

There is no relationship between the order in which DATA, DS, or DM nodes are provisioned or configured and their roles in a mirror. Following provisioning, you can determine which member of each pair is the intended primary failover member and which the backup using the icm inventory command. To see the mirror member status of each node in a deployed configuration when mirroring is enabled, use the icm ps command.
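For example, with the default MirrorMap value of primary,backup, the following definition (a sketch, assuming the Mirror field is set to true in the defaults file) would deploy three data node mirrors, each consisting of a failover pair:

{
    "Role": "DATA",
    "Count": "6"
}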

Nonmirrored Configuration Requirements

A nonmirrored cluster consists of the following:

  • One or more DATA (data nodes in a sharded cluster).

  • If DATA nodes are included, zero or more COMPUTE (compute nodes in a sharded cluster); best practices are at least as many COMPUTE nodes as DATA nodes and the same number of COMPUTE nodes for each DATA node.

  • If no DATA nodes are included:

    • Exactly one DM (distributed cache cluster data server, standalone InterSystems IRIS instance, shard master data server in namespace-level sharded cluster).

    • Zero or more AM (distributed cache cluster application server).

    • Zero or more DS (shard data servers in namespace-level sharded cluster).

    • Zero or more QS (shard query servers in namespace-level sharded cluster, cannot be deployed without corresponding DS nodes)

  • Zero or more WS (web servers).

  • Zero or more LB (load balancers).

  • Zero or more VM (virtual machine nodes).

  • Zero or more CN (container nodes).

  • Zero or one BH (bastion host).

  • Zero AR (arbiter node is for mirrored configurations only).

The relationships between some of these nodes types are pictured in the following examples.

ICM Nonmirrored Topologies

Mirrored Configuration Requirements

A mirrored cluster consists of:

  • If DATA nodes (data nodes in a node-level sharded cluster) are included:

    • A number of DATA matching the MirrorMap value, default or explicit, as described in Rules for Mirroring.

    • Zero or more COMPUTE (compute nodes in a node-level sharded cluster); best practices are at least one COMPUTE node per DATA node mirror, and the same number of COMPUTE nodes for each DATA node mirror.

  • If no DATA nodes are included:

    • Two DM as a mirrored shard master data server in a namespace-level sharded cluster, data server in a distributed cache cluster, or standalone InterSystems IRIS instance, or more than two if DR asyncs are specified by the MirrorMap field, as described in Rules for Mirroring.

    • If a namespace-level sharded cluster:

      • A number of DS (shard data servers) matching the MirrorMap value, default or explicit, as described in Rules for Mirroring.

      • Zero or more QS (shard query servers), as described in the foregoing for COMPUTE nodes.

    • Zero or more AM as application servers in a distributed cache cluster.

  • Zero or one AR (arbiter node is optional but recommended for mirrored configurations).

  • Zero or more WS (web servers).

  • Zero or more LB (load balancers).

  • Zero or more VM (virtual machine nodes).

  • Zero or more CN (container nodes).

  • Zero or one BH (bastion host).

The following fields are required for mirroring:

  • Mirroring is enabled by setting key Mirror in your defaults.json file to true.

    "Mirror": "true"
    
  • To include DR asyncs in DATA, DS, or DM mirrors, you must include the MirrorMap field in your definitions file to specify that those beyond the first two are DR async members. The value of MirrorMap must always begin with primary,backup, for example:

    "Role": "DM",
    "Count": "5”,
    "MirrorMap": "primary,backup,async,async,async",
    ...
    

    For information on the relationship between the MirrorMap value and the number of DATA, DS, or DM nodes defined, see Rules for Mirroring. MirrorMap can be used in conjunction with the Zone and ZoneMap fields to deploy async instances across zones; see Deploying Across Multiple Zones.

Automatic LB deployment (see Role LB: Load Balancer) is supported for providers AWS, GCP, Azure, and Tencent; when creating your own load balancer, the pool of IP addresses to include are those of DATA, COMPUTE, AM, or WS nodes, as called for by your configuration and application.

Note:

A mirrored DM node that is deployed without AM or WS nodes or a load balancer (LB node) must have some appropriate mechanism for redirecting application connections following failover; see Redirecting Application Connections Following Failover or Disaster Recovery in the “Mirroring” chapter of the High Availability Guide for more information.

The relationships between some of these nodes types are pictured in the following examples.

ICM Mirrored Topologies

Storage Volumes Mounted by ICM

On each node on which it deploys an InterSystems IRIS container, ICM formats, partitions, and mounts four volumes for persistent data storage by InterSystems IRIS using the durable %SYS feature (see Durable %SYS for Persistent Instance Data in Running InterSystems IRIS in Containers). The volumes are mounted as separate device files under /dev/ on the host node, with the filenames determined by the fields DataDeviceName (for the data volume), WIJDeviceName (for the volume containing the WIJ directory), and Journal1DeviceName and Journal2DeviceName (for the primary and alternate journal directories). The sizes of these volumes can be specified using the DataVolumeSize, WIJVolumeSize, Journal1VolumeSize, and Journal2VolumeSize parameters (see General Parameters).

For all providers other than type PreExisting, ICM attempts to assign reasonable defaults for the device names, as shown in the following table. The values are highly platform and OS-specific, however, and may need to be overridden in your defaults.json file. (For PreExisting deployments, see Storage Volumes in the “Deploying on a Preexisting Cluster” appendix.)

Parameter           Persistent Volume for        AWS   GCP  Azure  Tencent  vSphere
DataDeviceName      Databases                    xvdd  sdc  sdd    vdc      sdc
WIJDeviceName       WIJ directory                xvde  sdd  sde    vdd      sdd
Journal1DeviceName  Primary journal directory    xvdf  sde  sdf    vde      sde
Journal2DeviceName  Alternate journal directory  xvdg  sdf  sdg    vdf      sdf
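If the defaults for your platform do not match the device files actually presented by the operating system, you can override them in your defaults.json file, for example as follows (an illustrative fragment; the device name depends on your machine image and instance type):

"DataDeviceName": "nvme1n1"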

ICM mounts the devices within the InterSystems IRIS container according to the fields shown in the following table:

Parameter           Default
DataMountPoint      /irissys/data
WIJMountPoint       /irissys/wij
Journal1MountPoint  /irissys/journal1
Journal2MountPoint  /irissys/journal2

This arrangement allows you to easily follow the recommended best practice of supporting performance and recoverability by using separate file systems for storage by InterSystems IRIS, as described in Separating File Systems for Containerized InterSystems IRIS in Running InterSystems Products in Containers, simply by accepting the defaults.

If your machine image already has mount points ready for use, you can provide the special device name existing as the value of a device name parameter to direct ICM to skip volume allocation and use the directory you specify in the corresponding mountpoint parameter. For example, if the value of DataDeviceName is existing and the value of DataMountPoint is /mnt/data, ICM mounts /mnt/data as the data volume for InterSystems IRIS instances. If the directory specified in the mount point parameter does not exist, no data volume is mounted and an error is displayed during provisioning. Existing directories must be writable by user irisowner (UID 51773); see Security for InterSystems IRIS Containers in Running InterSystems Products in Containers. (Note that this is the default device name and behavior for provider PreExisting.)
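For example, the following defaults.json fragment directs ICM to skip volume allocation for the data and WIJ volumes and use existing directories instead (the mount point paths shown are illustrative):

"DataDeviceName": "existing",
"DataMountPoint": "/mnt/data",
"WIJDeviceName": "existing",
"WIJMountPoint": "/mnt/wij"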

InterSystems IRIS Licensing for ICM

InterSystems IRIS instances deployed in containers require licenses just as do noncontainerized instances. General InterSystems IRIS license elements and procedures are discussed in the “Licensing” chapter of the System Administration Guide.

License keys cannot be included in InterSystems IRIS container images, but must be added after the container is created and started. ICM addresses this as follows:

  • The needed license keys are staged in a directory within the ICM container, or on a mounted volume, that is specified by the LicenseDir field in the defaults.json file, for example /Samples/License.

  • One of the license keys in the staging directory is specified by the LicenseKey field in each definition of node types DATA, COMPUTE, DM, AM, DS, and QS in the definitions.json file, for example:

    "Role": "DM",
    "LicenseKey": "ubuntu-sharding-iris.key”,
    "InstanceType": "m4.xlarge",
    
  • ICM configures a license server on DATA node 1 or the DM node, which serves the specified licenses to the InterSystems IRIS nodes (including itself) during deployment.

Important:

All files staged in the directory indicated by the LicenseDir field and specified by the LicenseKey field must be valid InterSystems IRIS license key files with the .key suffix.

All nodes on which an InterSystems IRIS container is deployed require a sharding-enabled InterSystems IRIS license, regardless of the particular configuration involved.

No license is required for AR, LB, WS, VM, and CN nodes; if included in the definition for one of these, the LicenseKey field is ignored.

A license is optional for SAM nodes, but if one is provided, it will be used.

ICM Security

The security measures included in ICM are described in the following sections:

For information about the ICM fields used to specify the files needed for the security described here, see Security-Related Parameters.

Host Node Communication

A host node is the host machine on which containers are deployed. It may be virtual or physical, running in the cloud or on-premises.

ICM uses SSH to log in to host nodes and remotely execute commands on them, and SCP to copy files between the ICM container and a host node. To enable this secure communication, you must provide an SSH public/private key pair and specify these keys in the defaults.json file as SSHPublicKey and SSHPrivateKey. During the configuration phase, ICM disables password login on each host node, copies the private key to the node, and opens port 22, enabling clients with the corresponding public key to use SSH and SCP to connect to the node.
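For example, the key pair might be specified in the defaults.json file as follows (the paths shown are illustrative):

"SSHPublicKey": "/Samples/ssh/mykey.pub",
"SSHPrivateKey": "/Samples/ssh/mykey"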

Other ports opened on the host machine are covered in the sections that follow.

Docker

During provisioning, ICM downloads and installs a specific version of Docker from the official Docker web site using a GPG fingerprint. ICM then copies the TLS certificates you provide (located in the directory specified by the TLSKeyDir field in the defaults file) to the host machine, starts the Docker daemon with TLS enabled, and opens port 2376. At this point clients with the corresponding certificates can issue Docker commands to the host machine.

Weave Net

During provisioning, ICM launches Weave Net with options to encrypt traffic and require a password (provided by the user) from each machine joining the Weave network. To enable these options, set WeavePassword to any value other than null in the defaults.json file.

InterSystems IRIS

For a comprehensive overview of InterSystems IRIS security, see About InterSystems Security.

Security Level

ICM expects that the InterSystems IRIS image was installed with Normal security (as opposed to Minimal or Locked Down).

Predefined Account Password

To secure the InterSystems IRIS instance, the default password for predefined accounts must be changed by ICM. The first time ICM runs the InterSystems IRIS container, passwords on all enabled accounts with non-null roles are changed to a password provided by the user. If you don’t want the InterSystems IRIS password to appear in the definitions files, or in your command-line history using the -iscPassword option, you can omit both; ICM interactively prompts for the password, masking your typing. Because passwords are persisted, they are not changed when the InterSystems IRIS container is restarted or upgraded.

JDBC

ICM opens JDBC connections to InterSystems IRIS in TLS mode (as required by InterSystems IRIS), using the files located in the directory specified by the TLSKeyDir field in the defaults file.

Mirroring

ICM creates mirrors with TLS enabled (see the “Mirroring” chapter of the High Availability Guide), using the files located in the directory specified by the TLSKeyDir field in the defaults file. Failover members can join a mirror only if TLS is enabled.

InterSystems Web Gateway

ICM configures WS nodes to communicate with DM and AM nodes using TLS, using the files located in the directory specified by the TLSKeyDir field in the defaults file.

InterSystems ECP

ICM configures all InterSystems IRIS nodes to use TLS for ECP connections, which includes connections between distributed cache cluster nodes and sharded cluster nodes.

Centralized Security

InterSystems recommends the use of an LDAP server to implement centralized security across the nodes of a sharded cluster or other ICM deployment. For information about using LDAP with InterSystems IRIS, see the LDAP Guide.

Private Networks

ICM can deploy on an existing private network (not accessible from the Internet) if you configure the access it requires. ICM can also create a private network on which to deploy and configure its own access through a bastion host. For more information on using private networks, see Deploying on a Private Network.

Deploying with Customized InterSystems IRIS Configurations

Every InterSystems IRIS instance, including the one running within an InterSystems IRIS container, is installed with a file in the installation directory named iris.cpf, which contains most of its configuration settings. The instance reads this configuration parameter file, or CPF, at startup to obtain the values for these settings. When a setting is modified, the CPF is automatically updated. The use and contents of the CPF are described in detail in the Configuration Parameter File Reference.

However, you may want to deploy multiple instances from the same image but with different configuration settings. You can do this using the ISC_CPF_MERGE_FILE environment variable, which lets you specify a separate file containing one or more settings to be merged into the CPF of an instance. The configuration merge feature can be used to deploy multiple instances with differing CPFs from the same source. For more information on the configuration merge feature, see Automating Configuration of InterSystems IRIS with Configuration Merge.

You can take advantage of this feature when deploying InterSystems IRIS with ICM by using the UserCPF field, which specifies the configuration merge file to be applied to iris containers or containerless installations. For example, the [config] section of the CPF included in InterSystems IRIS images from InterSystems contains the default shared memory heap configuration (see Configuring Shared Memory Heap (gmheap) in the “Configuring InterSystems IRIS” chapter of the System Administration Guide), which looks like this:

[config]
LibPath=
MaxServerConn=1
MaxServers=2
...
gmheap=37568
...

To double the size of the shared memory heap for all InterSystems IRIS instances in your deployment, you could create a file called merge.cpf in the ICM container with the following contents:

[config]
gmheap=75136

You would then specify this merge file in your defaults.json using the UserCPF field, as follows:

"UserCPF": "/Samples/mergefiles/merge.cpf"

This would cause the CPF of each InterSystems IRIS instance deployed to be updated with the new shared memory heap size before the instance is started.

You can also use this field in your definitions file to apply merge files only to specific node types. For example, to double the size of the shared memory heap only on the DM node in a distributed cache cluster, while at the same time changing the ECP Time to wait for recovery setting on the AM nodes from the default 1200 seconds to 1800, you would create another file called merge2.cpf with the following contents:

[ECP]
ClientReconnectDuration=1800

You would then use a definitions.json file like the following:

[
    {
        "Role": "DM",
        "Count": "1",
        "UserCPF": "/Samples/mergefiles/merge.cpf"
    },
    {
        "Role": "AM",
        "Count": "3",
        "StartCount": "2",
        "UserCPF": "/Samples/mergefiles/merge2.cpf",
        "LoadBalancer": "true"
    }
]

This would double the shared memory heap size on the DM node but not on the AM nodes, and change the ECP setting on the AM nodes but not on the DM node.

Deploying Across Multiple Zones

Cloud providers generally allow their virtual networks to span multiple zones within a given region. For some deployments, you may want to take advantage of this to deploy different nodes in different zones. For example, if you deploy a mirrored sharded cluster in which each data node includes a failover pair and a DR async (see Mirrored Configuration Requirements), you can accomplish the cloud equivalent of putting physical DR asyncs in remote data centers by deploying the failover pair and the DR async in two different zones.

To specify multiple zones when deploying on AWS, GCP, Azure, and Tencent, populate the Zone field in the defaults file with a comma-separated list of zones. Here is an example for AWS:

{
    "Provider": "AWS",
    ...
    "Region": "us-west-1",
    "Zone": "us-west-1b,us-west-1c"
}

For GCP:


    "Provider": "GCP",
    ...
    "Region": "us-east1",
    "Zone": "us-east1-b,us-east1-c"
}

For Azure:

    "Provider": "Azure",
    ...
    "Region": "Central US",
    "Zone": "1,2"

For Tencent:

    "Provider": "Tencent",
    ...
    "Region": "na-siliconvalley",
    "Zone": "na-siliconvalley-1,na-siliconvalley-2"

The specified zones are assigned to nodes in round-robin fashion. For example, if you use the AWS example and provision four nonmirrored DATA nodes, the first and third will be provisioned in us-west-1b, the second and fourth in us-west-1c.

For mirrored configurations, however, round-robin distribution may lead to undesirable results; for example, the preceding Zone specifications would place the primary and backup members of mirrored DATA, DM, or DS nodes in different zones, which might not be appropriate for your application due to higher latency between the members (see Network Latency Considerations in the High Availability Guide). To choose which nodes go in which zones, you can add the ZoneMap field to a node definition in the definitions.json file to map a single node to a particular zone specified by the Zone field, or to specify a pattern of zone placement for multiple nodes. This is shown in the following specifications for a distributed cache cluster with a mirrored data server:

defaults.json

"Mirror": "True"
"Region": "us-west-1",
"Zone": "us-west-1a,us-west-1b,us-west-1c"

definitions.json

"Role": "DM",
"Count": "4”,
"MirrorMap": "primary,backup,async,async",
"ZoneMap": "0,0,1,2",
...
"Role": "AM",
"Count": "3”,
"MirrorMap": "primary,backup,async,async",
"ZoneMap": "0,1,2",
...
"Role": "AR",
...

This places the primary and backup mirror members in us-west-1a and one application server in each zone, while the asyncs are in different zones from the failover pair to maximize their availability if needed — the first in us-west-1b and the second in us-west-1c. The arbiter node does not need a ZoneMap field to be placed in us-west-1a with the failover pair; round-robin distribution will take care of that.

You could also use this approach with a mirrored sharded cluster in which each data node mirror contains a failover pair and a DR async, as follows:

defaults.json

"Mirror": "True"
"Region": "us-west-1",
"Zone": "us-west-1a,us-west-1b,us-west-1c"

definitions.json:

"Role": "DATA",
"Count": "12”,
"MirrorMap": "primary,backup,async",
"ZoneMap": "0,0,1",
...
"Role": "COMPUTE",
"Count": "8”,
"ZoneMap": "0",
...
"Role": "AR",
"ZoneMap": "2",
...

This would place the failover pair of each of the four data node mirrors and the eight compute nodes in us-west-1a, the DR async of each data node mirror in us-west-1b, and the arbiter in us-west-1c.

Deploying Across Multiple Regions or Providers

ICM can deploy across multiple cloud provider regions. For example, you may want to place DR async mirror members in a different region from their failover members. The procedures for multiregion deployment vary between providers, and are described in the following sections. The procedure described in the third section can also be used to deploy across multiple providers.

Important:

Although the failover members of a mirror can be deployed in different regions or on different platforms, this is not recommended due to the problems in mirror operation caused by the typically high network latency between regions and platforms. For more information on latency considerations for mirrors, see Network Latency Considerations in the “Mirroring” chapter of the High Availability Guide.

Note:

Deployment across regions and deployment on a private network, as described in Deploying on a Private Network, are not compatible in this release.

Deploying Across Multiple Regions on GCP

To deploy across multiple regions on GCP, specify the desired regions as a comma-separated list in the Region field in the defaults file, as shown:

{
    "Provider": "GCP",
    "Label": "Sample",
    "Tag": "multi",
    "Region": "us-east1,us-west1",
    "Zone": "us-east1-b,us-west1-a",
    ...
}

By default, nodes within each definition are assigned a region in round-robin fashion. For example, suppose you are deploying with the fields shown above in defaults.json and the following definitions.json:

[
  {
    "Role": "DATA",
    "Count": "2"
  },
  {
    "Role": "AR",
    "Count": "1",
    "DockerImage": "intersystems/arbiter:latest-em"
  }
]

In this case, the output of the icm inventory command might look like this:

$ icm inventory
Machine               IP Address    DNS Name              Region    Zone
-------               ----------    --------              ------    ----
Acme-AR-multi-0001    35.179.173.90 acmear1.google.com    us-east1     b
Acme-DATA-multi-0001- 35.237.131.39 acmedata1.google.com  us-east1     b
Acme-DATA-multi-0002+ 35.233.223.64 acmedata2.google.com  us-west1     a

For control over the regions that nodes are deployed in, you can use the RegionMap field to map the defined nodes to the specified regions. When RegionMap is included in a node definition, ZoneMap (described in the preceding section, Deploying Across Multiple Zones) must also be included to map the node or nodes to the desired zone or zones. For example, suppose you are deploying a mirror containing a failover pair and a DR async with an arbiter, and you want the failover pair in one region but in different zones, and the async and arbiter in a different region and also in different zones. The files you might use and the output you might see from the icm inventory and icm ps commands are shown in the following:

defaults.json

{
    "Provider": "GCP",
    "Label": "Sample",
    "Tag": "multi",
    "Region": "us-east1,us-west1",
    "Zone": "us-east1-a,us-east1-b,us-west1-a,us-west1-b",
    ...
}

definitions.json

[
  {
    "Role": "DATA",
    "Count": "3",
    "MirrorMap": "primary,backup,async",
    "RegionMap": "1,1,2",
    "ZoneMap": "1,2,3",
  },
  {
    "Role": "AR",
    "Count": "1",
    "DockerImage": "intersystems/arbiter:latest-em",
    "RegionMap": "2",
    "ZoneMap": "4"
  }
]

icm inventory

Machine               IP Address    DNS Name              Region    Zone
-------               ----------    --------              ------    ----
Acme-AR-multi-0001    35.179.173.90 acmear1.google.com    us-west1     b
Acme-DATA-multi-0001+ 35.237.131.39 acmedata1.google.com  us-east1     b
Acme-DATA-multi-0002- 35.233.223.64 acmedata2.google.com  us-east1     a
Acme-DATA-multi-0003  35.166.127.82 acmedata3.google.com  us-west1     a

icm ps

Machine              IP Address    Container Status Health  Mirror    Image
-------              ----------    --------- ------ ------  ------    -----
Acme-AR-multi-0001   35.179.173.90 arbiter   Up     healthy           intersystems/arbiter:latest-em
Acme-DATA-multi-0001 35.237.131.39 iris      Up     healthy PRIMARY   intersystems/iris:latest-em
Acme-DATA-multi-0002 35.233.223.64 iris      Up     healthy BACKUP    intersystems/iris:latest-em
Acme-DATA-multi-0003 35.166.127.82 iris      Up     healthy CONNECTED intersystems/iris:latest-em

To use the Network field (see Google Cloud Platform (GCP) Parameters) to specify an existing network to use in a multiregion deployment, you must also use the GCP Subnet field to specify a unique subnet for each region specified by the Region field. For example, for a deployment on regions us-west1 and us-east1, as illustrated here, you might include the following in your defaults file:

"Network": "acme-network",
"Subnet": "acme-subnet-data-east,acme-subnet-data-west"

Note:

A GCP multiregion deployment cannot include load balancers (LB nodes) because load balancers are restricted to a single region on GCP.

Deploying Across Multiple Regions on Azure

To deploy across multiple regions on Azure, specify the desired regions as a comma-separated list in the Location field in the defaults file, as shown:

{
    "Provider": "Azure",
    "Label": "Sample",
    "Tag": "multi",
    "Location": "East US,Central US",
    ...
}

By default, nodes within each definition are assigned a location in round-robin fashion. For example, suppose you are deploying with the fields shown above in defaults.json and the following definitions.json:

[
  {
    "Role": "DATA",
    "Count": "2"
  },
  {
    "Role": "AR",
    "Count": "1",
    "DockerImage": "intersystems/arbiter:latest-em"
  }
]

In this case, the output of the icm inventory command might look like this:

$ icm inventory
Machine               IP Address    DNS Name             Region    Zone
-------               ----------    --------             ------    ----
Acme-AR-multi-0001    35.179.173.90 acmear1.azure.com    East US    1
Acme-DATA-multi-0001- 35.237.131.39 acmedata1.azure.com  East US    1
Acme-DATA-multi-0002+ 35.233.223.64 acmedata2.azure.com  Central US 1

For control over the regions that nodes are deployed in, you can use the LocationMap field to map the defined nodes to the specified regions. When LocationMap is included in a node definition, ZoneMap (described in the preceding section, Deploying Across Multiple Zones) must also be included to map the node or nodes to the desired zone or zones. For example, suppose you are deploying a mirror containing a failover pair and a DR async with an arbiter, and you want the failover pair in one region but in different zones, and the async and arbiter in a different region and also in different zones. The files you might use and the output you might see from the icm inventory and icm ps commands are shown in the following. (Note that Azure zones are identified by the same integers in every region, so ZoneMap identifies only the desired zone within whatever region is specified by LocationMap.)

defaults.json

{
    "Provider": "Azure",
    "Label": "Sample",
    "Tag": "multi",
    "Location": "East US,Central US",
    "Zone": "1,2",
    ...
}

definitions.json

[
  {
    "Role": "DATA",
    "Count": "3",
    "MirrorMap": "primary,backup,async",
    "LocationMap": "1,1,2",
    "ZoneMap": "1,2,1"
  },
  {
    "Role": "AR",
    "Count": "1",
    "DockerImage": "intersystems/arbiter:latest-em",
    "RegionMap": "2",
    "ZoneMap": "2"
  }
]

icm inventory

Machine               IP Address    DNS Name             Region     Zone
-------               ----------    --------             ------     ----
Acme-AR-multi-0001    35.179.173.90 acmear1.azure.com    Central US 2
Acme-DATA-multi-0001+ 35.237.131.39 acmedata1.azure.com  East US    1
Acme-DATA-multi-0002- 35.233.223.64 acmedata2.azure.com  East US    2
Acme-DATA-multi-0003  35.166.127.82 acmedata3.azure.com  Central US 1

icm ps

Machine              IP Address    Container Status Health  Mirror    Image
-------              ----------    --------- ------ ------  ------    -----
Acme-AR-multi-0001   35.179.173.90 arbiter   Up     healthy           intersystems/arbiter:latest-em
Acme-DATA-multi-0001 35.237.131.39 iris      Up     healthy PRIMARY   intersystems/iris:latest-em
Acme-DATA-multi-0002 35.233.223.64 iris      Up     healthy BACKUP    intersystems/iris:latest-em
Acme-DATA-multi-0003 35.166.127.82 iris      Up     healthy CONNECTED intersystems/iris:latest-em

If you want to use an existing virtual network in a multiregion Azure deployment, you must include the ResourceGroupName and VirtualNetworkName fields (see Microsoft Azure (Azure) Parameters) in the defaults file to specify a network for each region specified in the Location field, for example:

{
    "Provider": "Azure",
    "Label": "Sample",
    "Tag": "multi",
    "Location": "East US,Central US",
    "Zone": "1,2",
    "ResourceGroupName": "sample-resource-group",
    "VirtualNetworkName": "sample-vnet-east,sample-vnet-central"
    ...
}

The specified networks must have nonoverlapping address spaces. In the accompanying definitions file, each definition must include the SubnetName field specifying a unique subnet for each region specified by the Location field. For example, for the mirrored deployment illustrated in this section, if the defaults file included the ResourceGroupName and VirtualNetworkName fields, the definitions file might look like the following. Because the AR definition deploys in the Central US region only, just one subnet is required in that definition.

[
  {
    "Role": "DATA",
    "Count": "3",
    "MirrorMap": "primary,backup,async",
    "RegionMap": "1,1,2",
    "ZoneMap": "1,2,1",
    "SubnetName": "acme-subnet-data-east,acme-subnet-data-central"
  },
  {
    "Role": "AR",
    "Count": "1",
    "DockerImage": "intersystems/arbiter:latest-em",
    "RegionMap": "2",
    "ZoneMap": "2",
    "SubnetName": "acme-subnet-arbiter-central"
  }
]
Note:

An Azure multiregion deployment cannot include load balancers (LB nodes) because load balancers are restricted to a single region on Azure.

Deploying Across Multiple Regions on AWS and Tencent

To deploy across multiple regions on AWS and Tencent, ICM first provisions the needed infrastructure in the separate regions, then merges that infrastructure and deploys services on it as if it were preexisting infrastructure.

This procedure can also be used to deploy across multiple providers. In this discussion, “region” is used to indicate “region or provider”, with differences between multiprovider and AWS/Tencent multiregion noted as needed.

The procedure for creating merged multiregion deployments involves the following steps:

  1. Provision the infrastructure in each region in separate ICM sessions.

  2. Merge the multiregion infrastructure using the icm merge command.

  3. Review the merged definitions.json file to reorder and update as needed.

  4. Reprovision the merged infrastructure using the icm provision command.

  5. Deploy services on the merged infrastructure as a Preexisting deployment using the icm run command.

  6. When unprovisioning the infrastructure, issue the icm unprovision command separately in the original session directories.

Provision the Infrastructure

The separate sessions for provisioning infrastructure in each region (specified by the Region field) should be conducted in separate working directories within the same ICM container. For example, you could begin by copying the provided /Samples/AWS directory (see Define the Deployment in the “Using ICM” chapter) to /Samples/AWS/us-east-1 and /Samples/AWS/us-west-1. Specify the desired region, node definitions, and features to match the eventual multiregion deployment in the defaults and definitions files for each. For example, if you want to deploy a mirror failover pair in one region and a DR async member of the mirror in another, include the appropriate region and zones and "Mirror": "true" in the defaults files, and define two DMs (for the failover pair) in one region in its definitions file, a third DM (for the async) in the other, and a single AR (arbiter) node in one or the other. Each defaults file in a multiregion deployment should have a unique Label and/or Tag to prevent resource conflicts; this is not necessary for multiprovider deployments. This example is shown in the following.

Note:

If a given definition doesn't satisfy topology requirements for a single-region deployment, for example a single DM node defined when Mirror is set to true, disable topology validation by including "SkipTopologyValidation": "true" in the defaults file, as shown in the /Samples/AWS/us-west-1/defaults.json.

/Samples/AWS/us-east-1

defaults.json

{
    "Provider": "AWS",
    "Label": "Sample",
    "Tag": "east1",
    "Region": "us-east-1",
    "Zone": "us-east1-a,us-east1-b",
    "Mirror": "true",
    ...
}

definitions.json

[
  {
    "Role": "DM",
    "Count": "2",
    "ZoneMap": "1,2"
  }
]

/Samples/AWS/us-west-1/

defaults.json

{
    "Provider": "AWS",
    "Label": "Sample",
    "Tag": "west1",
    "Region": "us-west-1",
    "Zone": "us-west1-a,us-west1-b",
    "Mirror": "true",
    "SkipTopologyValidation": "true"},
    ...
}

definitions.json

[
  {
    "Role": "DM",
    "Count": "1",
    "ZoneMap": "1"
  },
  {
    "Role": "AR",
    "Count": "1",
    "ZoneMap": "2"
  }
]

Use the icm provision command in each working directory to provision the infrastructure in each region. The output of the icm inventory command, executed in each directory, shows you the infrastructure you are working with, for example:

/Samples/AWS/us-east-1

$ icm inventory
Machine             IP Address    DNS Name                        Region    Zone
-------             ----------    --------                        ------    ----
Acme-DM-east1-0001+ 54.214.230.24 ec2-54-214-230-24.amazonaws.com us-east-1 a
Acme-DM-east1-0002- 54.129.103.67 ec2-54-129-103-67.amazonaws.com us-east-1 b

/Samples/AWS/us-west-1

$ icm inventory
Machine             IP Address    DNS Name                        Region    Zone
-------             ----------    --------                        ------    ----
Acme-AR-west1-0001  54.181.212.79 ec2-54-181-212-79.amazonaws.com us-west-1 b
Acme-DM-west1-0002  54.253.103.21 ec2-54-253-103-21.amazonaws.com us-west-1 a

Merge the Provisioned Infrastructure

The icm merge command scans the configuration files in the current working directory and those in the additional directory or directories specified to create merged configuration files that can be used for a Preexisting deployment in a specified new directory. For example, to merge the definitions and defaults files in /Samples/AWS/us-east-1 and /Samples/AWS/us-west-1 into a new set in /Samples/AWS/merge, you would issue the following commands:

$ cd /Samples/AWS/us-east-1
$ mkdir ../merge
$ icm merge -options ../us-west-1 -localPath /Samples/AWS/merge

In the icm merge command, -options specifies a comma-separated list of the provisioning directories to be merged with the local one, and -localPath specifies the destination directory for the merged configuration files. (For more information on the -options option, which lets you include Docker arguments on the ICM command line, see Using ICM with Custom and Third-Party Containers.)

Review the Merged Definitions File

When you examine the new configuration files, you will see that Provider has been changed to PreExisting in the merged defaults file. (The previous Provider field and others have been moved into the definitions file; they are displayed by the icm inventory command, but otherwise have no effect.) The Label and/or Tag can be modified if desired.

The definitions in the merged definitions file have been converted for use with provider PreExisting. As described in Definitions File for PreExisting in the appendix “Deploying on a Preexisting Cluster”, the definitions.json file for a Preexisting deployment contains exactly one entry per node (rather than one entry per role with a Count field to specify the number of nodes of that role). Each node is identified by its IP address or fully-qualified domain name. Either the IPAddress or DNSName field must be included in each definition, as well as the SSHUser field. (The latter specifies a nonroot user with passwordless sudo access, as described in SSH in “Deploying on a Preexisting Cluster”.) In the merged file, the definitions have been grouped by region, or by provider in multiprovider deployments; they should be reordered to reflect desired placement of mirror members, if necessary, and a suitable mirror map defined (see Mirrored Configuration Requirements and Deploying Across Multiple Zones). After review, the definitions file for our example would look like this:

[
    {
        "Role":"DM",
        "IPAddress":"54.214.230.24",
        "LicenseKey": "ubuntu-sharding-iris.key",
        "SSHUser": "icmuser",
        "MirrorMap": "primary,backup,async"
    },
    {
        "Role":"DM",
        "IPAddress":"54.129.103.67",
        "LicenseKey": "ubuntu-sharding-iris.key",
        "SSHUser": "icmuser",
        "MirrorMap": "primary,backup,async"
    },
    {
        "Role":"DM",
        "IPAddress":"54.253.103.21",
        "LicenseKey": "ubuntu-sharding-iris.key",
        "SSHUser": "icmuser",
        "MirrorMap": "primary,backup,async"
    },
    {
        "Role":"AR",
        "IPAddress":"54.181.212.79",
        "SSHUser": "icmuser",
        "StartCount": "4"
    }
]

Reprovision the Merged Infrastructure

Reprovision the merged infrastructure by issuing the icm provision command in the new directory (/Samples/AWS/merge in the example). The output of the icm inventory command shows the merged infrastructure in one list:

$ icm inventory
Machine             IP Address    DNS Name                        Region    Zone
-------             ----------    --------                        ------    ----
Acme-DM-east1-0001+ 54.214.230.24 ec2-54-214-230-24.amazonaws.com us-east-1 a
Acme-DM-east1-0002- 54.129.103.67 ec2-54-129-103-67.amazonaws.com us-east-1 b
Acme-AR-west1-0001  54.181.212.79 ec2-54-181-212-79.amazonaws.com us-west-1 b
Acme-DM-west1-0002  54.253.103.21 ec2-54-253-103-21.amazonaws.com us-west-1 a

Deploy Services on the Merged Infrastructure

Use the icm run command to deploy services on your merged infrastructure, as you would for any deployment, for example:

$ icm run
...
-> Management Portal available at: http://112.97.196.104.google.com:52773/csp/sys/UtilHome.csp
$ icm ps
Machine            IP Address    Container Status Health  Mirror    Image
-------            ----------    --------- ------ ------  ------    -----
Acme-AR-multi-0001 35.179.173.90 arbiter   Up     healthy           intersystems/arbiter:latest-em
Acme-DM-multi-0001 35.237.131.39 iris      Up     healthy PRIMARY   intersystems/iris:latest-em
Acme-DM-multi-0002 35.233.223.64 iris      Up     healthy BACKUP    intersystems/iris:latest-em
Acme-DM-multi-0003 35.166.127.82 iris      Up     healthy CONNECTED intersystems/iris:latest-em

Unprovision the Merged Infrastructure

When the time comes to unprovision the multiregion deployment, return to the original working directories to issue the icm unprovision command, and then delete the merged working directory. In our example, you would do the following:

$ cd /Samples/AWS/us-east-1
$ icm unprovision -force -cleanUp
...
...completed destroy of Acme-east1
$ cd /Samples/AWS/us-west-1
$ icm unprovision -force -cleanUp
...
...completed destroy of Acme-west1
$ rm -rf /Samples/AWS/merge

Deploying on a Private Network

ICM configures the firewall on each host node to expose only the ports and protocols required for its intended role. For example, the ISCAgent port is exposed only if mirroring is enabled and the role is one of AR, DATA, DM, or DS.

However, you may not want your configuration accessible from the public Internet at all. When this is the case, you can use ICM to deploy a configuration on a private network, so that it offers no direct public access. If ICM itself is deployed on that network, it can provision and deploy in the normal manner; if it is not, you must provision a node, called a bastion host, that gives ICM access to the private network from outside it. Given these factors, there are three approaches to using a private network:

  • Install and run ICM within an existing private network, which you describe to ICM using several fields, some of which vary by provider.

  • Have ICM provision a bastion host to give it access to the private network, and provision and deploy the configuration on either:

    • A private network created by ICM.

    • An existing private network, which you describe using the appropriate fields.

Deploy Within an Existing Private Network

If you deploy ICM on an existing private network and want to provision and deploy on that network, as shown in the following illustration, you need to add fields to the defaults and definitions files for the configuration you want to deploy.

ICM Deployed within Private Subnet

To deploy on an existing private network, follow these steps:

  1. Obtain access to a node that resides within the private network. This may require use of a VPN or intermediate host.

  2. Install Docker and ICM on the node as described in Launch ICM in the “Using ICM” chapter.

  3. Add the following fields to the defaults.json file:

    "PrivateSubnet": "true",
    "net_vpc_cidr": "10.0.0.0/16",
    "net_subnet_cidr": "10.0.2.0/24"
    

    The net_vpc_cidr and net_subnet_cidr fields (shown with sample values) specify the CIDRs of the private network and the node’s subnet within that network, respectively.

  4. Add the appropriate common and provider-specific fields to the defaults.json file, as follows:

    Provider         Key                  Description
    --------         ---                  -----------
    all              PrivateSubnet        Must be set to true
                     net_vpc_cidr         CIDR of the private network
                     net_subnet_cidr      CIDR of the ICM node’s subnet within the private network (see Note)
    GCP              Network              Google VPC
                     Subnet               Google subnetwork
    Azure            ResourceGroupName    AzureRM resource group
                     VirtualNetworkName   AzureRM virtual network
                     SubnetName           AzureRM subnet (see Note)
    AWS (see Note)   VPCId                AWS VPC ID
                     SubnetIds            Comma-separated list of AWS subnet IDs, one for each element specified by the Zone field
    Tencent          VPCId                Tencent VPC ID
                     SubnetIds            Comma-separated list of Tencent subnet IDs, one for each element specified by the Zone field

    Note:

    On Azure, ICM assigns a security group to the subnet specified by SubnetName, which could affect the behavior of unrelated machines on the subnet. For this reason, a dedicated subnet (as specified by a unique SubnetName and corresponding net_subnet_cidr) must be provided for every entry in the definitions file (but ResourceGroupName and VirtualNetworkName remain in the defaults file). This includes the BH definition when deploying a bastion host, as described in the following section.

    To deploy InterSystems IRIS within an existing private VPC on AWS, you must create a node within that VPC on which you can deploy and use ICM. If you want to reach this ICM host from outside the VPC, you can specify a route table and Internet gateway for ICM to use instead of creating its own. To do this, add the RouteTableId and InternetGatewayId fields to your defaults.json file, for example:

    "RouteTableID": "rtb-00bef388a03747469",
    "InternetGatewayId": "igw-027ad2d2b769344a3"
    

    When provisioning on GCP, the net_subnet_cidr field is descriptive, not prescriptive; it should be an address space that includes the node’s subnet, as well as the subnets of any other nodes within the network that should have access to the deployed configuration.

  5. Use icm provision and icm run to provision and deploy your configuration.

Bear the following in mind when deploying on a private network.

  • Viewing web pages on any node within the private network, for example the Management Portal, requires a browser that also resides within the private network, or for which a proxy or VPN has been configured.

  • Any DNS name shown in the output of ICM commands is just a copy of the local IP address.

  • Private network deployment across regions or providers is currently not supported.

Deploy on a Private Network Through a Bastion Host

If you set the PrivateSubnet field to true in the defaults file but don't include the fields required to use an existing network, ICM creates a private network for you. You cannot complete the provisioning phase in this situation, however, because ICM is unable to configure or otherwise interact with the machines it just allocated. To enable its interaction with nodes on the private network it creates, ICM can optionally create a bastion host, a host node that belongs to both the private subnet and the public network and can broker communication between them.

ICM Deployed Outside a Private Network with a Bastion Host

To create a private network and a bastion host providing ICM with access to that network, add a definition for a single node of type BH to the definitions.json file, for example:

   {
       "Role": "DATA",
       "Count": "3"
   },
   {
       "Role": "BH",
       "Count": "1",
       "StartCount: 4"
   }

To deploy and use a bastion host with an existing private network, add a BH definition to the definitions file, as above, and include the fields necessary to specify the network in the defaults file (as described in the previous section). ICM automatically sets the PrivateSubnet field to "true" when a BH node definition is included in definitions.json.
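
For example, a defaults file for deploying a bastion host onto an existing private network on AWS might combine the relevant fields roughly as follows. This is a minimal sketch only: the VPC ID, subnet IDs, and CIDR values are placeholders to be replaced with your own, and PrivateSubnet is shown explicitly even though ICM sets it automatically when a BH definition is present.

{
    "Provider": "AWS",
    "Label": "Acme",
    "Tag": "TEST",
    "PrivateSubnet": "true",
    "net_vpc_cidr": "10.0.0.0/16",
    "net_subnet_cidr": "10.0.2.0/24",
    "VPCId": "vpc-0123456789abcdef0",
    "SubnetIds": "subnet-0aaaaaaaaaaaaaaaa,subnet-0bbbbbbbbbbbbbbbb",
    ...
}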

The bastion host can be accessed using SSH, allowing users to tunnel SSH commands to the private network. Using this technique, ICM is able to allocate and configure compute instances within the private network from outside, allowing provisioning to succeed, for example:

$ icm inventory
Machine             IP Address     DNS Name                     Region   Zone
-------             ----------     --------                     ------   ----
Acme-BH-TEST-0004   35.237.125.218 218.125.237.35.bc.google.com us-east1 b
Acme-DATA-TEST-0001 10.0.0.2       10.0.0.2                     us-east1 b
Acme-DATA-TEST-0002 10.0.0.3       10.0.0.3                     us-east1 b
Acme-DATA-TEST-0003 10.0.0.4       10.0.0.4                     us-east1 b

Once the configuration is deployed, it is possible to run the ssh command against any node, for example:

# icm ssh -role DATA -interactive
ubuntu@ip-10.0.0.2:~$

If you examine the command being run, however, you can see that it is routed through the bastion host:

$ icm ssh -role DATA -interactive -verbose
ssh -A -t -i /Samples/ssh/insecure -p 3022 ubuntu@35.237.125.218
ubuntu@ip-10.0.0.2:~$

On the other hand, for other commands to succeed, ICM needs access to ports and protocols beyond SSH. To provide this access, ICM configures tunnels between the bastion host and the nodes within the cluster for Docker, JDBC, and HTTP. This allows commands such as icm run, icm exec, and icm sql to succeed.
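
For example, once these tunnels are in place, a Docker-based command such as the following (a hedged illustration reusing the icm exec syntax shown elsewhere in this document, against one of the private DATA nodes from the inventory above) works just as it would in a public deployment:

$ icm exec -command "iris list" -machine Acme-DATA-TEST-0001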

Bear the following in mind when deploying a bastion host:

  • The address of the configuration’s Management Portal is that of the bastion host.

  • For security reasons, no private keys are stored on the bastion host.

  • Any DNS name shown in the output of ICM commands is just a copy of the local IP address.

  • Provisioning of load balancers in a deployment that includes a bastion host is not supported.

  • Use of a bastion host with multiregion deployments (see Deploying Across Multiple Regions or Providers) and in distributed management mode (see the appendix “Sharing ICM Deployments”) is currently not supported.

    Note:

    When you create a custom VPC in the Google portal, you are required to create a default subnet. If you are provisioning with a bastion host and will use the subnet created by ICM, you should delete this default subnet before provisioning (or give it an address space that won't collide with the default address space 10.0.0.0/16).

Deploying InterSystems API Manager

The InterSystems API Manager (IAM) enables you to monitor and control traffic to and from your web-based APIs by routing it through a centralized gateway and forwarding API requests to appropriate target nodes. For complete information about IAM, see the IAM Guide.

IAM is included in your ICM deployment when you define a CN node in your definitions file, include the IAM field with the value true, and specify the InterSystems iam image using the IAMImage field, for example:

[
    {
        "Role": "DATA",
        "Count": "1",
        "LicenseKey": "ubuntu-sharding-iris-with-iam.key"
    },
    {
        "Role": "CN",
        "Count": "1",
        "IAM": "true",
        "IAMImage": "intersystems/iam:2.0"
    }
]

The IAM container is deployed during the deployment phase (see The icm run Command). You can optionally also deploy a Postgres container by specifying a Postgres image using the PostgresImage field; its default value is shown in General Parameters.
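
For example, to include the optional Postgres container you might add a field like the following to your defaults file; the image reference shown is a placeholder, and the actual default is listed in General Parameters:

"PostgresImage": "postgres:latest",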

Following successful deployment, a message like the example below is displayed:

$ icm run
...
-> IAM Portal available at: http://112.97.196.104.google.com:8080/overview#

IAM attaches to the InterSystems IRIS instance in the first (or only) iris container — for example, node 1 in a sharded cluster — to obtain an IAM-enabled license; if mirroring is enabled, this will be the primary of the first (or only) failover pair.

Note:

IAM cannot be deployed in containerless mode.

Monitoring in ICM

To monitor the InterSystems IRIS instances in any ICM deployment, you can include the System Alerting and Monitoring cluster monitoring solution.

You can also deploy third-party monitoring packages as part of your ICM configuration.

System Alerting and Monitoring

System Alerting and Monitoring, or SAM, is a cluster monitoring solution for InterSystems IRIS® data platform. Whatever configuration and platform your InterSystems IRIS-based application runs on, you can monitor it with SAM. For complete information about SAM, see the System Alerting and Monitoring Guide.

SAM is included in your ICM deployment when you include a SAM node in your definitions file; the DockerImage field is required and must specify the InterSystems sam image, as shown in the following:

[
    {
        "Role": "DM",
        "Count": "4",
        "LicenseKey": "ubuntu-sharding-iris.key"
    },
    {
        "Role": "SAM",
        "Count": "1",
        "DockerImage": "intersystems/sam:2.0"
    }
]

The SAM application comprises five containers. The SAM Manager container is deployed during the deployment phase (see The icm run Command).

The other four containers — Prometheus, Alertmanager, Grafana, and Nginx — are deployed during the provisioning phase (see The icm provision Command). The images from which these containers are deployed can be specified using the PrometheusImage, AlertmanagerImage, GrafanaImage, and NginxImage fields; their default values are shown in General Parameters.
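
For example, to pin these images explicitly rather than rely on the defaults, you might add fields like the following to your defaults file; the image references shown are placeholders, and the actual defaults are listed in General Parameters:

"PrometheusImage": "prom/prometheus:latest",
"AlertmanagerImage": "prom/alertmanager:latest",
"GrafanaImage": "grafana/grafana:latest",
"NginxImage": "nginx:latest",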

Following successful deployment, a message like the example below is displayed:

$ icm run
...
-> SAM Portal available at: http://112.97.196.104.google.com:8080/api/sam/app/index.csp#
Note:

SAM cannot be deployed in containerless mode.

Deploying Third-party Monitoring with ICM

You can deploy the third-party monitoring package of your choice (or any other third-party package) as part of your ICM configuration. The following example shows how to use the icm ssh command to add Weave Scope monitoring to all of the hosts in a deployment:

icm ssh -command "sudo curl -L git.io/scope -o /usr/local/bin/scope 2>&1"
icm ssh -command "sudo chmod +x /usr/local/bin/scope"
icm ssh -command "sudo /usr/local/bin/scope launch 2>&1"

Following these commands, you can access Weave Scope through port 4040 on any of the hosts displayed by the icm inventory command, that is, at http://hostname:4040.

Important:

This simple example is provided for illustration only. Weave Scope does not require authentication and is therefore inherently insecure; if port 4040 is open in your firewall, anybody who knows the URL can access your containers. Do not deploy third-party packages outside of a private network unless you are certain they are fully secured.

ICM Troubleshooting

When an error occurs during an ICM operation, ICM displays a message directing you to the log file in which information about the error can be found. Before beginning an ICM deployment, familiarize yourself with the log files and their locations as described in Log Files and Other ICM Files.

In addition to the topics that follow, please see Additional Docker/InterSystems IRIS Considerations in Running InterSystems IRIS in Containers for information about important considerations when creating and running InterSystems IRIS container images.

Host Node Restart and Recovery

When a cloud host node is shut down and restarted due to an unplanned outage or to planned action by the cloud provider (for example, for preventive maintenance) or user (for example, to reduce costs), its IP address and domain name may change, causing problems for both ICM and deployed applications (including InterSystems IRIS).

This behavior differs by cloud provider. GCP and Azure preserve IP address and domain name across host node restart by default, whereas this feature is optional on AWS and Tencent (see Elastic IP Feature).

Reasons a host node might be shut down include the following:

  • Unplanned outage

    • Power outage

    • Kernel panic

  • Preventive maintenance initiated by provider

  • Cost reduction strategy initiated by user

Methods for intentionally shutting down host nodes include:

  • Using the cloud provider user interface

  • Using ICM:

    icm ssh -command 'sudo shutdown'
    

Elastic IP Feature

The Elastic IP feature on AWS and Tencent preserves IP addresses and domain names across host node restarts. ICM disables this feature by default, in part because it incurs additional charges on stopped machines (but not on running ones). To enable this feature, set the ElasticIP field to true in your defaults.json file; be sure to review the feature for your provider (see Elastic IP Addresses in the AWS documentation or Elastic Public IP in the Tencent documentation).
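
For example, the relevant defaults.json addition is simply:

{
    ...
    "ElasticIP": "true"
}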

Recovery and Restart Procedure

If the IP address and domain name of a host node change, ICM can no longer communicate with the node and a manual update is therefore required, followed by an update to the cluster. The Weave network deployed by ICM includes a decentralized discovery service, which means that if at least one host node has kept its original IP address, the other host nodes will be able to reach it and reestablish all of their connections with one another. However, if the IP address of every host node in the cluster has changed, an additional step is needed to connect all the nodes in the Weave network to a valid IP address.

The manual update procedure is as follows:

  1. Go to the web console of the cloud provider and locate your instances there. Record the IP address and domain name of each, for example:

    Node                  IP Address      Domain Name
    ANDY-DATA-TEST-0001   54.191.233.2    ec2-54-191-233-2.amazonaws.com
    ANDY-DATA-TEST-0002   54.202.223.57   ec2-54-202-223-57.amazonaws.com
    ANDY-DATA-TEST-0003   54.202.223.58   ec2-54-202-223-58.amazonaws.com

  2. Edit the instances.json file (see The Instances File in the chapter “Essential ICM Elements”) and update the IPAddress and DNSName fields for each instance, for example:

    "Label" : "SHARDING",
    "Role" : "DATA",
    "Tag" : "TEST",
    "MachineName" : "ANDY-DATA-TEST-0001",
    "IPAddress" : "54.191.233.2",
    "DNSName" : "ec2-54-191-233-2.amazonaws.com",
    
  3. Verify that the values are correct using the icm inventory command:

    $ icm inventory
    Machine               IP Address    DNS Name                        Region   Zone
    -------               ----------    --------                        ------   ----
    ANDY-DATA-TEST-0001   54.191.233.2  ec2-54-191-233-2.amazonaws.com  us-east1 b
    ANDY-DATA-TEST-0002   54.202.223.57 ec2-54-202-223-57.amazonaws.com us-east1 b
    ANDY-DATA-TEST-0003   54.202.223.58 ec2-54-202-223-58.amazonaws.com us-east1 b
    
  4. Use the icm ps command to verify that the host nodes are reachable:

    
    $ icm ps -container weave
    Machine               IP Address      Container   Status   Health   Image
    -------               ----------      ---------   ------   ------   -----
    ANDY-DATA-TEST-0001   54.191.233.2    weave       Up                weaveworks/weave:2.0.4
    ANDY-DATA-TEST-0002   54.202.223.57   weave       Up                weaveworks/weave:2.0.4
    ANDY-DATA-TEST-0003   54.202.223.58   weave       Up                weaveworks/weave:2.0.4
    
    
  5. If all of the IP addresses have changed, select one of the new addresses, such as 54.191.233.2 in our example. Then connect each node to this IP address using the icm ssh command, as follows:

    $ icm ssh -command "weave connect --replace 54.191.233.2"
    Executing command 'weave connect 54.191.233.2' on host ANDY-DATA-TEST-0001...
    Executing command 'weave connect 54.191.233.2' on host ANDY-DATA-TEST-0002...
    Executing command 'weave connect 54.191.233.2' on host ANDY-DATA-TEST-0003...
    ...executed on ANDY-DATA-TEST-0001
    ...executed on ANDY-DATA-TEST-0002
    ...executed on ANDY-DATA-TEST-0003
    

Correcting Time Skew

If the system time within the ICM containers differs from standard time by more than a few minutes, the various cloud providers may reject requests from ICM. This can happen if the container is unable to reach an NTP server on startup (initial startup, or startup after being stopped or paused), or doesn't run for a period of time. The error appears in the terraform.err file as some variation on the following:

Error refreshing state: 1 error(s) occurred:

    # icm provision
    Error: Thread exited with value 1
    Signature expired: 20170504T170025Z is now earlier than 20170504T171441Z (20170504T172941Z - 15 min.)
    status code: 403, request id: 41f1c4c3-30ef-11e7-afcb-3d4015da6526

The solution is to manually run NTP, for example:

ntpd -nqp pool.ntp.org

and verify that the time is now correct. (See also the discussion of the --cap-add option in Launch ICM.)

Timeouts Under ICM

When the target system is under extreme load, various operations in ICM may time out. Many of these timeouts are not under direct ICM control (for example, from cloud providers); other operations are retried several times, for example SSH and JDBC connections.

SSH timeouts are sometimes not identified as such. For instance, in the following example, an SSH timeout manifests as a generic exception from the underlying library:

# icm cp -localPath foo.txt -remotePath /tmp/
2017-03-28 18:40:19 ERROR Docker:324 - Error: 
java.io.IOException: com.jcraft.jsch.JSchException: channel is not opened. 
2017-03-28 18:40:19 ERROR Docker:24 - java.lang.Exception: Errors occurred during execution; aborting operation 
        at com.intersystems.tbd.provision.SSH.sshCommand(SSH.java:419) 
        at com.intersystems.tbd.provision.Provision.execute(Provision.java:173) 
        at com.intersystems.tbd.provision.Main.main(Main.java:22)

In this case the recommended course of action is to retry the operation (after identifying and resolving its proximate cause).

Note that for security reasons ICM sets the default SSH timeout for idle sessions at ten minutes (60 seconds x 10 retries). These values can be changed by modifying the following fields in the /etc/ssh/sshd_config file:

ClientAliveInterval 60
ClientAliveCountMax 10
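
For example, to shorten the idle timeout to five minutes (60 seconds x 5 retries), you could change the values as follows and then restart the SSH service; the specific numbers are illustrative only:

ClientAliveInterval 60
ClientAliveCountMax 5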

Docker Bridge Network IP Address Range Conflict

For container networking, Docker uses a bridge network (see Use bridge networks in the Docker documentation) on subnet 172.17.0.0/16 by default. If this subnet is already in use on your network, collisions may occur that prevent Docker from starting up or prevent you from being able to reach your deployed host nodes. This problem can arise on the machine hosting your ICM container, your InterSystems IRIS cluster nodes, or both.

To resolve this, you can edit the bridge network’s IP configuration in the Docker configuration file to reassign the subnet to a range that is not in conflict with your own IP addresses (your IT department can help you determine this value). To make this change, add a line like the following to the Docker daemon configuration file:

"bip": "192.168.0.1/24"

If the problem arises with the ICM container, edit the file /etc/docker/daemon.json on the container’s host. If the problem arises with the host nodes in a deployed configuration, edit the file /ICM/etc/toHost/daemon.json in the ICM container; by default this file contains the value in the preceding example, which is likely to avoid problems with any deployment type except PreExisting.
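
For instance, if the host has no other daemon settings, a minimal daemon.json containing only this reassignment might look like the following; the subnet shown is the example value above, and you should substitute a range that does not conflict with your network:

{
    "bip": "192.168.0.1/24"
}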

Detailed information about the contents of the daemon.json file can be found in Daemon configuration file in the Docker documentation; see also Configure and troubleshoot the Docker daemon.

Weave Network IP Address Range Conflict

By default, the Weave network uses IP address range 10.32.0.0/12. If this conflicts with an existing network, you may see an error such as the following in log file installWeave.log:

Network 10.32.0.0/12 overlaps with existing route 10.0.0.0/8 on host
ERROR: Default --ipalloc-range 10.32.0.0/12 overlaps with existing route on host.
You must pick another range and set it on all hosts.

This is most likely to occur with provider PreExisting if the machines provided have undergone custom network configuration to support other software or local policies. If disabling or moving the other network is not an option, you can change the Weave configuration instead, using the following procedure:

  1. Edit the following file local to the ICM container:

    /ICM/etc/toHost/installWeave.sh
    
  2. Find the line containing the string weave launch. If you're confident there is no danger of overlap between Weave and the existing network, you can force Weave to continue using the default range by adding the --ipalloc-range option shown in the following:

    sudo /usr/local/bin/weave launch --ipalloc-range 10.32.0.0/12 --password $2 
    

    You can also simply move Weave to another private network, as follows:

    sudo /usr/local/bin/weave launch --ipalloc-range 172.30.0.0/16 --password $2
    
  3. Save the file.

  4. Reprovision the cluster.

Huge Pages

On certain architectures you may see an error similar to the following in the InterSystems IRIS messages log:

0 Automatically configuring buffers 
1 Insufficient privileges to allocate Huge Pages; nonroot instance requires CAP_IPC_LOCK capability for Huge Pages. 
2 Failed to allocate 1316MB shared memory using Huge Pages. Startup will retry with standard pages. If huge pages 
  are needed for performance, check the OS settings and consider marking them as required with the InterSystems IRIS 
  'memlock' configuration parameter.

This can be remedied by providing the following option to the icm run command:

-options "--cap-add IPC_LOCK"