
InterSystems Cloud Manager Guide
Using ICM

This chapter explains how to use ICM to deploy an InterSystems IRIS™ configuration in a public cloud. The sections that follow walk through the steps involved in using ICM to deploy a sample InterSystems IRIS configuration on AWS.
For comprehensive lists of the ICM commands and command-line options covered in detail in the following sections, see ICM Commands and Options.
ICM Use Cases
This chapter is focused on two typical ICM use cases, deploying the following two InterSystems IRIS configurations: a distributed cache cluster and a sharded cluster.
Most of the steps in the deployment process are the same for both configurations. The primary difference lies in the definitions files; see Define the Deployment for detailed contents. Output shown for the provisioning phase (see The icm provision Command) is from the distributed cache cluster; output shown for the deployment phase (see Deploy and Manage Services) is for the sharded cluster.
Launch ICM
ICM is provided as a Docker image. Everything required by ICM to carry out its provisioning, deployment, and management tasks — for example Terraform, the Docker client, and templates for the configuration files — is included in the ICM container. Therefore the only requirement for the Linux, macOS or Microsoft Windows system on which you launch ICM is that Docker is installed.
Important:
ICM is supported on Docker Enterprise Edition and Community Edition version 18.09 and later; only Enterprise Edition is supported for production environments.
Not all combinations of platform and Docker version are supported by Docker; for detailed information from Docker on compatibility, see the Compatibility Matrix and About Docker CE.
Identifying the Repository and Image
To download and run the ICM image, and to deploy the InterSystems IRIS container and others in the cloud using ICM, you need to identify the repository in which these InterSystems images are located and the credentials you need to log into that repository. The repository must be accessible from the internet (that is, not behind a firewall) in order for the cloud provider to download images.
InterSystems images are distributed as Docker tar archive files, available in the InterSystems Worldwide Response Center (WRC) download area. Your enterprise may already have added these images to its Docker repository; in this case, you should get the location of the repository and the needed credentials from the appropriate IT administrator. If your enterprise has a Docker repository but has not yet added the InterSystems images, get the location of the repository and the needed credentials, obtain the tar archive files containing the ICM and InterSystems IRIS images from the WRC, and add each of them to the repository using the following steps on the command line:
  1. Tag each image with the name of your repository. For example:
    docker tag docker.intersystems.com/intersystems/icm:2018.1.0.583 acme/icm:2018.1.0.583
  2. Log in to your repository. For example:
    docker login docker.acme.com
    Username: gsanchez@acme.com
    Password: **********
  3. Push each image to the repository. For example:
    docker push acme/icm:2018.1.0.583 
If your organization does not have an internet-accessible Docker repository, you can use the free (or very low cost) Docker Hub for your testing.
Running the ICM Container
To launch ICM from the command line on a system on which Docker is installed, use the docker run command, which combines the separate docker pull, docker create, and docker start commands to pull the ICM image (if necessary), create a container from it, and start the container.
For example:
docker run --name icm -it --cap-add SYS_TIME intersystems/icm:2018.1.0.583
The -i option makes the command interactive and the -t option opens a pseudo-TTY, giving you command line access to the container. From this point on, you can interact with ICM by invoking ICM commands on the pseudo-TTY command line. The --cap-add SYS_TIME option allows the container to interact with the clock on the host system, avoiding clock skew that may cause the cloud service provider to reject API commands.
The ICM container includes a /Samples directory that provides you with samples of the elements required by ICM for provisioning, configuration, and deployment. The /Samples directory makes it easy for you to provision and deploy using ICM out of the box. Eventually, you can use locations outside the container to store these elements and your InterSystems IRIS licenses, and either mount those locations as external volumes when you launch ICM or copy files into the ICM container using the docker cp command.
Of course, the ICM image can also be run by custom tools and scripts, and this can help you accomplish goals such as making these external locations available within the container, and saving your configuration files and your state directory (which is required to remove the infrastructure and services you provision) to persistent storage outside the container as well. A script, for example, could do the latter by capturing the current working directory in a variable and using it to mount that directory as a storage volume when running the ICM container, as follows:
#!/bin/bash
clear

# mount the current working directory into the container at /<its basename>
MOUNT=$(basename "$PWD")
docker run --name icm -it --volume "$PWD:/$MOUNT" --cap-add SYS_TIME intersystems/icm:stable 
printf "\nExited icm container\n"
printf "\nRemoving icm container...\nContainer removed:  "
docker rm icm
You can mount multiple external storage volumes when running the ICM container (or any other). For detailed information about the InterSystems IRIS feature that lets you store instance-specific data outside the container, see Durable %SYS for Persistent Instance Data in Running InterSystems IRIS in Containers.
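For example, a variant of the launch script shown above could mount two external locations at once. The config and license paths here are hypothetical, and the docker run command is echoed as a dry run rather than executed:

```shell
#!/bin/bash
# Hypothetical external locations to make available inside the container
CONFIG_DIR="$PWD/config"
LICENSE_DIR="$PWD/licenses"

# Build a docker run invocation with two --volume mounts
CMD="docker run --name icm -it --volume $CONFIG_DIR:/config --volume $LICENSE_DIR:/Samples/Licenses --cap-add SYS_TIME intersystems/icm:stable"

# Dry run: print the command instead of executing it
echo "$CMD"
# To actually launch ICM, replace the echo with: eval "$CMD"
```

Each additional location simply becomes another --volume option on the same command line.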
Note:
On a Windows host, you must enable the local drive on which the directory you want to mount as a volume is located using the Shared Drives option on the Docker Settings ... menu; see Using InterSystems IRIS Containers with Docker for Windows on InterSystems Developer Community for more information.
Important:
When an error occurs during an ICM operation, ICM displays a message directing you to the log file in which information about the error can be found. Before beginning an ICM deployment, familiarize yourself with the log files and their locations as described in Log Files and Other ICM Files.
Obtain Security-Related Files
ICM communicates securely with the cloud provider on which it provisions the infrastructure, with the operating system of each provisioned node, and with Docker and several InterSystems IRIS services following container deployment. Before defining your deployment, you must obtain the credentials and other files needed to enable secure communication.
Cloud Provider Credentials
To use ICM with one of the public cloud platforms, you must create an account and download administrative credentials. To do this, follow the instructions provided by the cloud provider; you can also find information about how to download your credentials once your account exists in the Provider-Specific Parameters section. In the ICM configuration files, you identify the location of these credentials using the Credentials parameter.
When using ICM with a vSphere private cloud, you can use an existing account with the needed privileges, or create a new one. You specify these using the Username and Password fields.
SSH and SSL/TLS Keys
ICM uses SSH to provide secure access to the operating system of provisioned nodes, and SSL/TLS to establish secure connections to Docker, InterSystems Web Gateway, JDBC, and mirrored InterSystems IRIS databases. The locations of the files needed to enable this secure communication are specified using several ICM parameters, including:
You can create these files using two scripts provided with ICM, located in the directory /ICM/bin in the ICM container, either for direct use with ICM or as a reference for which files are needed. The keygenSSH.sh script creates the needed SSH files and places them in the directory /Samples/ssh in the ICM container; the keygenTLS.sh script creates the needed SSL/TLS files and places them in /Samples/tls. You can then specify these locations when defining your deployment, or obtain your own files based on the contents of these directories.
For more information about the security files required by ICM and generated by the keygen* scripts, see ICM Security and Security-Related Parameters in the “ICM Reference” chapter.
Important:
The keys generated by these scripts, as well as your cloud provider credentials, must be fully secured, as they provide full access to any ICM deployments in which they are used.
The keys generated by the keygen* scripts are intended as a convenience for your use in your initial test deployments. (Some have strings specific to InterSystems Corporation.) In production, the needed keys should be generated or obtained in keeping with your company's security policies.
Define the Deployment
To provide the needed parameters to ICM, you must select values for a number of fields, based on your goal and circumstances, and then incorporate these into the defaults and definitions files to be used for your deployment. (See Configuration, State and Log Files for information about these files.) You can begin with the template defaults.json and definitions.json files provided in the ICM container in the /Samples directory tree, for example /Samples/AWS.
The difference in the deployment processes for the two target configurations described at the start of this chapter lies primarily in their separate definitions files. Field values they can share are included in a joint defaults file, while the nodes that must be provisioned and configured for each deployment are specified in their definitions files.
The following sections provide the content of both the shared defaults file and the separate definitions files. Each field/value pair is shown as it would appear in the configuration file. Note that the fields included here do not represent an exhaustive list of all applicable fields; see ICM Configuration Parameters for information about these and all ICM fields.
In general, which fields are included in defaults.json and which in definitions.json depends on your needs, but as noted in Configuration, State and Log Files, defaults.json is often used to provide values for multiple deployments in a particular category, for example those that are provisioned on the same platform, while definitions.json provides values for a particular deployment.
In this case, because both deployments are on AWS, if we assume that they also use the same credentials, deploy the same InterSystems IRIS image, and include Weave Scope monitoring, those fields can be included in defaults.json, while the fields that differ go in definitions.json. The tables are ordered to reflect this approach. As previously noted, some fields, such as Provider, must appear in the defaults files. The separate definitions files can be specified in an option on the provisioning command line, or the appropriate file can be swapped in as definitions.json in the current working directory before the icm provision command is executed (see Provision the Infrastructure).
For more information about the parameters listed in these tables, see ICM Configuration Parameters.
Note:
InterSystems IRIS is already installed in the InterSystems container (or the container you created based on it) deployed by ICM, and ICM automatically configures InterSystems IRIS after it is deployed, as needed for the role of each instance. There are, however, a number of ICM fields you can use to configure the InterSystems IRIS instances differently from the InterSystems IRIS and ICM defaults, for example the ISCglobals field used to configure the database caches of the instances defined in the sample Sharded Cluster Definitions File. Information about InterSystems IRIS configuration settings, their effects, and their installed defaults is provided in the Installation Guide, the System Administration Guide, and the InterSystems Parameter File Reference.
Shared Defaults File
The field/value pairs shown in the following table represent the contents of a defaults.json file that can be used for both the distributed cache cluster deployment and the sharded cluster deployment.
Note:
The pathnames provided in the fields specifying security files in this sample defaults file assume you have placed your AWS credentials in the /Samples/AWS directory and used the keygen*.sh scripts to generate the keys, as described in Obtain Security-Related Files. If you have generated or obtained your own keys, these may be internal paths to external storage volumes mounted when the ICM container is run, as described in Launch ICM. See ICM Security and Security-Related Parameters for additional information about these files.
Requirement Definition Field: Value
Provisioning platform
Amazon Web Services
"Provider": "AWS"
Platform details
Required details; see About AWS Regions and Availability Zones.
"Zone": "us-west-1c"
"Region": "us-west-1"
Default machine image and sizing
Default AWS AMI and instance type to provision, and default size of the data storage volume (overriding the ICM default). See About AWS AMIs and About AWS instance types.
"AMI": "ami-18726478"
"InstanceType": "m4.large"
"DataVolumeSize": "20"
SSHUser
Nonroot account with sudo access used by ICM for access to provisioned nodes.
"SSHUser": "ec2-user"
(For AWS, the required value depends on the AMI; see Provider-Specific Parameters for more information.)
Locations of security files
Needed credentials and key files; because the provider is AWS, the SSH2-format public key in /Samples/ssh/ is specified.
"Credentials": "/Samples/AWS/credentials"
"SSHPublicKey": "/Samples/ssh/secure-ssh2.pub"
"SSHPrivateKey": "/Samples/ssh/secure-ssh2"
"TLSKeyDir": "/Samples/tls/"
Location of InterSystems IRIS license keys
License keys to be served to the InterSystems IRIS instances deployed on the provisioned nodes.
"LicenseDir": "/Samples/Licenses"
Monitoring
Install Weave Scope, optionally provide authentication credentials; see Monitoring in ICM in the “ICM Reference” chapter.
"Monitor": "scope"
"ProxyImage": "intersystems/https-proxy-auth:stable"
"MonitorUsername": "..."
"MonitorPassword": "..."
Image to deploy, repository credentials, version of Docker to install on nodes
Latest InterSystems IRIS image, credentials to log into Docker repository, version of Docker to install; see Identifying the Repository and Image and General Parameters.
"DockerImage": "intersystems/iris:stable"
"DockerUsername": "..."
"DockerPassword": "..."
"DockerVersion": "ce-18.09.1.ce"
Naming scheme for provisioned nodes
ACME-role-TEST-NNNN
"Label": "ACME"
"Tag": "TEST"
InterSystems IRIS settings
Password for deployed InterSystems IRIS instances.
The password is specified on the deployment command line (see Deploy and Manage Services) to avoid displaying it in a configuration file.
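Assembled from the field/value pairs above, the complete shared defaults.json might look like the following sketch (the "..." values are placeholders for your repository and monitoring credentials, and field order is not significant):

```json
{
    "Provider": "AWS",
    "Zone": "us-west-1c",
    "Region": "us-west-1",
    "AMI": "ami-18726478",
    "InstanceType": "m4.large",
    "DataVolumeSize": "20",
    "SSHUser": "ec2-user",
    "Credentials": "/Samples/AWS/credentials",
    "SSHPublicKey": "/Samples/ssh/secure-ssh2.pub",
    "SSHPrivateKey": "/Samples/ssh/secure-ssh2",
    "TLSKeyDir": "/Samples/tls/",
    "LicenseDir": "/Samples/Licenses",
    "Monitor": "scope",
    "ProxyImage": "intersystems/https-proxy-auth:stable",
    "MonitorUsername": "...",
    "MonitorPassword": "...",
    "DockerImage": "intersystems/iris:stable",
    "DockerUsername": "...",
    "DockerPassword": "...",
    "DockerVersion": "ce-18.09.1.ce",
    "Label": "ACME",
    "Tag": "TEST"
}
```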
Distributed Cache Cluster Definitions File
The definitions.json file for the distributed cache cluster must define the following nodes:
This configuration is illustrated in the following figure:
Distributed Cache Cluster to be Deployed by ICM
In addition, the Mirror field must be set to True in the definitions file, as it is not used in the sharded cluster deployment and thus cannot appear in the defaults file.
The table that follows lists the field/value pairs that are required for this configuration.
Requirement Definition Field: Value
Mirroring
If Mirror is True, when two DM nodes are defined, they are mirrored.
"Mirror": "true"
Nodes for target InterSystems IRIS configuration, including provider-specific characteristics
Two data servers using a standard InterSystems IRIS license. Instance type and data volume size override the defaults file (see Shared Defaults File). Automatically configured as a mirror due to the Mirror setting (previous row).
"Role": "DM"
"Count": "2"
"LicenseKey": "standard-iris.key"
"InstanceType": "m4.xlarge"
"OSVolumeSize": "32"
"DataVolumeSize": "15"
Three load-balanced application servers using a standard InterSystems IRIS license, naming starts at 0003. Load balancer is provisioned automatically when LoadBalancer is True.
"Role": "AM"
"Count": "3"
"StartCount": "3"
"LicenseKey": "standard-iris.key"
"LoadBalancer": "true"
Arbiter node for data server mirror. Use of the arbiter Docker image overrides the iris image specified in the defaults file. No license needed; naming is 0006. Instance type overrides the defaults file.
"Role": "AR"
"Count": "1"
"StartCount": "6"
"DockerImage": "intersystems/arbiter:stable"
"InstanceType": "t2.small"
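Putting the rows above together, the distributed cache cluster definitions.json might look like the following sketch. Placing the Mirror field inside the DM definition is an assumption made here for illustration; see ICM Configuration Parameters for the exact file structure:

```json
[
    {
        "Role": "DM",
        "Count": "2",
        "Mirror": "true",
        "LicenseKey": "standard-iris.key",
        "InstanceType": "m4.xlarge",
        "OSVolumeSize": "32",
        "DataVolumeSize": "15"
    },
    {
        "Role": "AM",
        "Count": "3",
        "StartCount": "3",
        "LicenseKey": "standard-iris.key",
        "LoadBalancer": "true"
    },
    {
        "Role": "AR",
        "Count": "1",
        "StartCount": "6",
        "DockerImage": "intersystems/arbiter:stable",
        "InstanceType": "t2.small"
    }
]
```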
Sharded Cluster Definitions File
The definitions.json file for the sharded cluster configuration must define the following nodes:
These nodes are illustrated in the following figure:
Sharded Cluster to be Deployed by ICM
The table that follows lists the field/value pairs that are required for this configuration.
Requirement Definition Field: Value
Nodes for target InterSystems IRIS configuration, including provider-specific characteristics
Shard master data server using an InterSystems IRIS sharding license. Instance type overrides defaults file due to database cache requirements. Database cache size specified (8KB block size). Data volume size overrides defaults file.
"Role": "DM"
"LicenseKey": "sharding-iris.key"
"InstanceType": "m4.4xlarge"
"ISCglobals": "0,0,40,960,0,0,0"
"DataVolumeSize": "60"
Four shard data servers using an InterSystems IRIS sharding license; naming starts at 0002. Instance type, database cache size, and data volume size override the defaults file.
"Role": "DS"
"Count": "4"
"StartCount": "2"
"LicenseKey": "sharding-iris.key"
"InstanceType": "m4.10xlarge"
"ISCglobals": "0,0,143360,0,0,0"
"DataVolumeSize": "115"
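Similarly, the rows above can be assembled into a sketch of the sharded cluster definitions.json (see ICM Configuration Parameters for the exact file structure):

```json
[
    {
        "Role": "DM",
        "LicenseKey": "sharding-iris.key",
        "InstanceType": "m4.4xlarge",
        "ISCglobals": "0,0,40,960,0,0,0",
        "DataVolumeSize": "60"
    },
    {
        "Role": "DS",
        "Count": "4",
        "StartCount": "2",
        "LicenseKey": "sharding-iris.key",
        "InstanceType": "m4.10xlarge",
        "ISCglobals": "0,0,143360,0,0,0",
        "DataVolumeSize": "115"
    }
]
```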
Note:
For more detailed information about the specifics of deploying a sharded cluster, such as database cache size and data volume size requirements, see Deploying the Sharded Cluster in the “Horizontally Scaling InterSystems IRIS for Data Volume with Sharding” chapter of the Scalability Guide.
Provision the Infrastructure
ICM provisions cloud infrastructure using the HashiCorp Terraform tool.
Note:
ICM gives you the option of provisioning your own existing cloud or virtual compute nodes or physical servers to deploy containers on; see Deploying on a Preexisting Cluster for more information.
The icm provision Command
The icm provision command allocates and configures compute nodes, using the field values provided in the definitions.json and defaults.json files, as well as default values for unspecified parameters where applicable. By default, the input files in the current working directory are used; you can specify another location using the -definitions and -defaults options. In the case of the separate definitions files for the two target configurations (see Define the Deployment), the appropriate file can be swapped in as definitions.json in the current working directory before the icm provision command is executed.
Note:
If you use the -definitions or -defaults options to specify a nondefault location for one or both of these configuration files, you must also do so for all subsequent ICM commands you run for this deployment. For example, if you execute icm provision -defaults ./config_files, you must add -defaults ./config_files to all subsequent commands you issue for that deployment.
While the provisioning operation is ongoing, ICM provides status messages regarding the plan phase (the Terraform phase that validates the desired infrastructure and generates state files) and the apply phase (the Terraform phase that accesses the cloud provider, carries out allocation of the machines, and updates state files). Because ICM runs Terraform in multiple threads, the order in which machines are provisioned and in which additional actions are applied to them is not deterministic. This is illustrated in the sample output that follows.
At completion, ICM also provides a summary of the compute nodes and associated components that have been provisioned, and outputs a command line which can be used to delete the infrastructure at a later date.
Important:
Unprovisioning public cloud compute nodes in a timely manner avoids unnecessary expense. Because the -stateDir option to the icm unprovision command is mandatory, you may find it convenient to copy the icm unprovision command provided in the output, so you can easily replicate it when unprovisioning. This output also appears in the icm.log file.
The following example is excerpted from the output of provisioning of the distributed cache cluster described in Define the Deployment.
$ icm provision -definitions definitions_DCC.json
Starting init of ACME-TEST...
...completed init of ACME-TEST
Starting plan of ACME-DM-TEST...
...
Starting refresh of ACME-TEST...

...
Starting apply of ACME-DM-TEST...
...
Copying files to ACME-DM-TEST-0002...
...
Configuring ACME-AM-TEST-0003...
...
Mounting volumes on ACME-AM-TEST-0004...
...
Installing Docker on ACME-AM-TEST-0003...
...
Installing Weave Net on ACME-DM-TEST-0001...
...
Collecting Weave info for ACME-AR-TEST-0006...
...
...collected Weave info for ACME-AM-TEST-0005
...installed Weave Net on ACME-AM-TEST-0004

Machine            IP Address       DNS Name                      
-------             ----------       --------                      
ACME-DM-TEST-0001   00.53.183.209    ec2-00-53-183-209.us-west-1.compute.amazonaws.com
ACME-DM-TEST-0002   00.53.183.185    ec2-00-53-183-185.us-west-1.compute.amazonaws.com
ACME-AM-TEST-0003   00.56.59.42      ec2-00-56-59-42.us-west-1.compute.amazonaws.com
ACME-AM-TEST-0005   00.67.1.11       ec2-00-67-1-11.us-west-1.compute.amazonaws.com
ACME-AM-TEST-0004   00.193.117.217   ec2-00-193-117-217.us-west-1.compute.amazonaws.com
ACME-LB-TEST-0002   (virtual AM)     ACME-AM-TEST-1546467861.amazonaws.com
ACME-AR-TEST-0006   00.53.201.194    ec2-00-53-201-194.us-west-1.compute.amazonaws.com
To destroy: icm unprovision -stateDir /Samples/AWS/ICM-8620265620732464265 [-cleanUp] [-force]

During the provisioning operation, ICM creates or updates state and log files in the state directory (created by ICM, with a name beginning with ICM-) and when finished creates the instances.json file, which serves as input to subsequent deployment and management commands. (See The Instances File in the chapter “Essential ICM Elements” for more information about this file.) By default, the instances file is created in the current working directory; you can change this using the -instances option, but note that if you do, you must supply the alternate location by using the -instances option with all subsequent commands.
Because interactions with cloud providers sometimes involve high latency leading to timeouts and internal errors on the provider side, the icm provision command is fully reentrant to make the provisioning process as resilient as possible. If errors are encountered during provisioning, the command can be issued multiple times until ICM completes all the required tasks for all the specified nodes without error. When this happens, however, you must use the -stateDir option to specify the state directory (see The State Directory and State Files) in your repeated execution of the command, to indicate that provisioning is already in process and provide the needed information about what has been done and what hasn’t. For example, suppose you encounter the problem in the following example:
$ icm provision
Starting plan of ACME-DM-TEST...
...completed plan of ACME-DM-TEST
Starting apply of ACME-AM-TEST...
Error: Thread exited with value 1
See /Samples/AWS/ICM-3078941885014382438/ACME-AM-TEST/terraform.err
To reprovision, specify --stateDir=/Samples/AWS/ICM-3078941885014382438
Review the indicated errors, fix as needed, then run icm provision again with the -stateDir option, as in the following:
$ icm provision --stateDir=/Samples/AWS/ICM-3078941885014382438
Starting plan of ACME-DM-TEST...
...completed plan of ACME-DM-TEST
Starting apply of ACME-DM-TEST...
...completed apply of ACME-DM-TEST
[...]
To destroy: icm unprovision -stateDir /Samples/AWS/ICM-3078941885014382438 [-cleanUp] [-force]
Even when provisioning is successful, you can run the icm provision command again (with the -stateDir option) after making changes to your configuration files to alter the provisioned infrastructure. For example, you can change storage volume sizes for some of the nodes in the definitions file and execute icm provision again to reprovision with the new sizes.
By default, when issuing the icm provision command to modify existing infrastructure, ICM prompts you to confirm; you can avoid this by using the -force option, for example when using a script.
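Because the command is reentrant, a wrapper script can simply retry until provisioning succeeds. A minimal sketch follows, with the icm call stubbed out as an echo for illustration (the state directory is the one from the example above):

```shell
#!/bin/bash
# Retry icm provision until it completes, reusing the same state directory.
STATE_DIR="/Samples/AWS/ICM-3078941885014382438"
MAX_TRIES=3

provision() {
  # Stub for illustration; in real use this would run:
  #   icm provision --stateDir "$1"
  echo "icm provision --stateDir $1"
}

for i in $(seq 1 "$MAX_TRIES"); do
  if provision "$STATE_DIR"; then
    echo "provisioning completed"
    break
  fi
  echo "attempt $i failed; retrying..."
done
```

A real wrapper would also inspect the terraform.err file named in the failure output before retrying.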
Infrastructure Management Commands
The commands in this section are used to manage the infrastructure you have provisioned using ICM.
icm inventory
The icm inventory command lists the provisioned nodes, as at the end of the provisioning output, based on the information in the instances.json file (see The Instances File in the chapter “Essential ICM Elements”). For example:
$ icm inventory
Machine            IP Address       DNS Name                      
-------            ----------       --------                      
ACME-DM-TEST-0001   00.53.183.209-   ec2-52-53-183-209.us-west-1.compute.amazonaws.com
ACME-DM-TEST-0002   00.53.183.185+   ec2-52-53-183-185.us-west-1.compute.amazonaws.com
ACME-AM-TEST-0003   00.56.59.42      ec2-13-56-59-42.us-west-1.compute.amazonaws.com
ACME-AM-TEST-0005   00.67.1.11       ec2-54-67-1-11.us-west-1.compute.amazonaws.com
ACME-AM-TEST-0004   00.193.117.217   ec2-54-193-117-217.us-west-1.compute.amazonaws.com
ACME-LB-TEST-0002   (virtual AM)     ACME-AM-TEST-1546467861.amazonaws.com
ACME-AR-TEST-0006   00.53.201.194    ec2-52-53-201-194.us-west-1.compute.amazonaws.com
You can also use the -machine or -role options to filter by node name or role, for example, with the same cluster as in the preceding example:
$ icm inventory -role AM
Machine            IP Address       DNS Name                      
-------            ----------       --------                      
ACME-AM-TEST-0003   00.56.59.42      ec2-13-56-59-42.us-west-1.compute.amazonaws.com
ACME-AM-TEST-0005   00.67.1.11       ec2-54-67-1-11.us-west-1.compute.amazonaws.com
ACME-AM-TEST-0004   00.193.117.217   ec2-54-193-117-217.us-west-1.compute.amazonaws.com
icm ssh
The icm ssh command runs an arbitrary command on the specified compute nodes. Because mixing output from multiple commands would be hard to interpret, the output is written to files and a list of the output files is provided, for example:
$ icm ssh -command "ping -c 5 intersystems.com" -role DM
Executing command 'ping -c 5 intersystems.com' on ACME-DM-TEST-0001...
Executing command 'ping -c 5 intersystems.com' on ACME-DM-TEST-0002...
...output in ./ICM-4780136574/ACME-DM-TEST/ACME-DM-TEST-0001/ssh.out
...output in ./ICM-4780136574/ACME-DM-TEST/ACME-DM-TEST-0002/ssh.out
However, when the -machine or -role options are used to specify exactly one node, as in the following, the output is also written to the console:
$ icm ssh -command "df -k" -machine ACME-DM-TEST-0001
Executing command 'df -k' on ACME-DM-TEST-0001...
...output in ./ICM-4780136574/ACME-DM-TEST/ACME-DM-TEST-0001/ssh.out 

Filesystem     1K-blocks    Used Available Use% Mounted on
rootfs          10474496 2205468   8269028  22% /
tmpfs            3874116       0   3874116   0% /dev
tmpfs            3874116       0   3874116   0% /sys/fs/cgroup
/dev/xvda2      33542124 3766604  29775520  12% /host
/dev/xvdb       10190100   36888   9612540   1% /irissys/data
/dev/xvdc       10190100   36888   9612540   1% /irissys/wij
/dev/xvdd       10190100   36888   9612540   1% /irissys/journal1
/dev/xvde       10190100   36888   9612540   1% /irissys/journal2
shm                65536     492     65044   1% /dev/shm
The icm ssh command can also be used in interactive mode to execute long-running, blocking, or interactive commands on a compute node. Unless the command is run on a single-node deployment, the -interactive flag must be accompanied by a -role or -machine option restricting the command to a single node. If the -command option is not used, the destination user's default shell (for example bash) is launched.
See icm exec for an example of running a command interactively.
Note:
Two commands described in Service Management Commands, icm exec (which runs an arbitrary command on the specified containers) and icm session (which opens an interactive session for the InterSystems IRIS instance on a specified node) can be grouped with icm ssh as a set of powerful tools for interacting with your ICM deployment.
icm scp
The icm scp command securely copies a file or directory from the local ICM container to the host OS of the specified node or nodes. The command syntax is as follows:
icm scp -localPath local-path [-remotePath remote-path]
Both localPath and remotePath can be either files or directories. If remotePath is a directory, it must contain a trailing forward slash (/), or it will be assumed to be a file. If both are directories, the contents of the local directory are recursively copied; if you want the directory itself to be copied, remove the trailing slash (/) from localPath.
The default for the optional remote-path argument is /home/ssh-user. The root directory of this path, /home, is the default home directory; to change it, specify a different root directory using the Home field. The user specified by the SSHUser field must have the needed permissions for remotePath.
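A minimal sketch of how that default is resolved, assuming the Home field keeps its default value and SSHUser has the value from the sample defaults file:

```shell
HOME_ROOT="/home"       # Home field (default root directory)
SSH_USER="ec2-user"     # SSHUser field from the sample defaults file
DEFAULT_REMOTE_PATH="$HOME_ROOT/$SSH_USER"
echo "$DEFAULT_REMOTE_PATH"   # prints /home/ec2-user
```

Changing the Home field changes HOME_ROOT, and with it the default destination for icm scp.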
Note:
See also the icm cp command, which copies a local file or directory on the specified node into the specified container.
Deploy and Manage Services
ICM carries out deployment of software services using Docker images, which it runs as containers by making calls to Docker. Containerized deployment using images supports ease of use and DevOps adaptation while avoiding the risks of manual upgrade. In addition to Docker, ICM also carries out some InterSystems IRIS-specific configuration over JDBC.
There are many container management and orchestration tools available, and these can be used to extend ICM’s deployment and management capabilities.
The icm run Command
The icm run command pulls, creates, and starts a container from the specified image on each of the provisioned nodes. By default, the image specified by the DockerImage field in the configuration files is used, and the name of the deployed container is iris. This name is reserved for and should be used only for containers created from the following InterSystems images (or images based on these images):
By including the DockerImage field in each node definition in the definitions.json file, you can run different InterSystems IRIS images on different node types. For example, you must do this to run the arbiter image on the AR node and the webgateway image on WS nodes while running the iris image on the other nodes.
Important:
When the DockerImage field specifies the iris or spark image in the defaults.json file and you include an AR or WS definition in the definitions.json file, you must include the DockerImage field in the AR or WS definition to override the default and specify the appropriate image (arbiter or webgateway, respectively) to avoid configuration errors.
Docker images from InterSystems comply with the OCI support specification, and are supported on Docker Enterprise Edition and Community Edition 18.03 and later. The version of Docker installed on provisioned nodes by the ICM command can be specified using the DockerVersion parameter; for more information, see General Parameters.
You can also use the -image and -container command-line options with icm run to specify a different image and container name. This allows you to deploy multiple containers created from multiple images on each provisioned node by using the icm run command multiple times — the first time to run the images specified by the DockerImage fields in the node definitions and deploy the iris container (of which there can be only one) on each node, as described in the foregoing paragraphs, and one or more subsequent times with the -image and -container options to run a custom image on all of the nodes or some of the nodes. Each container running on a given node must have a unique name. The -machine and -role options can also be used to restrict container deployment to a particular node, or to nodes of a particular type, for example, when deploying your own custom container on a specific provisioned node.
Another frequently used option, -iscPassword, specifies the InterSystems IRIS password to set for all deployed InterSystems IRIS containers; this value could be included in the configuration files, but the command line option avoids committing a password to a plain-text record. If the InterSystems IRIS password is not provided by either method, ICM prompts for it (with typing masked).
Note:
For security, ICM never transmits the InterSystems IRIS password (however specified) in plain text, but instead generates a hashed password and salt locally, then sends these using SSH to the deployed InterSystems IRIS containers on the compute nodes.
Given all of the preceding, consider the following three examples of container deployment using the icm run command. (These do not present complete procedures, but are limited to the procedural elements relating to the deployment of particular containers on particular nodes.)
Bear in mind the following further considerations:
Additional Docker options, such as --volume, can be specified on the icm run command line using the -options option, for example:
icm run -options "--volume /shared:/host" image intersystems/iris:stable
For more information on the -options option, see Using ICM with Custom and Third-Party Containers.
The -command option can be used with icm run to provide arguments to (or in place of) the Docker entry point; for more information, see Overriding Default Commands.
Because ICM issues Docker commands in multiple threads, the order in which containers are deployed on nodes is not deterministic. This is illustrated in the example that follows.
Important:
Unlike the icm provision command, the icm run command cannot simply be repeated if it fails on one or more nodes. Generally speaking, there are two causes for deployment failures.
The following example represents output from deployment of the sharded cluster configuration described in Define the Deployment. Repetitive lines are omitted for brevity.
$ icm run -definitions definitions_cluster.json
Executing command 'docker login' on ACME-DM-TEST-0001...
...output in /Samples/AWS/ICM-8620265620732464265/Sample-DM-TEST/ACME-DM-TEST-0001/docker.out
...
Pulling image intersystems/iris:stable on SHARD-DM-TEST-0001...
...pulled SHARD-DM-TEST-0001 image intersystems/iris:stable
...
Creating container iris on ACME-DS-TEST-0002...
...
Copying license directory /Samples/license/ to ACME-AM-TEST-0003...
...
Starting container iris on ACME-DS-TEST-0004...
...
Waiting for InterSystems IRIS to start on ACME-DS-TEST-0002...
...
Configuring SSL on ACME-DM-TEST-0001...
...
Enabling ECP on ACME-DS-TEST-0003...
...
Setting System Mode on ACME-DS-TEST-0002...
...
Acquiring license on ACME-DS-TEST-0002...
...
Enabling shard service on ACME-DM-TEST-0001...
...
Assigning shards on ACME-DM-TEST-0001...
...
Configuring application server on ACME-AM-TEST-0003...
...
Management Portal available at: http://ec2-00-153-49-109.us-west-1.compute.amazonaws.com:52773/csp/sys/UtilHome.csp 
At completion, ICM outputs a link to the Management Portal of the appropriate InterSystems IRIS instance. In this case, the provided Management Portal link is for the shard master data server running in the InterSystems IRIS container on ACME-DM-TEST-0001.
Container Management Commands
The commands in this section are used to manage the containers you have deployed on your provisioned infrastructure.
icm ps
When deployment is complete, the icm ps command shows you the run state of containers running on the nodes, for example:
$ icm ps -container iris
Machine              IP Address      Container    Status   Health    Image
-------              ----------      ---------    ------   ------    -----
ACME-DS-TEST-0004    00.56.140.23    iris         Up       healthy   intersystems/iris:stable
ACME-DS-TEST-0003    00.53.190.37    iris         Up       healthy   intersystems/iris:stable
ACME-DS-TEST-0002    00.67.116.202   iris         Up       healthy   intersystems/iris:stable
ACME-DM-TEST-0001    00.153.49.109   iris         Up       healthy   intersystems/iris:stable
If the -container restriction is omitted, all containers running on the nodes are listed, including both other containers deployed by ICM (for example, Weave network containers, or any custom or third-party containers you deployed using the icm run command) and any deployed by other means after completion of the ICM deployment.
Beyond node name, IP address, container name, and the image the container was created from, the icm ps command includes the following columns:
Additional deployment and management phase commands are listed in the sections that follow. For complete information about these commands, see ICM Reference.
icm stop
The icm stop command stops the specified containers (or iris by default) on the specified nodes, or on all nodes if no machine or role constraints are provided. For example, to stop the InterSystems IRIS containers on the data servers in the sharded cluster configuration:
$ icm stop -container iris -role DS

Stopping container iris on ACME-DS-TEST-0002...
Stopping container iris on ACME-DS-TEST-0004...
Stopping container iris on ACME-DS-TEST-0003...
...completed stop of container iris on ACME-DS-TEST-0004
...completed stop of container iris on ACME-DS-TEST-0002
...completed stop of container iris on ACME-DS-TEST-0003
icm start
The icm start command starts the specified containers (or iris by default) on the specified nodes, or on all nodes if no machine or role constraints are provided. For example, to restart one of the stopped data server InterSystems IRIS containers:
$ icm start -container iris -machine ACME-DS-TEST-0002
Starting container iris on ACME-DS-TEST-0002...
...completed start of container iris on ACME-DS-TEST-0002
icm pull
The icm pull command downloads the specified image to the specified machines. For example, to add an image to the shard master data server in the sharded cluster:
$ icm pull -image intersystems/webgateway:stable -role DM
Pulling ACME-DM-TEST-0001 image intersystems/webgateway:stable...
...pulled ACME-DM-TEST-0001 image intersystems/webgateway:stable

Note that the -image option is not required if the image you want to pull is the one specified by the DockerImage field in the definitions file, for example:
"DockerImage": "intersystems/iris:stable",
Although the icm run command automatically pulls any images not already present on the host, an explicit icm pull might be desirable for testing, staging, or other purposes.
icm rm
The icm rm command deletes the specified container (or iris by default), but not the image from which it was started, from the specified nodes, or from all nodes if no machine or role is specified. Only a stopped container can be deleted.
icm upgrade
The icm upgrade command replaces the specified container on the specified machines. ICM orchestrates the following sequence of events to carry out an upgrade:
  1. Pull the new image
  2. Create the new container
  3. Stop the existing container
  4. Remove the existing container
  5. Start the new container
By staging the new image in steps 1 and 2, the downtime required between steps 3-5 is kept relatively short.
For example, to upgrade the InterSystems IRIS container on one of the application servers:
$ icm upgrade -image intersystems/iris:latest -machine ACME-AM-TEST-0003
Pulling ACME-AM-TEST-0003 image intersystems/iris:latest...
...pulled ACME-AM-TEST-0003 image intersystems/iris:latest
Stopping container ACME-AM-TEST-0003...
...completed stop of container ACME-AM-TEST-0003
Removing container ACME-AM-TEST-0003...
...removed container ACME-AM-TEST-0003
Running image intersystems/iris:latest in container ACME-AM-TEST-0003...
...running image intersystems/iris:latest in container ACME-AM-TEST-0003
The -image option is required for the icm upgrade command. When the upgrade is complete, the value of the DockerImage field in the instances.json file (see The Instances File in the chapter “Essential ICM Elements”) is updated with the image you specified.
If you are upgrading a container other than iris, you must use the -container option to specify the container name.
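A hedged sketch of such an upgrade, using the hypothetical monitor container and acme/monitor image from the earlier custom-container example, restricted to the WS nodes:

```shell
$ icm upgrade -container monitor -image acme/monitor:2.0 -role WS
```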
Service Management Commands
These commands let you interact with the services running in your deployed containers, including InterSystems IRIS.
A significant feature of ICM is the ability it provides to interact with the nodes of your deployment on several levels: with the node itself, with the container deployed on it, and with the running InterSystems IRIS instance inside the container. The icm ssh command (described in Infrastructure Management Commands), which lets you run a command on the specified compute nodes, can be grouped with the first two commands described in this section, icm exec (run a command in the specified containers) and icm session (open an interactive session for the InterSystems IRIS instance on a specified node), as a set of powerful tools for interacting with your ICM deployment. These multiple levels of interaction are shown in the following illustration.
Interactive ICM Commands
icm exec
The icm exec command runs an arbitrary command in the specified containers, for example
$ icm exec -command "df -k" -machine ACME-DM-TEST-0001
Executing command in container iris on ACME-DM-TEST-0001
...output in ./ICM-4780136574/ACME-DM-TEST/ACME-DM-TEST-0001/docker.out

Filesystem     1K-blocks    Used Available Use% Mounted on
rootfs          10474496 2205468   8269028  22% /
tmpfs            3874116       0   3874116   0% /dev
tmpfs            3874116       0   3874116   0% /sys/fs/cgroup
/dev/xvda2      33542124 3766604  29775520  12% /host
/dev/xvdb       10190100   36888   9612540   1% /irissys/data
/dev/xvdc       10190100   36888   9612540   1% /irissys/wij
/dev/xvdd       10190100   36888   9612540   1% /irissys/journal1
/dev/xvde       10190100   36888   9612540   1% /irissys/journal2
shm                65536     492     65044   1% /dev/shm
Because mixing output from multiple commands would be hard to interpret, when the command is executed on more than one node, the output is written to files and a list of output files is provided.
Additional Docker options, such as --env, can be specified on the icm exec command line using the -options option; for more information on the -options option, see Using ICM with Custom and Third-Party Containers.
Because executing long-running, blocking, or interactive commands within a container can cause ICM to time out waiting for the command to complete or for user input, the icm exec command can also be used in interactive mode. Unless the command is run on a single-node deployment, the -interactive flag must be accompanied by a -role or -machine option restricting the command to a single node. A good example is running a shell in the container:
$ icm exec -command bash -machine ACME-AM-TEST-0004 -interactive
Executing command 'bash' in container iris on ACME-AM-TEST-0004...
[root@localhost /] $ whoami
root
[root@localhost /] $ hostname
iris-ACME-AM-TEST-0004
[root@localhost /] $ exit
Another example of a command to execute interactively within a container is an InterSystems IRIS command that prompts for user input, for example iris stop, which asks whether to broadcast a message before shutting down the InterSystems IRIS instance.
icm session
When used with the -interactive option, the icm session command opens an interactive session for the InterSystems IRIS instance on the node you specify. The -namespace option can be used to specify the namespace in which the session starts. For example:
$ icm session -interactive -machine ACME-AM-TEST-0003 -namespace %SYS

Node: iris-ACME-AM-TEST-0003, Instance: IRIS

Username: _SYSTEM
Password: ********
%SYS> 
You can also use the -command option to provide a routine to be run in the InterSystems IRIS session, for example:
icm session -interactive -machine ACME-AM-TEST-0003 -namespace %SYS -command ^MIRROR
Additional Docker options, such as --env, can be specified on the icm session command line using the -options option; for more information on the -options option, see Using ICM with Custom and Third-Party Containers.
Without the -interactive option, the icm session command runs the InterSystems IRIS ObjectScript snippet specified by the -command option on the specified node or nodes. The -namespace option can be used to specify the namespace in which the snippet runs. Because mixing output from multiple commands would be hard to interpret, when the command is executed on more than one node, the output is written to files and a list of output files is provided. For example:
$ icm session -command 'Write ##class(%File).Exists("test.txt")' -role AM
Executing command in container iris on ACME-AM-TEST-0003...
Executing command in container iris on ACME-AM-TEST-0004...
Executing command in container iris on ACME-AM-TEST-0005...
...output in ./ICM-4780136574/ACME-DM-TEST/ACME-AM-TEST-0003/ssh.out
...output in ./ICM-4780136574/ACME-DM-TEST/ACME-AM-TEST-0004/ssh.out
...output in ./ICM-4780136574/ACME-DM-TEST/ACME-AM-TEST-0005/ssh.out
When the specified -machine or -role options limit the command to a single node, output is also written to the console, for example
$ icm session -command 'Write ##class(%File).Exists("test.txt")' -role DM
Executing command in container iris on ACME-DM-TEST-0001
...output in ./ICM-4780136574/ACME-DM-TEST/ACME-DM-TEST-0001/docker.out

0
icm cp
The icm cp command copies a local file or directory on the specified node into the specified container. The command syntax is as follows:
icm cp -localPath local-path [-remotePath remote-path]
Both localPath and remotePath can be either files or directories. If both are directories, the contents of the local directory are recursively copied; if you want the directory itself to be copied, include it in remotePath.
The remotePath argument is optional and if omitted defaults to /tmp; if remotePath is a directory, it must end with a forward slash (/), or it is assumed to be a file. You can use the -container option to copy to a container other than the default iris.
Note:
See also the icm scp command, which securely copies a file or directory from the local ICM container to the specified host OS.
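A sketch of copying a local directory into the iris container on the DM node (the paths and the -role restriction shown here are illustrative); note the trailing slash marking remotePath as a directory:

```shell
$ icm cp -localPath /Samples/ssl/ -remotePath /tmp/ssl/ -role DM
```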
icm sql
The icm sql command runs an arbitrary SQL command against the containerized InterSystems IRIS instance on the specified node (or all nodes), for example:
$ icm sql -command "SELECT Name,SMSGateway FROM %SYS.PhoneProviders" -role DM
Executing command in container iris on ACME-DM-TEST-0001...
...output in ./ICM-4780136574/ACME-DM-TEST/ACME-DM-TEST-0001/jdbc.out

Name,SMSGateway
AT&T Wireless,txt.att.net
Alltel,message.alltel.com
Cellular One,mobile.celloneusa.com
Nextel,messaging.nextel.com
Sprint PCS,messaging.sprintpcs.com
T-Mobile,tmomail.net
Verizon,vtext.com
The -namespace option can be used to specify the namespace in which the SQL command runs.
Because mixing output from multiple commands would be hard to interpret, when the command is executed on more than one node, the output is written to files and a list of output files is provided.
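For instance, a hedged sketch combining -namespace with a role restriction (the query against the Security.Users table is illustrative):

```shell
$ icm sql -command "SELECT Name FROM Security.Users" -namespace %SYS -role DM
```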
icm docker
The icm docker command runs a Docker command on the specified node (or all nodes), for example:
$ icm docker -command "stats --no-stream" -machine ACME-DM-TEST-0002
Executing command 'stats --no-stream' on ACME-DM-TEST-0002...
...output in ./ICM-4780136574/ACME-DM-TEST/ACME-DM-TEST-0002/docker.out

CONTAINER     CPU %  MEM USAGE / LIMIT  MEM %  NET I/O     BLOCK I/O        PIDS
3e94c3b20340  0.01%  606.9MiB/7.389GiB  8.02%  5.6B/3.5kB  464.5MB/21.79MB  0
1952342e3b6b  0.10%  22.17MiB/7.389GiB  0.29%  0B/0B       13.72MB/0B       0
d3bb3f9a756c  0.10%  40.54MiB/7.389GiB  0.54%  0B/0B       38.43MB/0B       0
46b263cb3799  0.14%  56.61MiB/7.389GiB  0.75%  0B/0B       19.32MB/231.9kB  0
The Docker command should not be long-running (or block); otherwise, control will not return to ICM. For example, if the --no-stream option in the example is removed, the call does not return until a timeout has expired.
Unprovision the Infrastructure
Because public cloud platform instances continually generate charges and unused instances in private clouds consume resources to no purpose, it is important to unprovision infrastructure in a timely manner.
The icm unprovision command deallocates the provisioned infrastructure based on the state files created during provisioning. The -stateDir option is required. As described in Provision the Infrastructure, destroy refers to the Terraform phase that deallocates the infrastructure. One line is created for each entry in the definitions file, regardless of how many nodes of that type were provisioned. Because ICM runs Terraform in multiple threads, the order in which machines are unprovisioned is not deterministic.
$ icm unprovision -stateDir /Samples/AWS/ICM-2416821167214483124 -cleanUp
Type "yes" to confirm: yes
Starting destroy of ACME-DM-TEST...
Starting destroy of ACME-AM-TEST...
Starting destroy of ACME-AR-TEST...
...completed destroy of ACME-AR-TEST
...completed destroy of ACME-AM-TEST
...completed destroy of ACME-DM-TEST
Starting destroy of ACME-TEST...
...completed destroy of ACME-TEST
The -cleanUp option deletes the state directory after unprovisioning; by default, the state directory is preserved. The icm unprovision command prompts you to confirm unprovisioning by default; you can use the -force option to avoid this, for example when using a script.
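For example, in an unattended teardown script you might combine the two options so no confirmation prompt blocks the run (the state directory path is illustrative):

```shell
$ icm unprovision -stateDir /Samples/AWS/ICM-2416821167214483124 -force -cleanUp
```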


Copyright © 1997-2019 InterSystems Corporation, Cambridge, MA
Content Date/Time: 2019-04-10 14:45:56