Sharing ICM Deployments


There are a number of situations in which different users on different systems might want to use ICM to manage or interact with the same deployment. For example, one user may be responsible for provisioning the infrastructure, while another in a different location is responsible for application deployment and upgrades.
An ICM deployment, however, is defined by its input files and generates several output files. Without access to these state files in the ICM container from which the deployment was made, it is difficult for anyone to manage or monitor the deployment, including the original deployer should those files be lost.
To aid in this task, ICM can be run in distributed management mode, in which it stores the deployment’s state files on a Consul cluster for access by additional ICM containers. If distributed management mode is not used, state files can also be shared manually.
Sharing Deployments in Distributed Management Mode
ICM’s distributed management mode uses the Consul service discovery tool from Hashicorp to give multiple users in any networked locations management access to a single ICM deployment. This is done through the use of multiple ICM containers, each of which includes a Consul client clustered with one or more Consul servers storing the needed state files.
Distributed Management Mode Overview
The initial ICM container, used to provision the infrastructure, is called the primary ICM container (or just the “primary ICM”). During the provisioning phase, the primary ICM does the following:
  - Deploys a Consul server on each CN node provisioned for that purpose, forming the Consul cluster.
  - Pushes the deployment’s state files to the Consul cluster once provisioning concludes successfully.
  - Provides in its output the docker run command for creating secondary ICM containers.
When a user executes the provided docker run command, a secondary ICM container (or “secondary ICM”) is created, and an interactive container session is started in the provider-appropriate directory (for example, /Samples/GCP). The secondary ICM automatically pulls the deployment’s state files from the Consul cluster at the start of every ICM command, so it always has the latest information. This creates a container that for all intents and purposes is a duplicate of the primary ICM container, with the one exception that it cannot provision or unprovision infrastructure. All other ICM commands are valid.
Configuring Distributed Management Mode
To create the primary ICM container and the Consul cluster, do the following:
  1. Add the ConsulServers field to the defaults.json file to specify the number of Consul servers:
    "ConsulServers": "3"
    Possible values are 1, 3, and 5. A single Consul server represents a single point of failure and thus is not recommended. A five-server cluster is more reliable than a three-server cluster, but incurs greater cost.
  2. Include a CN node definition in the definitions.json file specifying at least as many CN nodes as the value of the ConsulServers field, for example:
    {
         "Role": "CN",
         "Count": "3",
         "StartCount": "7",
         "InstanceType": "t2.small"
    }
    
  3. Add the consul.sh script in the ICM container to the docker run command for the primary ICM, as follows:
    docker run --name primaryICM -it --cap-add SYS_TIME intersystems/icm:stable consul.sh
When you issue the icm provision command on the primary ICM command line, a Consul server is deployed on each CN node as it is provisioned until the specified number of servers is reached. When the command concludes successfully, the primary ICM pushes the state files to the Consul cluster, and its output includes the secondary ICM creation command. When you subsequently issue any command in the primary ICM that might alter the instances.json file, such as icm run or icm upgrade, the primary ICM pushes the new file to the Consul cluster. When you use the icm unprovision command in the primary ICM to unprovision the deployment, its state files are removed from the Consul cluster.
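For example, the overall sequence on the primary ICM command line might look like the following minimal sketch (command output omitted; icm run stands in for any command that alters instances.json):
/Samples/GCP # icm provision
/Samples/GCP # icm run
/Samples/GCP # icm unprovision
The first command deploys the Consul servers and pushes the state files, the second triggers a push of the updated instances.json, and the third removes the state files from the cluster.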
The docker run command for creating the secondary ICM, provided in the output of icm provision, includes three arguments to the consul.sh script: an encryption key securing communication with the Consul cluster, the IP address of one of the Consul servers, and the deployment’s unique identifier (GUID). For example:
docker run -it --name ICM --cap-add SYS_TIME intersystems/icm:stable consul.sh \
  qQ6MPKCH1YzTb0j9Yst33w== 104.196.151.243 47d2b28c-b978-44d3-8126-aeef1a33eb80
You can use the secondary ICM creation command as many times as you wish, in any location that has network access to the deployment.
In both primary and secondary ICM containers, the consul members command can be used to display information about the Consul cluster, for example:
/Samples/GCP # consul members
Node                                  Address               Status  Type    Build  Protocol DC  Segment
consul-ACME-CN-TEST-0002.weave.local  104.196.151.243:8301  failed  server  1.1.0  2        dc1 <all>
consul-ACME-CN-TEST-0003.weave.local  35.196.254.13:8301    alive   server  1.1.0  2        dc1 <all>
consul-ACME-CN-TEST-0004.weave.local  35.196.128.118:8301   alive   server  1.1.0  2        dc1 <all>
3be7366b4495                          172.17.0.4:8301       alive   client  1.1.0  2        dc1 <default>
e0e87449a610                          172.17.0.3:8301       alive   client  1.1.0  2        dc1 <default>
Consul containers are also included in the output of the icm ps command, as shown in the following:
/Samples/GCP # icm ps
Pulling from consul cluster...
CurrentWorkingDirectory: /Samples/GCP
...pulled from consul cluster
Machine            IP Address       Container           Status   Health   Image
-------            ----------       ---------           ------   ------   -----
ACME-DM-TEST-0001  35.227.32.29     weave               Up                weaveworks/weave:2.3.0
ACME-DM-TEST-0001  35.227.32.29     weavevolumes-2.3.0  Created           weaveworks/weaveexec:2.3.0
ACME-DM-TEST-0001  35.227.32.29     weavedb             Created           weaveworks/weavedb:latest
ACME-CN-TEST-0004  35.196.128.118   consul              Up                consul:1.1.0
ACME-CN-TEST-0004  35.196.128.118   weave               Up                weaveworks/weave:2.3.0
ACME-CN-TEST-0004  35.196.128.118   weavevolumes-2.3.0  Created           weaveworks/weaveexec:2.3.0
ACME-CN-TEST-0004  35.196.128.118   weavedb             Created           weaveworks/weavedb:latest
ACME-CN-TEST-0002  104.196.151.243  consul              Up                consul:1.1.0
ACME-CN-TEST-0002  104.196.151.243  weave               Up                weaveworks/weave:2.3.0
ACME-CN-TEST-0002  104.196.151.243  weavevolumes-2.3.0  Created           weaveworks/weaveexec:2.3.0
ACME-CN-TEST-0002  104.196.151.243  weavedb             Created           weaveworks/weavedb:latest
ACME-CN-TEST-0003  35.196.254.13    consul              Up                consul:1.1.0
ACME-CN-TEST-0003  35.196.254.13    weave               Up                weaveworks/weave:2.3.0
ACME-CN-TEST-0003  35.196.254.13    weavevolumes-2.3.0  Created           weaveworks/weaveexec:2.3.0
ACME-CN-TEST-0003  35.196.254.13    weavedb             Created           weaveworks/weavedb:latest
Note:
Because no concurrency control is applied to ICM commands, simultaneous conflicting commands issued in different ICM containers cannot all succeed; the results are based on timing and may include errors. For example, suppose two users in different containers simultaneously issue the command icm rm -machine ACME-DM-TEST-0001. One user will see this:
Removing container iris on ACME-DM-TEST-0001...
...removed container iris on ACME-DM-TEST-0001
while the other will see the following:
Removing container iris on ACME-DM-TEST-0001...
Error: No such container: iris
However, when no conflict exists, the same command can be run simultaneously without errors, for example icm rm -machine ACME-DM-TEST-0001 and icm rm -container customsensors -machine ACME-DM-TEST-0001.
Sharing Deployments Manually
This section explains how to share ICM deployments manually: which state files are required, how to access them from outside the container, and how to persist them so that an ICM-driven deployment can be shared with other users or accessed from another location.
State Files
The state files are read from and written to the current working directory, though all of them can be overridden to use a custom name and location. Input files are as follows:
  - The defaults file (defaults.json)
  - The definitions file (definitions.json)
Any security keys, InterSystems IRIS™ licenses, or other files referenced from within these configuration files should be considered input as well.
Output files are as follows:
  - The instances file (instances.json)
  - The state directory (named ICM-GUID, where GUID is the unique identifier of the deployment) and the files it contains
The layout of the files under ICM-GUID/ is as follows:
definition 0/
definition 1/
...
definition N/
Under each definition directory are the Terraform files for the corresponding node definition, including the Terraform state file.
A variety of log files, temporary files, and other files appear in this hierarchy as well, but they are not required for sharing a deployment.
Note:
For provider PreExisting, no Terraform files are generated.
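For example, after provisioning, the working directory of the primary ICM container might contain the following (a hypothetical listing; the GUID is borrowed from the earlier example and varies by deployment):
/Samples/GCP # ls
ICM-47d2b28c-b978-44d3-8126-aeef1a33eb80  defaults.json  definitions.json  instances.json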
Maintaining Immutability
InterSystems recommends that you avoid generating state files local to the ICM container, for the following reasons:
  - The container is no longer immutable; it cannot be discarded and replaced without losing the deployment’s state.
  - If the container is deleted or becomes unusable, the state files are lost with it, and with them convenient access to the deployment.
A better practice is to mount a directory from the host within the ICM container to use as your working directory; that way all changes within the container are always available on the host. This can be accomplished using the Docker --volume option when the ICM container is first created, as follows:
$ docker run -it --cap-add SYS_TIME --volume <host_path>:<container_path> <image>
Overall, you would take these steps:
  1. Stage input files on the host in host_path.
  2. Create, start, and attach to ICM container.
  3. Navigate to container_path.
  4. Issue ICM commands.
  5. Exit or detach from ICM container.
The state files (both input and output) are then present in host_path. See the sample script in Launch ICM for an example of this approach.
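The following sketch shows those steps end to end; the host directory (~/icm-state), container path (/state), and image tag are illustrative assumptions rather than required names:
$ mkdir -p ~/icm-state
$ cp defaults.json definitions.json ~/icm-state/
$ docker run -it --cap-add SYS_TIME --volume ~/icm-state:/state intersystems/icm:stable
/ # cd /state
/state # icm provision
/state # exit
After the session ends, defaults.json, definitions.json, instances.json, and the ICM-GUID state directory all remain in ~/icm-state on the host.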
Persisting State Files
Methods of preserving and sharing state files with others include:
The advantage of methods that keep the files in a shared, writable location (rather than in a static archive) is that they allow others to modify the deployment. Note however that ICM does not support simultaneous operations issued from more than one ICM container at a time, so a policy ensuring exclusive read-write access would need to be enforced.
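For example, one simple hand-off is to archive the state files and copy the archive to a location the next user can reach (a sketch; the archive name and destination are assumptions, not ICM conventions):
$ tar czf acme-deployment.tar.gz defaults.json definitions.json instances.json ICM-*
$ scp acme-deployment.tar.gz user@shared-host:/deployments/
The recipient extracts the archive into the working directory of their own ICM container and can then issue ICM commands against the deployment; because ICM applies no concurrency control, the users must agree on who has write access at any given time.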

