Container Basics

This section covers the basic elements of creating and using Docker containers.

Container Contents

In essence, a Docker container runs a single primary process, which can spawn child processes; anything that can be managed by a single blocking process (one that waits until its work is complete) can be packaged and run in a container.
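As a minimal illustration (using a generic ubuntu image, not an InterSystems image), the following command starts a container whose primary process is a single blocking sleep command; when that process exits, the container stops:

    # Run a container whose primary process is sleep; --rm removes the container on exit
    docker run --rm ubuntu:22.04 sleep 300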

A containerized application, while remaining wholly within the container, does not run fully on the host operating system (OS), nor does the container hold an entire OS for the application to run on. Instead, an application in a container runs natively on the kernel of the host system, while the container provides only the elements needed to run it and make it accessible to the required connections, services, and interfaces: a runtime environment (including a file system), the code, libraries, environment variables, and configuration files.

Because it packages only these elements and executes the application natively, a container is both very efficient and fully portable. It runs as a discrete, manageable operating system process that takes no more memory than any other executable, and by default it remains completely isolated from the host environment, accessing local files and ports only if configured to do so. At the same time, it provides standard, well-understood application configuration, behavior, and access.

The isolation of the application from the host environment is a very important element of containerization, with many significant implications. Perhaps the most important is that unless specifically configured to do so, a containerized application does not write persistent data: whatever it writes inside the container is lost when the container is removed and replaced by a new container. Because data persistence is usually a requirement for applications, arranging for data to be stored outside the container and made available to other and future containers is an important aspect of containerized application deployment.
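As a simple sketch of this behavior (the image, directories, and file names here are hypothetical), a file written inside a container's own file system disappears when the container is removed, while a file written to a mounted host directory survives:

    # A file written inside the container is lost when the container is removed
    docker run --name scratch ubuntu:22.04 bash -c "echo hello > /data.txt"
    docker rm scratch

    # A file written to a mounted host directory remains after the container is gone
    docker run --rm --volume /home/storage:/storage3 ubuntu:22.04 bash -c "echo hello > /storage3/data.txt"
    cat /home/storage/data.txt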

The Container Image

A container image is the executable package, while a container is a runtime instance of an image — that is, what the image becomes in memory when actually executed. In this sense an image and a container are like any other software that exists in executable form; the image is the executable and the container is the running software that results from executing the image.

A Docker image is defined in a Dockerfile, which begins with a base image providing the runtime environment for whatever is to be executed in the container. For example, InterSystems uses the Ubuntu operating system as a base for its InterSystems IRIS images, so the InterSystems IRIS instance in a container created from an InterSystems image is running in an Ubuntu environment. Next come specifications for everything needed to prepare for execution of the application — for example, copying or downloading files, setting environment variables, and installing the application. The final step is to define the launch of the application.
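The following sketch shows the general shape of such a Dockerfile; the base image version, application files, and launch command are hypothetical and would differ for a real application:

    # Base image providing the runtime environment
    FROM ubuntu:22.04

    # Prepare for execution: set environment variables, copy in the
    # application files, and install its dependencies
    ENV APP_HOME=/opt/myapp
    COPY ./myapp $APP_HOME
    RUN apt-get update && apt-get install -y python3

    # Define the launch of the application
    CMD ["python3", "/opt/myapp/app.py"]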

The image is created by issuing a docker build command that specifies the Dockerfile’s location. The resulting image is placed in the local image store of the host on which it is built, from which it can be pushed (copied) to image registries.
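For example, assuming the Dockerfile is in the current directory and registry.example.com is a hypothetical remote registry:

    # Build the image from the Dockerfile in the current directory
    docker build --tag myapp:1.0 .

    # Copy the image to another registry by tagging and pushing it
    docker tag myapp:1.0 registry.example.com/myapp:1.0
    docker push registry.example.com/myapp:1.0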

Running a Container

To execute a container image and create the container — that is, the image instance in memory and the kernel process that runs it — you must execute three separate Docker commands, as follows:

  1. docker pull — Downloads the image from the registry.

  2. docker create — Defines the container instance and its parameters.

  3. docker start — Starts (launches) the container.

For convenience, however, the docker run command combines these commands, which it executes in sequence, and is the typical means of creating and starting a container.
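For example, the following sequences are equivalent (the image and container names are hypothetical):

    # The three separate commands
    docker pull myapp:1.0
    docker create --name myapp myapp:1.0
    docker start myapp

    # The single equivalent command
    docker run --detach --name myapp myapp:1.0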

The docker run command has a number of options, and it is important to remember that the command that creates the container instance defines its characteristics for its operational lifetime; while a running container can be stopped and then restarted (not a typical practice in production environments), the aspects of its execution determined by the docker run command cannot be changed. For instance, a storage location can be mounted as a volume within the container with an option in the docker run command (for example, --volume /home/storage:/storage3), but the volumes mounted in this fashion in the command are fixed for that instantiation of the image; they cannot be modified or added to.
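The following hypothetical command illustrates this; the published port and the mounted volume are fixed for the lifetime of the container it creates:

    # Create and start a container, publishing a port and mounting a host
    # directory as a volume; these settings cannot be changed later
    docker run --detach --name myapp \
      --publish 8080:80 \
      --volume /home/storage:/storage3 \
      myapp:1.0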

When a containerized application is modified — for example, it is upgraded, or components are added — the existing container is removed, and a new container is created and started by instantiating a different image with the docker run command. The new container itself has no association with the previous container, but if the command creating and starting it publishes the same ports, connects to the same network, and mounts the same external storage locations, it effectively replaces the previous container. (For information about upgrading InterSystems IRIS containers, see Upgrading InterSystems IRIS Containers.)
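A sketch of such a replacement, again with hypothetical names, might look like this:

    # Remove the existing container
    docker stop myapp
    docker rm myapp

    # Create and start a new container from a newer image, publishing the same
    # port and mounting the same storage so that it replaces the old container
    docker run --detach --name myapp \
      --publish 8080:80 \
      --volume /home/storage:/storage3 \
      myapp:2.0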

Caution:

A container’s host machine must satisfy the minimum supported CPU requirement for InterSystems products. If the host machine does not meet this requirement, the InterSystems IRIS container does not start.

Important:

InterSystems does not support mounting NFS locations as external volumes in InterSystems IRIS containers, and iris-main issues a warning when you attempt to do so.

Note:

As with other UNIX® and Linux commands, options to docker commands such as docker run can be specified in their long forms, preceded by two hyphens, or their short forms, preceded by one. In this document, the long forms are used throughout for clarity, for example --volume rather than -v.
