A Short Introduction to Containers

How to build and run container images for Network Security

Containers are heavily used in my Network Security class as well as in industry for cloud workloads. This document serves as an incomplete but hopefully helpful quickstart for Network Security students who need to create and run containers for class labs and assignments.


“Containers” refer to OS-level virtualization, where a single OS kernel provides services to multiple userspaces with varying degrees of isolation between them. Isolation takes the form of namespaces, where individual containers can have, e.g., a file system, PID space, users, and network interfaces that are separate from both the host system and other containers. However, this isolation is flexible and can be relaxed in a coarse fashion if desired.
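On Linux, a process's namespace memberships are visible directly under /proc, which is a quick way to make the concept concrete (assuming a Linux host):

```shell
# List the namespaces the current shell belongs to.  Each entry is a
# symlink whose target encodes an inode number; two processes in the
# same namespace (say, a container sharing the host's network) show
# the same inode for that entry.
ls -l /proc/self/ns
```

Comparing this listing for a host shell and a containerized process shows exactly which namespaces are shared and which are isolated.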

Containers have several important advantages as a virtualization and software delivery method. First, container images package software along with all of the (local) dependencies required for that software to execute, such as shared libraries and configuration files. Container run-times can also present a standard environment to containers in terms of network devices and topology or file system mounts. All of this means that containers greatly simplify application deployment, providing crucial agility especially when horizontal scalability across a large number of physical hosts is required.
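To see why bundling dependencies matters, note that even trivial programs pull in shared libraries from the host; a quick way to inspect this on a Linux host (exact output varies by distribution) is:

```shell
# Print the shared libraries that /bin/ls is dynamically linked
# against; a container image must carry equivalents of all of these
# for the program to run.
ldd /bin/ls
```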

Containers are also significantly more efficient when it comes to time and space overhead compared to traditional hypervisor-based virtualization. This directly translates to higher density and reduced deployment costs, important factors at scale.

However, containers are not without their drawbacks. Notably, malicious or compromised containers are presented with a large attack surface consisting of the entire OS kernel. While there are several technologies available to sandbox untrusted containers such as sVirt and seccomp-BPF, this surface is nevertheless orders of magnitude larger than the extremely restricted hypercall interface presented by a hardware-backed hypervisor.

There are multiple container implementations available, although Docker and the OCI stack are extremely popular and well-supported on Linux. Docker is preferred for Network Security students unless you are using a distribution that has supplanted Docker with OCI tools; in that case, substitute podman, which is mostly CLI-compatible, for docker in the examples below.
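One way to keep the examples below working on either stack is a small shell sketch like the following; the CTR variable name is just an illustration, and it assumes at least one of the two CLIs may be on PATH:

```shell
# Prefer docker, fall back to podman (which accepts the same
# subcommands used in this document); default to "docker" so that a
# sensible error appears if neither is installed.
CTR="$(command -v docker || command -v podman || echo docker)"
echo "Using container CLI: $CTR"
```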

Building Containers

Containers are built from a declarative specification known as a Dockerfile. Usually, a Dockerfile will specify a base image to build upon, files to add to the image, commands to run in the context of the image, and a default command or entrypoint. For example, assuming that we want to package a Python program as a container, we could specify the following.

# Build this image on the base image "python" published in Docker Hub
# and tagged as version "3".
FROM python:3

# Copy our source code from ./src on the host to /app on the image.
# Note the rules about file vs. directory copies in the reference
# documentation -- the trailing slash in /app/ is important!
COPY ./src /app/

# Set the default command to run.  If additional arguments are given
# when running the resulting image, those *completely replace the
# default command*.
CMD ["/app/main.py"]

# Or, set a default entrypoint.  If additional arguments are given
# when running the resulting image, those are provided as arguments
# to the entrypoint.  Replacing the entrypoint requires using the
# --entrypoint option to "docker run".
ENTRYPOINT ["/app/main.py"]

Then, assuming that we are at the top-level directory of the program, we can build an image using the following command.

# Build an image: -t tags it my_cool_image, -f optionally chooses a
# Dockerfile, and the final "." sets the build context.  (Note that
# shell comments cannot follow a trailing backslash, which is why the
# annotations are up here.)
$ docker build \
    -t my_cool_image \
    -f Dockerfile \
    .

A successful build would place my_cool_image in the local image repository.

Saving and Loading Container Images

While production images are usually published to a container registry, it is sometimes handy to save images to files and load them on different systems manually. To do so, you can do the following.

# Create an image archive
host1$ docker save -o my_cool_image.img my_cool_image
# Copy my_cool_image.img from host1 to host2, and then load it
host2$ docker load -i my_cool_image.img

Running Containers

Given that a container image is available (whether in a known online repository or locally), you can then run one or more container instances based on that image. This is an important distinction to keep in mind: one image can serve as the base for multiple run-time instances, each of which is distinct and separate from the others. This is part of why containers are so useful for horizontal scaling. That is, if you need to scale up, simply launch more instances and ensure they are fronted by a load balancer! Need to downsize? Kill some of your existing instances.
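As a sketch of that scale-out pattern (instance names are hypothetical; the DOCKER variable defaults to a dry run that prints each command, so set DOCKER=docker to actually launch instances):

```shell
# Launch three detached instances of my_cool_image; a load balancer
# would then be pointed at the set.  Dry run by default: each command
# is printed rather than executed.
DOCKER="${DOCKER:-echo docker}"
for i in 1 2 3; do
    $DOCKER run -d --name my_cool_instance_$i my_cool_image
done
```

Scaling back down is the reverse: stop and remove the instances you no longer need.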

There are a huge number of options available to control the level of isolation desired, set CPU and memory quotas, expose network ports, mount filesystems or container volumes into an instance, set environment variables, etc. Continuing the running example, the following illustrates how you might use some common options.

# Create and run a container instance:
#   --name my_cool_instance  name the instance my_cool_instance
#   -i                       attach stdin/stdout/stderr to the container
#   -t                       attach a TTY to the container
#   -p 1024:1025/tcp         forward host port 1024/tcp to instance port 1025/tcp
#   -p 1026:1027/udp         forward host port 1026/udp to instance port 1027/udp
#   -e RUST_LOG=info         set the environment variable RUST_LOG to info in the instance
#   -v /my_cool_data:/data   mount host directory /my_cool_data to container path /data
# The final image name and arguments base the instance on my_cool_image
# and pass arg1 arg2 arg3 to the container entrypoint.  Alternatively,
# replace the -p options with --network=host to expose all host network
# interfaces to the instance.
$ docker run \
    --name my_cool_instance \
    -i -t \
    -p 1024:1025/tcp \
    -p 1026:1027/udp \
    -e RUST_LOG=info \
    -v /my_cool_data:/data \
    my_cool_image \
    arg1 arg2 arg3

If successful, you should be able to see your new instance in the list of running containers by running docker ps in another terminal. Here’s a completely unrelated example.

$ docker ps
CONTAINER ID  IMAGE                COMMAND            CREATED     STATUS     PORTS    NAMES
ab0c1f1814af  build-fedora:latest  /usr/sbin/sshd -D  7 days ago  Up 7 days  22/tcp   build-fedora

If you need to destroy a container, you can stop it and remove it by referring to its unique name or hexadecimal container ID. For instance, to remove my_cool_instance, you could do the following.

# Stop an instance; wait one second for processes to terminate before
# forcibly killing them
$ docker stop -t 1 my_cool_instance
# Remove the container from the system
$ docker rm my_cool_instance


So there you have it. There is a lot more one could say about containers, their implementation, and orchestrating container-based workloads. But, hopefully this will point you in the right direction as a Network Security student when it comes to the basics of building and running containers.