How to cut Docker images by 80%: Docker theory for busy engineers

GM Busy Engineers 🫡

Docker was brought to market in 2013 and has since run more than a billion workloads in production across thousands of companies. Even with all this use, my guesstimate is that only 20% of developers understand how it really works.

If you don’t know how it works under the hood, read on!

The more senior you are as an engineer, the more you need to understand how containerization works to use it effectively inside your org. This article is not a Docker explainer; it is a dive into container theory. Let’s get into it!

What is Docker?

Docker is a company that develops containerization technology. Now, while the concept of containerization is not novel, Docker innovated on developer experience and made it more accessible to the average developer.

The problem is that people develop applications on their own machines, but those applications don't necessarily run on someone else's machine because of different operating systems, different versions of dependencies, missing libraries, and so on.

When you dockerize or containerize an application, the final image includes the application code, dependencies, libraries, and the exact configuration needed to run it. It runs the same way on your machine, your coworker’s machine, and your cloud machine alike.

The two main concepts to remember are Images and Containers:

  • Docker image: A standalone package that includes everything needed to run an application - the code, runtime, system tools and libraries.

  • Docker container: A running instance of a Docker image. Containers are isolated from each other and the host system.
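To make the distinction concrete, here is a minimal sketch. The file hello.py and the image tag hello-app are just illustrative assumptions:

# Dockerfile -> describes how to build an IMAGE
FROM python:3.9-slim          # base image (a minimal Debian with Python)
WORKDIR /app                  # working directory inside the image
COPY hello.py .               # add our application code
CMD ["python", "hello.py"]    # default command when a container starts

docker build -t hello-app .   # build the image once
docker run hello-app          # start a container (a running instance) from it

You can start as many containers from one image as you like; each one is an isolated running instance.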

What is inside a Docker image?

Containerization technology today is compliant with the OCI (Open Container Initiative) specification, which defines how containers should be built and run. This means Docker, Podman, and other similar tools all work the same way at their core.

A Docker image has two main parts: the filesystem and its manifest.

Filesystem

Most engineers don’t know that a Docker image contains a fully working Unix filesystem. This includes essential directories like /bin (binaries for CLI commands), /etc (configs), /lib (system libraries), and so on. All of this depends on the Linux distribution defined in your base image, for example Ubuntu, Alpine, or Debian. The following is an example Docker image I exported to show you the underlying filesystem stored in the image.

A Docker image filesystem
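If you want to poke around yourself, you can export a container’s filesystem as a tarball and list it. A sketch, with an example image name:

docker create --name tmp python:3.9-slim    # create (but don't start) a container
docker export tmp | tar -tvf - | head -n 20 # list the first entries of its filesystem
docker rm tmp                               # clean up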

Interestingly, this filesystem is not stored in the format shown above but as gzipped layers of changes (almost like commits in Git). Every command in a Dockerfile creates a new layer, which contains only the changes relative to the previous layer.

Every command in a dockerfile creates a new layer

This approach to storing layers creates efficiency: most applications share identical base images, which means these layers can be cached, improving build times and reducing storage requirements.
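You can inspect the layers of any local image, and the Dockerfile command that produced each one, with docker history (the image name here is just an example):

docker history python:3.9-slim   # one row per layer, with its size and creating command

Layers shared with other images on the machine are stored only once.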

Manifest

An image’s manifest simply defines which layers make up the filesystem, where to find the config file (explained in the next section), and other metadata for the image.

An image manifest. Source: Dan Lorenc’s Medium
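For reference, a simplified OCI image manifest looks roughly like this (hand-written for illustration; the digests and sizes are made up):

{
  "schemaVersion": 2,
  "mediaType": "application/vnd.oci.image.manifest.v1+json",
  "config": {
    "mediaType": "application/vnd.oci.image.config.v1+json",
    "digest": "sha256:<config-digest>",
    "size": 2298
  },
  "layers": [
    {
      "mediaType": "application/vnd.oci.image.layer.v1.tar+gzip",
      "digest": "sha256:<layer-1-digest>",
      "size": 31357624
    },
    {
      "mediaType": "application/vnd.oci.image.layer.v1.tar+gzip",
      "digest": "sha256:<layer-2-digest>",
      "size": 3341
    }
  ]
}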

How is a Docker image run?

Now that we understand that an image contains multiple layers of changes to the filesystem, we can finally talk about how this is run.

OverlayFS

When a container is run (using, for example, docker run), Docker creates a filesystem using OverlayFS, which overlays all the individual layers into one unified (merged) view of the complete filesystem.

Visualization of an Overlay filesystem
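You can reproduce this trick with a plain mount command. A sketch (the directory names are placeholders, and it needs root):

# lower1, lower2 = read-only image layers; upper = writable container layer
mkdir -p lower1 lower2 upper work merged
sudo mount -t overlay overlay \
  -o lowerdir=lower2:lower1,upperdir=upper,workdir=work \
  merged
# 'merged' now shows lower1 + lower2 + upper as one filesystem;
# all writes land in 'upper', leaving the lower (image) layers untouched

This is also why a running container can “modify” files without touching the image: changes go to the writable upper layer only.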

Now that the container runner has a complete filesystem, all it needs to start a container is to know which environment variables to set, and which command and arguments to run with…

Config file

Aaaand that’s where the config file comes in handy.

{ "ociVersion": "1.0.1-dev", "process": { "terminal": false, "user": { "uid": 0, "gid": 0 }, "args": ["python", "hello.py"], "env": [ "LANG=C.UTF-8", "PYTHON_VERSION=3.9.20" ], "cwd": "/app" }, "root": { "path": "/mnt/fuse/phw", "readonly": true }, "mounts": [ { "destination": "/proc", "type": "proc", "source": "proc" }, { "destination": "/dev", "type": "tmpfs", "source": "tmpfs", "options": ["nosuid", "strictatime", "mode=755", "size=65536k"] }, { "destination": "/sys", "type": "sysfs", "source": "sysfs", "options": ["nosuid", "noexec", "nodev", "ro"] } ], "linux": { "namespaces": [ { "type": "pid" }, { "type": "ipc" }, { "type": "uts" }, { "type": "mount" } ] } }

This is an OCI-compliant config from an actual container. Note how it defines contextual runtime information like the current working directory, environment variables, and args, all of which correspond to actual Dockerfile instructions (WORKDIR, ENV, CMD).

Two other things I will gloss over are cgroups and namespaces. Cgroups are used to limit the resources a container can use, and namespaces (as in the config file above) are used to isolate the container’s resources from the host system’s.
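You rarely touch these directly: Docker exposes cgroup limits as plain flags, and you can experiment with namespaces via the unshare utility (the image name below is just an example):

# cgroups: cap this container at 512 MB of RAM and 1.5 CPUs
docker run --memory=512m --cpus=1.5 python:3.9-slim python -c 'print("hi")'

# namespaces: a shell with its own PID and mount namespaces (needs root)
sudo unshare --pid --fork --mount-proc /bin/bash
# inside, 'ps aux' shows almost nothing - the host's processes are invisible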

Here is a great explainer if this piqued your interest.

The fun part: How to cut an image’s size by 80%?

Please do try this at home (and maybe at work if you want things to get spicy). This works great for side projects and, in some cases, in production.

If you search this up on the internet, you’ll be met with a ton of mundane articles all preaching the same “use fewer layers“, “use multi-stage builds“ and “add a .dockerignore“, which is… boring 🥱. We only do crazy experiments here at theblueprint.dev.

Intuition

A Docker image contains the full filesystem, meaning there are all kinds of utilities, libraries, and miscellaneous files that never end up being needed by the particular application we run.

What if we could track exactly which files the container actually uses from our filesystem and then delete everything else? Sounds crazy, right?

Images cut by 80%+ (normal vs -minimal images)

Dynamic analysis

This is called dynamic analysis, and it is used in other areas of computer science too. A warning for the brave souls trying this in production: make sure to run your container under various conditions to catch all possible file accesses. Missing a single file could cause runtime crashes if that code path is hit later.

Implementation

Since we want to cover all possible file accesses, I recommend exercising the container with something similar to a high-coverage test suite, hitting as many scenarios as possible.

The implementation is simple at its core: it uses the Linux utility strace to log file accesses. With some basic string processing, we can extract all accessed file paths, and from there we simply delete every other file from the layers of the Docker image. The image below shows an example output.

Example output of all accessed files
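A minimal sketch of the tracing step, assuming a Python app named hello.py; the strace flags are real, but the post-processing is deliberately crude:

# log file-related syscalls from the app and all of its child processes
strace -f -e trace=file -o trace.log python hello.py

# extract the quoted absolute paths and deduplicate them
grep -oE '"/[^"]+"' trace.log | tr -d '"' | sort -u > accessed_files.txt

Everything in the image that never shows up in accessed_files.txt is a candidate for deletion.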

Final Notes

The point of this article was to shine a light on the internals of containers and how they work. The experiment I showed above is something I am actively working on as a product. Building a tool that trims a hello-world Python app is simple, but real software has tonnes of edge cases, and building an extensible, production-ready trimmer is HARD.

If you face the problem of bloated Docker images racking up network costs at work, feel free to reach out on LinkedIn (please include a connection message).