
Building Multiarch Images with Chainguard Images

Adrian Mouat, Staff DevRel Engineer

Until relatively recently, if you were running containers in the cloud, it was a pretty safe bet that they were running on the x86-64 architecture. That has been changing rapidly, with ARM64 now accounting for a significant share of cloud workloads. The shift has been driven primarily by the energy (and hence cost) savings typically associated with ARM processors. Cloud providers have also been developing custom ARM chips such as Google Axion and AWS Graviton, further pushing uptake. In the future, we may also see RISC-V-based chips making inroads.


Are you burning money by using the wrong architecture?

To put it more pithily: if your container image builds are x86-64-only at the moment, you might want to rethink that. This article takes you through the different options for turning a single-arch Docker build into a multi-arch one, so you can take advantage of potential cost savings with relatively few changes.


There are three options, which we will look at in turn:


  1. Using emulation (QEMU runners)

  2. Using cross-compilation

  3. Using native runners


To highlight the differences between these options, we'll be referring to a simple example project.


Example Repo


The GitHub repo images-bite-back-talk contains several Dockerfiles and GitHub Actions that we will refer to in this article. Please clone the repo if you'd like to follow along with the examples.


The base Dockerfile looks like:


# syntax=docker/dockerfile:1
FROM cgr.dev/chainguard/go:latest-dev@sha256:51fcd6edf090b06323262c56ec2957a473db04696f43c3dfb318bf832e618b88 AS builder

WORKDIR /work

COPY go.mod /work/
COPY cmd /work/cmd
COPY internal /work/internal

RUN CGO_ENABLED=0 go build -o hello ./cmd/server

FROM cgr.dev/chainguard/static:latest@sha256:1c785f2145250a80d2d71d2b026276f3358ef3543448500c72206d37ec4ece37
COPY --from=builder /work/hello /hello

ENTRYPOINT ["/hello"]

This is a multistage Dockerfile that builds a simple Go application into a static binary and copies the result into the Chainguard static base image. This approach is a best practice as it results in a minimal production image. You can read more about this approach in our Getting Started with Distroless guide. Chainguard Images are minimal, guarded images with builds for both x86-64 and ARM64, making them a great choice for multiarch builds.
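

If you've cloned the repo, you can check that the image builds and runs on your native architecture before worrying about other platforms (hello:local is just a throwaway tag for this sketch):


docker build -t hello:local .
docker run --rm hello:local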


QEMU Runners


Let's see how we can build this image for both ARM64 and x86-64, starting with QEMU. QEMU is an amazing open source project that is capable of emulating other platforms and architectures. If you're not using Docker Desktop, you may first need to register binfmt handlers for non-native builds. This can be done with:


docker run --privileged --rm tonistiigi/binfmt --install all
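
To check that emulation is registered, you can run an image for a non-native architecture and print the reported machine type; on an ARM64 machine, something like the following quick sanity check should print x86_64:


docker run --rm --platform linux/amd64 cgr.dev/chainguard/wolfi-base uname -m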

Now, you should be able to build the above Dockerfile for both ARM64 and x86-64 with:


docker build --platform linux/amd64,linux/arm64 .

Note that you will need to be using the containerd image store to save multiplatform images locally, but either way, you can still build and push to a registry.
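

For example, to build for both platforms and push the result straight to a registry in one step (the image name below is a placeholder for your own repository):


docker buildx build --platform linux/amd64,linux/arm64 -t registry.example.com/hello:latest --push .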


If you ran that build, you probably noticed something. On my M1 Mac laptop, the native ARM64 build takes roughly 8 seconds, but the emulated x86-64 build takes 34 seconds, more than four times as long. And this is for a trivial application with fewer than 30 lines of code. QEMU is a fantastically impressive project, but the high cost of emulation makes it a poor fit for recurring tasks. Running QEMU in CI/CD is effectively burning money.


So if we can't use QEMU, what can we do? Let's take a look at cross compilation.


Cross Compilation


Here, the idea is that we ask our compiler to build the binary for a different architecture than the host. We can then copy the binary onto a base image for the target architecture, with no need for emulation.


This is a little more complicated. For our example application we have cross.Dockerfile which looks like:


# syntax=docker/dockerfile:1
FROM --platform=$BUILDPLATFORM cgr.dev/chainguard/go:latest-dev@sha256:bd8bbbb8270f2bda5ab1f044dcf1f38016362f3737561fea90ed39f412e1f4cc AS builder
ARG TARGETOS
ARG TARGETARCH
WORKDIR /work

COPY go.mod /work/
COPY cmd /work/cmd
COPY internal /work/internal

RUN GOOS=${TARGETOS} GOARCH=${TARGETARCH} CGO_ENABLED=0 go build -o hello ./cmd/server

FROM cgr.dev/chainguard/static:latest@sha256:1c785f2145250a80d2d71d2b026276f3358ef3543448500c72206d37ec4ece37
COPY --from=builder /work/hello /hello 

ENTRYPOINT ["/hello"]

This build takes advantage of some build arguments that Docker defines automatically:


  • BUILDPLATFORM, which is bound to the native platform of the machine actually performing the build, regardless of any emulation.

  • TARGETOS and TARGETARCH, which are bound to the platform we are building for (i.e. the --platform argument to docker build).


By using --platform=$BUILDPLATFORM in the first FROM statement, the builder stage will always run on the native platform, regardless of the target platform. We then use the TARGETOS and TARGETARCH variables in the go build step to produce a binary for the target architecture, which can differ from the build platform. Finally, the second part of the build simply copies the binary from the build stage into the appropriate architecture-specific image. Note that no emulation occurs here, as there are no RUN statements in the second part of the build.
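

Invoking the cross-compiling build works in exactly the same way; we just point Docker at the other Dockerfile:


docker build --platform linux/amd64,linux/arm64 -f cross.Dockerfile .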


The best thing about this is the speed: on my machine there wasn't a noticeable difference between building for the two platforms. It also means cross-compiling can be more cost-efficient than native builds, depending on the relative cost of the build machines. In particular, building x86-64 images on cheaper ARM64 machines via cross-compilation can save significant money.


There is – of course – a gotcha. The above code is a trivial test program with few dependencies. In a real-life situation you are likely to require more dependencies which will need to be obtained for the target platform. For example, imagine our application needs the zlib library (this is a C library, but could be used via a foreign function interface or indirectly). We could add the following to the build stage:


RUN apk add zlib-dev

But that will install the library for the build platform, not the target platform. It turns out this is a solvable problem: we can ask apk to install packages for the target architecture and point the compiler and linker at them. This can get a little hairy, but thankfully some helper utilities exist to ease the process in the form of the xx project. xx provides a set of wrapper tools that can be copied into the build stage and used to call package managers and compilers with the correct flags for the target platform. An example of this can be found in cross-xx.Dockerfile:


# syntax=docker/dockerfile:1
# Load cross-platform helper functions
FROM --platform=$BUILDPLATFORM tonistiigi/xx AS xx

FROM --platform=$BUILDPLATFORM cgr.dev/chainguard/go:latest-dev@sha256:bd8bbbb8270f2bda5ab1f044dcf1f38016362f3737561fea90ed39f412e1f4cc AS builder
COPY --from=xx / /
RUN xx-apk add --no-cache zlib-dev
ARG TARGETOS
ARG TARGETARCH
WORKDIR /work

COPY go.mod /work/
COPY cmd /work/cmd
COPY internal /work/internal

RUN CGO_ENABLED=0 xx-go build -o hello ./cmd/server

FROM cgr.dev/chainguard/static:latest@sha256:1c785f2145250a80d2d71d2b026276f3358ef3543448500c72206d37ec4ece37
COPY --from=builder /work/hello /hello 

ENTRYPOINT ["/hello"]

The first FROM line loads the tonistiigi/xx image which contains the xx tools. These are then copied into the build image by the line:


COPY --from=xx / /

In the next line we use the xx-apk tool:


RUN xx-apk add --no-cache zlib-dev

This takes care of the magic needed to install zlib-dev for the target architecture.


Note that the Go build line has also changed to use xx-go:


RUN CGO_ENABLED=0 xx-go build -o hello ./cmd/server

This example still has CGO_ENABLED=0 as we're not actually using the zlib-dev library, but hopefully this still illustrates the usage.
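

As a rough, untested sketch: if the application really did call into zlib via cgo, the builder stage would also need a C compiler that can target the other platform. With xx, that typically means installing clang and lld for the build platform (plus any target libraries via xx-apk, as above) and letting xx-go wire up the compiler for the target, along these lines:


# clang and lld run on the build platform; xx-go points them at the target platform
RUN apk add --no-cache clang lld
RUN CGO_ENABLED=1 xx-go build -o hello ./cmd/server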


Setting up cross compilation takes a little bit of time and may not be practical (or possible) for all stacks. In these cases we are left with what's perhaps the most obvious solution: run the build on a machine of the required architecture.


Native Runners


The best thing about using native runners is that the docker build step is simple again – it normally means running the same base Dockerfile on different architectures. The difficulties lie in getting access to builders for the different architectures, and sometimes in combining the results into a single multi-arch image.


In the ideal case, both these problems are straightforward. You can use Docker Build Cloud to set up docker build instances backed by builders of the correct architecture in a few clicks (depot.dev offers a similar service). You can also set up your own build instances, which will be necessary if you need to build for platforms other than x86-64 or ARM64.
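

As an illustrative sketch of the Docker Build Cloud route (the organization and builder names here are placeholders, so check the Build Cloud documentation for your exact setup):


docker buildx create --driver cloud myorg/mybuilder
docker buildx build --builder cloud-myorg-mybuilder --platform linux/amd64,linux/arm64 -t registry.example.com/hello:latest --push .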


If you're running in a CI/CD platform such as GitHub Actions, you can call out to a remote service to do the build, but you may prefer to use the runners within the CI/CD platform itself. In that case you generally won't be able to use remote Docker build instances and will need to work with the runner architectures the platform provides. The build-and-push-runners.yaml example shows how to do a multi-platform build with GitHub Actions:


name: Multiplatform Build with Runners

jobs:
  armbuild:
    runs-on: [linux-arm-for-testing]
    outputs:
      digest: ${{ steps.build.outputs.digest }}
    steps:
      -
        name: Set up Docker Buildx
        uses: docker/setup-buildx-action@c47758b77c9736f4b2ef4073d4d51994fabfe349 # v3
      -
        name: Login to Docker Hub
        uses: docker/login-action@9780b0c442fbb1117ed29e0efdff1e18412f7567 # v3
        with:
          username: ${{ vars.DOCKERHUB_USER }}
          password: ${{ secrets.DOCKERHUB_TOKEN }}
      - 
        name: Extract metadata (tags, labels) for Docker
        id: meta
        uses: docker/metadata-action@8e5442c4ef9f78752691e2d8f8d19755c6f78e81 # v5
        with:
          images: |
            amouat/images-bite-back-runner
          tags: |
            type=raw,arm-${{ github.RUN_ID }}
          labels: |
            org.opencontainers.image.description=Images Bite Back Demo Arm Runner
      -
        id: build
        name: Build and push
        uses: docker/build-push-action@4f58ea79222b3b9dc2c8bbdd6debcef730109a75 # v6
        with:
          file: Dockerfile
          platforms: linux/arm64
          push: true
          sbom: true
          provenance: mode=max
          tags: ${{ steps.meta.outputs.tags }}
          labels: ${{ steps.meta.outputs.labels }}

  x86build:
    runs-on: [ubuntu-latest-2-cores-testing]
    outputs:
      digest: ${{ steps.build.outputs.digest }}
    steps:
      -
        name: Set up Docker Buildx
        uses: docker/setup-buildx-action@c47758b77c9736f4b2ef4073d4d51994fabfe349 # v3
      -
        name: Login to Docker Hub
        uses: docker/login-action@9780b0c442fbb1117ed29e0efdff1e18412f7567 # v3
        with:
          username: ${{ vars.DOCKERHUB_USER }}
          password: ${{ secrets.DOCKERHUB_TOKEN }}
      - 
        name: Extract metadata (tags, labels) for Docker
        id: meta
        uses: docker/metadata-action@8e5442c4ef9f78752691e2d8f8d19755c6f78e81 # v5
        with:
          images: |
            amouat/images-bite-back-runner
          tags: |
            type=raw,x86-${{ github.RUN_ID }}
          labels: |
            org.opencontainers.image.description=Images Bite Back Demo X86 Runner
      -
        id: build
        name: Build and push
        uses: docker/build-push-action@4f58ea79222b3b9dc2c8bbdd6debcef730109a75 # v6
        with:
          file: Dockerfile
          platforms: linux/amd64
          push: true
          sbom: true
          provenance: mode=max
          tags: ${{ steps.meta.outputs.tags }}
          labels: ${{ steps.meta.outputs.labels }}

  manifest:
    needs: [x86build, armbuild]
    runs-on: ubuntu-latest
    steps:
      -
        name: Login to Docker Hub
        uses: docker/login-action@9780b0c442fbb1117ed29e0efdff1e18412f7567 # v3
        with:
          username: ${{ vars.DOCKERHUB_USER }}
          password: ${{ secrets.DOCKERHUB_TOKEN }}
      - 
        name: Install crane
        uses: imjasonh/setup-crane@31b88efe9de28ae0ffa220711af4b60be9435f6e # v0.4
      -
        name: Create and Push Multi-Platform Manifest
        run: |
          X86DIGEST=$(crane digest --platform linux/amd64 amouat/images-bite-back-runner@${{ needs.x86build.outputs.digest }})
          ARMDIGEST=$(crane digest --platform linux/arm64 amouat/images-bite-back-runner@${{ needs.armbuild.outputs.digest }})
          docker manifest create amouat/images-bite-back-runner:multi-${{ github.RUN_ID }} \
            amouat/images-bite-back-runner@$X86DIGEST \
            amouat/images-bite-back-runner@$ARMDIGEST
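          # capture the digest of the pushed manifest list for later use (e.g. the signing step in the full repo example)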
          MULTIDIGEST=$(docker manifest push amouat/images-bite-back-runner:multi-${{ github.RUN_ID }})

The full example in the repo also includes code to sign images and add build attestations. This has been elided here, but is strongly recommended for any production build.


The workflow is split into three jobs: x86build, armbuild, and manifest.


The x86build and armbuild jobs are very similar (and could take advantage of reusable workflows to reduce duplication). Both use the official Docker actions to set up buildx and run the build, with the metadata step taking care of image and tag naming. The only differences between the two jobs are the runners used (ubuntu-latest-2-cores-testing and linux-arm-for-testing), the platform specified in the build step, and the image naming. Each job outputs the digest of the newly built image. As the jobs are independent, they execute in parallel.


The manifest job depends on the previous two jobs and only starts once they have completed. It takes the digests of the built images as input. These digests both point to a manifest list – a platform-independent wrapper that a client automatically resolves to the correct image for its platform (the build jobs produce one even for a single platform because they attach SBOM and provenance attestations). In this case, we want to build a new manifest list that points to the platform-specific images. To resolve the digests to the platform-specific images we use the crane tool. We then use docker manifest create to build a new manifest list and push it to the registry. Anyone pulling via this manifest list will automatically get the correct image for their platform.
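

Once pushed, it's worth confirming that the manifest list really does reference both platforms. One quick way to do so (the tag is a placeholder for the one the workflow produced):


docker buildx imagetools inspect amouat/images-bite-back-runner:multi-<RUN_ID>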


So, with a bit more work, we now have a build using native runners on GitHub Actions. While slower than cross-compilation due to the overhead of the extra jobs, it is still vastly faster than using emulation, even for our trivial application (applications with longer builds will see even greater gains).


Conclusion


This article has hopefully convinced you that you need to be thinking about multi-architecture builds, and that you should not be using emulation for anything beyond experimentation. The sweet spot for many organizations will be running ARM runners everywhere and cross-compiling for x86-64, which has the potential for considerable cost savings. When that's not possible, native builders should be used.


Please let me know if you try this out and you have any questions. Or better yet, let me know if it's saved you money and/or time! And if you are interested in learning more about Chainguard Images, please reach out.
