Sigstore policy-controller 101
If software supply chain security is important to you, you probably already know about Sigstore’s excellent suite of software supply chain security tools. For example, perhaps you already use Cosign to sign your SBOMs, Fulcio to generate and inspect certificates, or the transparency log Rekor as your source of truth for signatures and other artifact metadata. (If not, head over to Chainguard Academy’s Sigstore course to start securing your container workloads stat!)
Incorporating these tools into your container workflows is, on its own, a big step forward in your software supply chain security practices. But they become even more powerful when paired with Sigstore’s policy-controller, a purpose-built Kubernetes admission controller designed to integrate with Cosign and the Sigstore standard. While there are many admission controllers to choose from, Sigstore’s policy-controller is unique in offering a thorough and flexible implementation for validating signatures and attestations in Kubernetes environments.
To do this, the Sigstore policy-controller makes Cosign’s verification capabilities available inside Kubernetes clusters, enabling you to declaratively configure policies that define which containers are and are not allowed to run.
For example, you may want to:
Only allow containers that come from registries that are on your allow-list
Only allow containers that have been signed by your CI/CD systems
Do not allow containers that have any high severity vulnerabilities
Do not allow containers that contain a specific package
In this post, we’ll walk you through the policy-controller setup and show you how to define policies by creating custom resources. In the first half, you will create a kind cluster, install the policy-controller, and create a test image. In the second half, you will create policies that define an allow-list of trusted registries and verify that images have been signed by a trusted entity. By the end of this article, you will be ready to tackle more complex policy configurations, which we will cover in future posts.
Getting started
For these examples, we’re going to spin up a local kind cluster so that you can complete all the steps on your laptop. If you want to follow these instructions on your own cluster, with your own registry, or with the public Sigstore infrastructure, you may need some additional steps to handle authentication, registry credentials, signing keys, and so on. This post assumes you are working on a local setup, which lets you skip those steps.
Prerequisites
Familiarity with signing and creating attestations with Cosign. If you need an introduction to Cosign, check out our tutorials on Chainguard Academy.
Docker installed
kubectl installed
Go installed (the demo builds a small Go program with ko)
ko installed
kind installed
Cosign installed
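Before diving in, you can optionally sanity-check that these tools are installed; a quick round of version checks like the following should succeed (the exact version numbers on your machine will differ):
docker version --format '{{.Client.Version}}'
go version
ko version
kind version
kubectl version --client
cosign version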
Step 1 — Create a kind cluster
We are going to start by creating a local kind cluster. Let’s go!
kind create cluster --name policy-controller-demo
After this completes successfully, your cluster should be ready. Let’s check it out:
kubectl get ns
An example output from my machine is like so:
vaikas@vaikas-MBP policy-controller % kubectl get ns
NAME STATUS AGE
default Active 11s
kube-node-lease Active 12s
kube-public Active 12s
kube-system Active 12s
local-path-storage Active 8s
Step 2 — Install Policy Controller
Now we’re going to install the policy-controller onto the cluster we created above.
curl -Lo /tmp/policy-controller.yaml https://github.com/sigstore/policy-controller/releases/download/v0.7.0/policy-controller-v0.7.0.yaml
kubectl apply -f /tmp/policy-controller.yaml
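The manifest installs the policy-controller into the cosign-system namespace (the same namespace we’ll use later for the signing key secret). Before moving on, you may want to wait for its pods to become ready; a check along these lines should do it:
kubectl -n cosign-system get pods
kubectl -n cosign-system wait --for=condition=Ready pods --all --timeout=120s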
Step 3 — Create a test image
Let’s create a container image that we can play with. We are going to use ttl.sh for our temporary images, so first let’s configure ko to use it. (Note: images created in ttl.sh are available by default for 24 hours, which is more than enough time to follow this demo.)
export KO_DOCKER_REPO=ttl.sh/policy-controller-demo-$(whoami)
Next, build a container image for a simple hello world program:
pushd $(mktemp -d)
go mod init example.com/demo
cat <<EOF > main.go
package main

import "fmt"

func main() {
    fmt.Println("hello world policy-controller")
}
EOF
export demoimage=`ko publish -B example.com/demo`
echo Created image $demoimage
popd
At the end of this, you should see something like this:
2023/03/08 17:12:09 pushed blob: sha256:4a5ee0fcd8d009284fcc50837518ef3b50df57a2ef7a941340c94b72e7b8087c
2023/03/08 17:12:10 ttl.sh/policy-controller-demo-vaikas/demo:latest: digest: sha256:d4b4d0520799888721e7b0dfb1110ffb6b3684353f45aa1e87fb6b027643f7d7 size: 1072
2023/03/08 17:12:10 Published ttl.sh/policy-controller-demo-vaikas/demo@sha256:d4b4d0520799888721e7b0dfb1110ffb6b3684353f45aa1e87fb6b027643f7d7
Created image ttl.sh/policy-controller-demo-vaikas/demo@sha256:d4b4d0520799888721e7b0dfb1110ffb6b3684353f45aa1e87fb6b027643f7d7
Now, try running the container with the following command:
docker run $demoimage
Your hello world program should run, printing `hello world policy-controller` at the bottom of your output, which should be similar to this:
Unable to find image 'ttl.sh/policy-controller-demo-vaikas/demo@sha256:d4b4d0520799888721e7b0dfb1110ffb6b3684353f45aa1e87fb6b027643f7d7' locally
ttl.sh/policy-controller-demo-vaikas/demo@sha256:d4b4d0520799888721e7b0dfb1110ffb6b3684353f45aa1e87fb6b027643f7d7: Pulling from policy-controller-demo-vaikas/demo
4a5ee0fcd8d0: Pull complete
250c06f7c38e: Pull complete
93ac512b9ff1: Pull complete
Digest: sha256:d4b4d0520799888721e7b0dfb1110ffb6b3684353f45aa1e87fb6b027643f7d7
Status: Downloaded newer image for ttl.sh/policy-controller-demo-vaikas/demo@sha256:d4b4d0520799888721e7b0dfb1110ffb6b3684353f45aa1e87fb6b027643f7d7
WARNING: The requested image's platform (linux/amd64) does not match the detected host platform (linux/arm64/v8) and no specific platform was requested
hello world policy-controller
And we should be able to run it in our cluster as well:
kubectl run testimage --image=$demoimage
Once the pod completes, check its logs to confirm the program ran; you should see something similar to the following:
vaikas@vaikas-MBP policy-controller % kubectl logs testimage
hello world policy-controller
Now that you’ve successfully run the demo, you can clean up by deleting the testimage pod:
kubectl delete pods testimage
You should now have a cluster running, the policy-controller installed, and a test image available for testing out policies. In the next section, we’ll begin creating and testing policies using this test image.
Overview of ClusterImagePolicy CRD
The policies used by the policy-controller are defined by creating custom resources on your cluster that describe what is and is not allowed to run. Let’s take a quick look at what they look like and walk through an example:
apiVersion: policy.sigstore.dev/v1beta1
kind: ClusterImagePolicy
metadata:
  name: policy-controller-demo
spec:
  images:
  - glob: ttl.sh/policy-controller-demo**
  authorities:
  - name: allow-list
    static:
      action: pass
The first five lines are the standard Kubernetes resource definition for any ClusterImagePolicy (CIP); the `name` will be different for each policy you create.
In the `images` field, you’ll see a list of globs (pattern matches); an image must match at least one of them for this policy to be evaluated against it.
Once the image matches, it must then satisfy at least one of the entries in the `authorities` list.
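As an aside, the `images` list can contain more than one glob. For instance, the `images` section of a policy that also covers a hypothetical internal registry (registry.example.com is just a placeholder) could look like this:
spec:
  images:
  - glob: ttl.sh/policy-controller-demo**
  - glob: registry.example.com/**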
In this example, we will set a policy to “Only allow containers that come from registries that are on my allow-list”
Let’s try that out by creating that ClusterImagePolicy on the cluster:
cat <<EOF > /tmp/allowlist.yaml
apiVersion: policy.sigstore.dev/v1beta1
kind: ClusterImagePolicy
metadata:
  name: policy-controller-demo
spec:
  images:
  - glob: ttl.sh/policy-controller-demo**
  authorities:
  - name: allow-list
    static:
      action: pass
EOF
kubectl create -f /tmp/allowlist.yaml
You should receive output that confirms the CIP was successfully created. You can also query the Kubernetes API to see that it’s on the cluster and ready to enforce!
vaikas@vaikas-MBP policy-controller % kubectl get cip
NAME AGE
policy-controller-demo 8s
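If you’re curious, you can also dump the policy back out as YAML to see exactly what the controller stored:
kubectl get cip policy-controller-demo -o yaml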
Let’s create a namespace for our demonstration and then add a label saying that we want to enforce policies in this namespace. By default, the policy-controller is installed with ‘opt-in’ behavior, meaning that you have to explicitly label the namespaces where you want the policies to be applied. This makes the rollout safer, but you can change this behavior to ‘opt-out’.
To create a namespace and label it, run the following commands:
kubectl create ns pc-demo
kubectl label namespaces pc-demo policy.sigstore.dev/include=true
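You can confirm that the label landed on the namespace with:
kubectl get ns pc-demo --show-labels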
Now you can run the test image against that policy:
kubectl -n pc-demo run testimage --image=$demoimage
The image should run because our policy allows it. Your output should confirm the image runs by returning the text `hello world policy-controller`.
vaikas@vaikas-MBP policy-controller % kubectl -n pc-demo logs testimage
hello world policy-controller
Now that you’ve confirmed the policy works, you can clean up your pod:
kubectl -n pc-demo delete pods testimage
Let’s now investigate what happens if we run an image from another repo that is not covered by our policy. Enter the following command to run busybox:
kubectl -n pc-demo run testimagenope --image=busybox
This will result in tears (of joy): there are no policies matching the busybox image, so it’s not allowed to run. You should receive output confirming that the image is blocked.
vaikas@vaikas-MBP policy-controller % kubectl -n pc-demo run testimagenope --image=busybox
Error from server (BadRequest): admission webhook "policy.sigstore.dev" denied the request: validation failed: no matching policies: spec.containers[0].image
index.docker.io/library/busybox@sha256:c118f538365369207c12e5794c3cbfb7b042d950af590ae6c287ede74f29b7d4
Now let’s try running this image in a different namespace. Note that we previously ran the image in the namespace pc-demo with policy enforcement on. In this example, you can run the test image in your default namespace with the following command:
kubectl run testimageyup --image=busybox
After submitting this command, you should receive confirmation that the image successfully runs. Why is that?
The image successfully ran because you have not applied the policies to the default namespace where you just ran the image. ClusterImagePolicy is a cluster-scoped resource, meaning that while you define policies per cluster, you control enforcement of policies at the namespace level. When you labeled the namespace pc-demo with the `policy.sigstore.dev/include=true` label (several commands above), you turned on policy enforcement for this namespace. However, the default namespace does not have the enforcement turned on, so the busybox image was allowed to run there.
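You can see for yourself that ClusterImagePolicy is cluster-scoped by listing the API resources in its group; the NAMESPACED column should read false:
kubectl api-resources --api-group=policy.sigstore.dev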
The policy-controller is set up this way so that you don’t accidentally block containers that you haven’t yet written policies for. By being able to turn policies on and off at a namespace level, you can test and roll out enforcement gradually.
In future posts, we will discuss different ways of configuring policies at a cluster level, but for now there are two things to keep in mind:
A ClusterImagePolicy applies to the whole cluster (it is defined once, not per namespace)
The label `policy.sigstore.dev/include` must be set to `true` on a namespace for CIPs to be enforced there (removing the label, as shown below, turns enforcement back off)
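For completeness, turning enforcement back off in a namespace is simply a matter of removing the label (the trailing `-` is kubectl’s syntax for deleting a label):
kubectl label namespaces pc-demo policy.sigstore.dev/include-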
You should now know how to turn policies on at a namespace level. In the next section, we will discuss how enforcement works when there are multiple policies.
Multiple policies
What happens if an image matches multiple CIPs? It must satisfy all of them! You can also create policies with logical structures that require either all or at least one of the authorities listed in the policy. These features enable you to create powerful, customized policies for specific use cases.
Let’s take a look at how this would work by adding an additional policy to the policy we created above: “Only allow containers that have been signed by my CI/CD systems”. Since I can’t write this example to cover your personal CI/CD system, we’ll need to pretend a little to demonstrate the concept. In this example, we’ll just use ‘keys’ for signing (though note, policy-controller can also handle keyless signing).
Create a keypair (you can leave the password empty to make testing easier if you wish):
cosign generate-key-pair
Make the public key available to `policy-controller` so it can verify things have been signed by the right folks or entities.
kubectl -n cosign-system create secret generic cicd-secret --from-file=secret=./cosign.pub
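You can check that the secret is in place before wiring it into a policy:
kubectl -n cosign-system get secret cicd-secret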
Next, create a CIP that says that all images must be signed by that keypair. In the codeblock below, we are using the `cat` command to write this policy into the file `/tmp/signed-by-cicd.yaml` and using `kubectl create` to make the policy live. (Remember, you would have to adjust the policy to account for your CI/CD system.)
cat <<EOF > /tmp/signed-by-cicd.yaml
apiVersion: policy.sigstore.dev/v1alpha1
kind: ClusterImagePolicy
metadata:
  name: signed-by-cicd
spec:
  images:
  - glob: ttl.sh/policy-controller-demo**
  authorities:
  - key:
      secretRef:
        name: cicd-secret
    ctlog:
      url: https://rekor.sigstore.dev
EOF
kubectl create -f /tmp/signed-by-cicd.yaml
You can now confirm the addition of this policy by running the following command:
kubectl get cip
Your output should show that you have two policies:
NAME AGE
policy-controller-demo 3m11s
signed-by-cicd 3s
Now try to run the test image again to see the results of your additional policy:
kubectl -n pc-demo run testimage --image=$demoimage
Your test image should fail to start. Though it was previously accepted by the first policy, it does not meet the criteria of the second policy you just added. You should receive a message similar to this one:
Error from server (BadRequest): admission webhook "policy.sigstore.dev" denied the request: validation failed: failed policy: signed-by-cicd: spec.containers[0].image
ttl.sh/policy-controller-demo-vaikas/demo@sha256:d4b4d0520799888721e7b0dfb1110ffb6b3684353f45aa1e87fb6b027643f7d7 signature key validation failed for authority authority-0 for ttl.sh/policy-controller-demo-vaikas/demo@sha256:d4b4d0520799888721e7b0dfb1110ffb6b3684353f45aa1e87fb6b027643f7d7: no matching signatures:
We can fix this issue by signing our image with the following Cosign command. Remember to put in your password if you specified one.
COSIGN_PASSWORD="" cosign sign --key cosign.key --yes --allow-insecure-registry ${demoimage}
You should receive output similar to this:
vaikas@vaikas-MBP policy-controller % COSIGN_PASSWORD="" cosign sign --key cosign.key --yes --allow-insecure-registry ${demoimage}
Note that there may be personally identifiable information associated with this signed artifact.
This may include the email address associated with the account with which you authenticate.
This information will be used for signing this artifact and will be stored in public transparency logs and cannot be removed later.
By typing 'y', you attest that you grant (or have permission to grant) and agree to have this information stored permanently in transparency logs.
tlog entry created with index: 14992839
Pushing signature to: ttl.sh/policy-controller-demo-vaikas/demo
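If you’d like to double-check the signature yourself before heading back to the cluster, you can verify it locally against the public key (mirroring the flags from the signing step):
cosign verify --key cosign.pub --allow-insecure-registry ${demoimage}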
Alrighty, we should now be good to go! The image is coming from a trusted registry and has been signed by our CI/CD, so it should pass both policies. Let’s try running it again.
kubectl -n pc-demo run testimage --image=$demoimage
Your image should successfully run. 🙂 You should receive output similar to this:
vaikas@vaikas-MBP policy-controller % kubectl -n pc-demo run testimage --image=$demoimage
pod/testimage created
vaikas@vaikas-MBP policy-controller % kubectl -n pc-demo logs testimage
hello world policy-controller
Wrap Up
Hooray! You have now learned how to run the Sigstore policy-controller on a Kubernetes cluster and implement policies that set conditions for running images. In particular, you created a policy that requires that the image come from a registry on your `allow list` and a policy that requires that the image is signed by your CI/CD system.
One nice aspect of the policy-controller is the ability to add policies incrementally. We started by adding a policy that restricted our images to an `allow-list`. Once we knew that we could fulfill that policy, we added an additional policy that required signatures.
For many folks, it may be difficult or nearly impossible to implement all desired policies at once without putting your project at risk. By implementing policies one at a time, you can safely test their effects and add more as you are ready. Another option is the policy-controller’s `warn` mode, which surfaces a warning (rather than blocking the image) when an image fails a policy; a rough sketch of this appears just below. This option may suit folks who want to understand the potential effects of a policy before letting it make consequential decisions. Chainguard recently announced it is open sourcing a new policy catalog that is compatible with the Sigstore policy-controller and can be adopted incrementally to improve the security of your software supply chain. Check out this demo video to learn how to get started.
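Here is that warn-mode sketch, applied to a policy like the signed-by-cicd one from earlier. The name below is just an example, and you should check the policy-controller documentation for the exact behavior of `mode` in your version:
apiVersion: policy.sigstore.dev/v1beta1
kind: ClusterImagePolicy
metadata:
  name: signed-by-cicd-warn   # example name
spec:
  mode: warn                  # surface a warning instead of blocking the pod
  images:
  - glob: ttl.sh/policy-controller-demo**
  authorities:
  - key:
      secretRef:
        name: cicd-secret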
For detailed tutorials on policy-controller installation and setting up other policies, check out our policy-controller tutorials on Chainguard Academy. These tutorials will walk you through all the steps to implement policies on your cluster or try the policy-controller out with our interactive browser terminal. You can also follow Unchained for more posts about the policy-controller’s features and policies that you can use to keep your clusters safe.