I spent most of last week chasing image pull failures in a multi-tenant cluster. It turned out the problem was our private registry mirror. We were using it as a pull-through cache, but the credentials lived on the nodes. One team rotated their credentials and, a few minutes later, pods in three other namespaces started failing too. That was the moment it became obvious we had a shared-credentials problem.
That sent me down the rabbit hole of CRI-O’s credential provider for registry mirrors. After setting it up, I do not really want to go back.
## The problem I was dealing with
We run a private Harbor instance as a pull-through cache. It helps with egress costs, pull times, and our air-gapped setup. Nothing unusual there. The real issue is that Kubernetes does not know anything about mirrors. The mirror config lives in `/etc/containers/registries.conf` on the node, completely outside the Kubernetes API.
That means if the mirror needs authentication, the easy option is to put the credentials on the node as well. And once you do that, every pod in every namespace on that node can potentially pull with the same credentials. On a multi-tenant platform, that is hard to justify.
We also tried to make `imagePullSecrets` do the job, but they only apply to the source registry, not the mirror. When CRI-O rewrites the pull to the mirror, the kubelet does not forward those credentials.
## Enter CRI-O’s credential provider
Starting with Kubernetes 1.33 and CRI-O 1.34, there is a proper way to handle this. CRI-O ships a credential provider plugin that can read standard Kubernetes Secrets and use them for mirror authentication. The key piece is the `KubeletServiceAccountTokenForCredentialProviders` feature gate.
This is the setup that worked for me.
### Step 1: Install the credential provider binary
Download the binary from the CRI-O credential provider releases and drop it on every node:
```shell
curl -L -o /usr/local/bin/crio-credential-provider \
  https://github.com/cri-o/crio-credential-provider/releases/latest/download/crio-credential-provider-linux-amd64
chmod +x /usr/local/bin/crio-credential-provider
```
### Step 2: Configure the kubelet
Add the credential provider config to your kubelet configuration:
```yaml
# /etc/kubernetes/credential-provider.yaml
apiVersion: kubelet.config.k8s.io/v1
kind: CredentialProviderConfig
providers:
  - name: crio-credential-provider
    matchImages:
      - "mirror.internal.company.com/*"
    defaultCacheDuration: "10m"
    apiVersion: credentialprovider.kubelet.k8s.io/v1
    args:
      - --mirror-registry=mirror.internal.company.com
```
Then reference it in your kubelet args:
```shell
--image-credential-provider-config=/etc/kubernetes/credential-provider.yaml
--image-credential-provider-bin-dir=/usr/local/bin/
```
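One way to wire these in is a systemd drop-in. This is a sketch: the file path and the `KUBELET_EXTRA_ARGS` variable assume a kubeadm-style kubelet unit, so adjust it to however your nodes actually launch the kubelet:

```ini
# /etc/systemd/system/kubelet.service.d/20-credential-provider.conf
# (path and variable name assume a kubeadm-style kubelet unit)
[Service]
Environment="KUBELET_EXTRA_ARGS=--image-credential-provider-config=/etc/kubernetes/credential-provider.yaml --image-credential-provider-bin-dir=/usr/local/bin/"
```

Run `systemctl daemon-reload && systemctl restart kubelet` afterwards so the new flags take effect.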
### Step 3: Enable the feature gate

```shell
--feature-gates=KubeletServiceAccountTokenForCredentialProviders=true
```
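If your kubelet reads a KubeletConfiguration file rather than taking flags, the gate can be set there instead. A fragment to merge into your existing config, not a complete file:

```yaml
# Fragment of the kubelet's KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
featureGates:
  KubeletServiceAccountTokenForCredentialProviders: true
```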
### Step 4: Create namespace-scoped secrets
This is where it starts to feel sane again. Each team can create its own registry secret in its own namespace:
```shell
kubectl create secret docker-registry mirror-creds \
  --namespace=team-alpha \
  --docker-server=mirror.internal.company.com \
  --docker-username=team-alpha-svc \
  --docker-password='s3cret' \
  --docker-email=[email protected]
```
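Under the hood, `kubectl create secret docker-registry` just stores a base64-encoded `.dockerconfigjson` document. A small Python sketch of the payload it produces (the field layout follows Docker's config file format; the helper function name is mine):

```python
import base64
import json

def docker_config_json(server: str, username: str, password: str, email: str) -> str:
    # The `auth` field is base64("username:password"); registries accept it
    # directly as an HTTP Basic Authorization value.
    auth = base64.b64encode(f"{username}:{password}".encode()).decode()
    return json.dumps({
        "auths": {
            server: {
                "username": username,
                "password": password,
                "email": email,
                "auth": auth,
            }
        }
    })

# The same payload the kubectl command above stores (base64-encoded again)
# under the Secret's `.dockerconfigjson` key.
payload = docker_config_json(
    "mirror.internal.company.com", "team-alpha-svc", "s3cret", "[email protected]"
)
print(payload)
```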
Reference it in the service account or pod spec like you normally would:
```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: default
  namespace: team-alpha
imagePullSecrets:
  - name: mirror-creds
```
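If you would rather not modify the service account, the secret can also be referenced directly on a pod. A sketch with a hypothetical pod:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: demo-app          # hypothetical name, for illustration
  namespace: team-alpha
spec:
  imagePullSecrets:
    - name: mirror-creds
  containers:
    - name: app
      image: docker.io/library/nginx:latest  # rewritten to the mirror by CRI-O
```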
## What actually happens under the hood
When a pod requests an image, the flow looks like this:
- The kubelet sees the image reference and checks whether any credential provider matches
- CRI-O’s credential provider gets called with the mirror URL
- The provider looks up the pod’s service account and its associated imagePullSecrets
- It returns the credentials scoped to that specific namespace
- CRI-O uses those credentials to pull from the mirror
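For intuition, the exchange between the kubelet and any credential provider plugin is a small JSON protocol over stdin/stdout: a `CredentialProviderRequest` in, a `CredentialProviderResponse` out. A toy Python sketch of that exchange (the real CRI-O provider is written in Go, and the namespace-scoped Secret lookup is faked here with a plain dict):

```python
import json

def handle_request(raw_request: str, namespace_creds: dict) -> str:
    """Toy model of a kubelet credential provider plugin.

    The kubelet writes a CredentialProviderRequest JSON to the plugin's
    stdin and reads a CredentialProviderResponse from its stdout.
    `namespace_creds` stands in for the namespace-scoped Secret lookup
    the real CRI-O provider performs using the service account token.
    """
    request = json.loads(raw_request)
    image = request["image"]
    return json.dumps({
        "apiVersion": "credentialprovider.kubelet.k8s.io/v1",
        "kind": "CredentialProviderResponse",
        "cacheKeyType": "Image",   # cache per image rather than per registry
        "cacheDuration": "10m",
        "auth": {image: namespace_creds},
    })

# Simulate the kubelet's side of the exchange.
request = json.dumps({
    "apiVersion": "credentialprovider.kubelet.k8s.io/v1",
    "kind": "CredentialProviderRequest",
    "image": "mirror.internal.company.com/docker-hub/nginx:latest",
})
print(handle_request(request, {"username": "team-alpha-svc", "password": "s3cret"}))
```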
The important part is that the credentials never cross the namespace boundary. Team Alpha’s mirror credentials stay invisible to Team Beta. No more shared node-level secrets.
## Gotchas I ran into
**Cache duration matters.** I first set `defaultCacheDuration` to 1h because I wanted fewer API calls. That looked fine on paper but was annoying in practice: when someone rotated credentials, the new ones could take up to an hour to kick in. I ended up at 10m, which has been a much better balance.
**The feature gate is not optional.** Without `KubeletServiceAccountTokenForCredentialProviders`, the kubelet will not pass service account context to the credential provider. Pulls just fail with a generic auth error, which is not especially helpful if you do not already know what is missing.
**Mirror routing is still a node concern.** This solves authentication, not mirror routing. You still need `/etc/containers/registries.conf` configured on every node:
```toml
[[registry]]
prefix = "docker.io"
location = "docker.io"

  [[registry.mirror]]
  location = "mirror.internal.company.com/docker-hub"
```
**This is CRI-O specific for now.** The credential provider here comes from the CRI-O project. If you are running containerd, the shape of the solution is different: containerd has its own credential handling, but mirror-aware, namespace-scoped auth still feels less mature there.
## Was it worth it?
Absolutely. Before this change, we had one set of mirror credentials shared across 12 namespaces. A single compromised workload could have reached every team’s cached images. Now each team owns its own credentials, rotations are independent, and our compliance audits are much less interesting.
The rollout took about half a day per cluster, and we have three of them. Most of that time went into testing the feature gate rollout and checking that existing workloads kept pulling images cleanly during the migration.
If you are running private registry mirrors in a multi-tenant Kubernetes cluster, I think this is worth the effort. The security improvement is enough on its own, and the fact that teams can finally manage their own credentials without involving the node is a very nice side effect.