
Expose OpenShift Internal Registry Externally

OpenShift ships with an internal container image registry that runs inside the cluster. By default, this registry is only accessible from within the cluster – pods can pull and push images, but external CI/CD pipelines, developer workstations, and build systems cannot reach it. Exposing the registry externally lets you push images from outside the cluster and integrate with tools that need direct registry access.

Original content from computingforgeeks.com - post 70550

This guide walks through exposing the OpenShift internal registry via a route, configuring TLS, authenticating with Podman and Docker, pushing and pulling images, setting up image pruning, sizing registry storage, and troubleshooting common access issues. These steps apply to OpenShift Container Platform 4.14+ running on any infrastructure – bare metal, VMware, or cloud. For the full upstream documentation, see the OpenShift registry exposure guide.

Prerequisites

  • A running OpenShift 4.14+ cluster with cluster-admin access
  • The oc CLI tool installed and configured to connect to the cluster
  • Podman 4.x+ or Docker installed on your workstation for image operations
  • DNS configured so *.apps.<cluster_domain> resolves to your cluster’s ingress (router) IP
  • Port 443/TCP open between your workstation and the OpenShift router
  • A persistent volume claim (PVC) bound to the registry – the registry must have storage configured before exposing it
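Before starting, you can sanity-check the DNS and port prerequisites from your workstation. A minimal sketch – the cluster domain below is a placeholder for your own:

```shell
# Placeholder cluster domain – substitute your own
CLUSTER_DOMAIN="apps.ocp.example.com"

# Any name under the wildcard should resolve to the router IP
dig +short "anything.${CLUSTER_DOMAIN}"

# Confirm port 443/TCP is reachable on the router
timeout 5 bash -c "cat < /dev/null > /dev/tcp/anything.${CLUSTER_DOMAIN}/443" \
  && echo "port 443 reachable"
```

If the dig query returns nothing or the port check times out, fix DNS and firewall rules before continuing – the registry route cannot work without them.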

Step 1: Check the Registry Operator Configuration

Before exposing the registry, confirm it is running and in a healthy state. The Image Registry Operator manages the registry lifecycle. On bare metal and some infrastructure platforms, the operator starts in Removed state and needs to be switched to Managed.

Check the current management state of the registry operator:

oc get configs.imageregistry.operator.openshift.io/cluster -o jsonpath='{.spec.managementState}'

If the output shows Managed, the registry is active and you can proceed. If it shows Removed, enable it:

oc patch configs.imageregistry.operator.openshift.io/cluster --patch '{"spec":{"managementState":"Managed"}}' --type=merge

Verify the registry pods are running in the openshift-image-registry namespace:

oc get pods -n openshift-image-registry -l docker-registry=default

You should see at least one image-registry pod in Running state:

NAME                              READY   STATUS    RESTARTS   AGE
image-registry-6f4b4db789-2wdmt   1/1     Running   0          3d

If no pods appear, check the operator logs with oc logs deployment/cluster-image-registry-operator -n openshift-image-registry to identify the issue – typically missing storage configuration.
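If you have just switched the operator from Removed to Managed, the registry deployment can take a minute to roll out. A short wait command (assuming the standard image-registry deployment name) avoids racing ahead:

```shell
# Block until the registry deployment reports Available (up to 5 minutes)
oc wait --for=condition=Available deployment/image-registry \
  -n openshift-image-registry --timeout=300s
```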

Step 2: Expose the Internal Registry with a Default Route

OpenShift provides a built-in mechanism to expose the registry through the cluster’s default ingress router. This creates a route with automatic TLS termination using the router’s wildcard certificate.

Enable the default route by patching the Image Registry Operator config:

oc patch configs.imageregistry.operator.openshift.io/cluster --patch '{"spec":{"defaultRoute":true}}' --type=merge

The operator confirms the patch was applied:

config.imageregistry.operator.openshift.io/cluster patched

Verify the route was created in the openshift-image-registry namespace:

oc get route -n openshift-image-registry

The output shows the auto-generated route with reencrypt TLS termination:

NAME            HOST/PORT                                                    PATH   SERVICES         PORT    TERMINATION   WILDCARD
default-route   default-route-openshift-image-registry.apps.ocp.example.com          image-registry   <all>   reencrypt     None

The route hostname follows the pattern default-route-openshift-image-registry.apps.<cluster_domain>. Save this hostname for later use:

HOST=$(oc get route default-route -n openshift-image-registry --template='{{ .spec.host }}')

Confirm you can reach the registry endpoint from your workstation:

curl -kIs https://$HOST/healthz

A healthy registry returns HTTP 200:

HTTP/2 200
content-type: application/json; charset=utf-8
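The commands in this step are easy to combine into a repeatable script for cluster setup. A minimal sketch that applies the patch, waits for the operator to populate the route host, and probes the health endpoint:

```shell
#!/usr/bin/env bash
set -euo pipefail

# Enable the default registry route (idempotent – re-patching is harmless)
oc patch configs.imageregistry.operator.openshift.io/cluster \
  --type=merge --patch '{"spec":{"defaultRoute":true}}'

# Wait until the operator populates the route host
HOST=""
until [ -n "$HOST" ]; do
  HOST=$(oc get route default-route -n openshift-image-registry \
    --template='{{ .spec.host }}' 2>/dev/null) || true
  [ -n "$HOST" ] || sleep 5
done

# Probe the health endpoint (-k accepts the wildcard certificate)
curl -kIs "https://${HOST}/healthz" | head -n1
```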

Step 3: Configure a Custom TLS Certificate for the Registry Route

The default route uses the cluster’s wildcard certificate. For production environments, you may want a dedicated certificate signed by your organization’s CA or a public CA like Let’s Encrypt. This eliminates the need for --tls-verify=false when clients connect.

If you have a custom certificate and key, create a TLS secret in the openshift-image-registry namespace:

oc create secret tls registry-tls --cert=/path/to/tls.crt --key=/path/to/tls.key -n openshift-image-registry

Then patch the registry operator to use a custom route with the TLS secret instead of the default route:

oc patch configs.imageregistry.operator.openshift.io/cluster --type=merge --patch '{"spec":{"defaultRoute":false,"routes":[{"name":"custom-registry-route","hostname":"registry.apps.ocp.example.com","secretName":"registry-tls"}]}}'

Verify the custom route appears:

oc get route -n openshift-image-registry

The output should show your custom hostname:

NAME                      HOST/PORT                          PATH   SERVICES         PORT    TERMINATION   WILDCARD
custom-registry-route     registry.apps.ocp.example.com             image-registry   <all>   reencrypt     None

If you prefer to keep the default route and your cluster’s wildcard certificate is already trusted by your clients, you can skip this step entirely – the default route works well for development and internal environments.
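If you want to exercise the custom-route flow before requesting a real certificate, you can generate a self-signed one with openssl. This is for testing only, and the hostname is a placeholder – production routes should use a CA-signed certificate:

```shell
# Placeholder hostname – must match the custom route hostname
REGISTRY_HOST="registry.apps.ocp.example.com"

# Generate a self-signed certificate and key (testing only)
openssl req -x509 -newkey rsa:4096 -sha256 -days 365 -nodes \
  -keyout tls.key -out tls.crt \
  -subj "/CN=${REGISTRY_HOST}" \
  -addext "subjectAltName=DNS:${REGISTRY_HOST}"

# Inspect the SAN to confirm it matches the route hostname
openssl x509 -in tls.crt -noout -ext subjectAltName
```

The resulting tls.crt and tls.key can be fed to the oc create secret tls command above. Clients will still need --tls-verify=false (or the certificate added to their trust store) since it is self-signed.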

Step 4: Log in to the Registry with Podman or Docker

External clients authenticate to the OpenShift registry using an OAuth token. First, log in to the cluster with oc to obtain a token, then pass it to your container tool. If you have an existing OpenShift user configured with HTPasswd, use those credentials.

Log in to the OpenShift cluster:

oc login https://api.ocp.example.com:6443 -u admin

Get the registry route hostname if you have not already:

HOST=$(oc get route default-route -n openshift-image-registry --template='{{ .spec.host }}')

Log in to the registry with Podman using the OAuth token:

podman login -u $(oc whoami) -p $(oc whoami -t) $HOST

If you are using a self-signed or untrusted certificate, add --tls-verify=false:

podman login -u $(oc whoami) -p $(oc whoami -t) --tls-verify=false $HOST

A successful login shows:

Login Succeeded!

For Docker, the process is the same:

docker login -u $(oc whoami) -p $(oc whoami -t) $HOST

The OAuth token obtained from oc whoami -t is short-lived (24 hours by default), so it expires soon after your oc session. For long-running CI/CD pipelines, create a service account with the registry-editor or registry-viewer role and use its token instead:

oc create sa ci-pipeline -n myproject
oc adm policy add-role-to-user registry-editor -z ci-pipeline -n myproject
oc create token ci-pipeline -n myproject --duration=87600h

Use the generated token with podman login -u ci-pipeline -p <token> $HOST in your pipeline. Note that the API server may cap the requested token lifetime, so check the actual expiry of the issued token.
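In a pipeline it is better to keep the credentials in a dedicated auth file than to pass the token on every command. A sketch using Podman's --authfile option – the host and image name are placeholders, and the token is the one issued for the ci-pipeline service account above:

```shell
# Placeholder values – substitute your route host and service account token
HOST="default-route-openshift-image-registry.apps.ocp.example.com"
SA_TOKEN="<token>"

# Write credentials to a pipeline-owned authfile instead of the default location
podman login -u ci-pipeline -p "$SA_TOKEN" --authfile ./registry-auth.json "$HOST"

# Subsequent pushes reference the same authfile
podman push --authfile ./registry-auth.json "$HOST/myproject/app:latest"
```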

Step 5: Push Images to the OpenShift Internal Registry

With authentication in place, you can push container images from your workstation to the OpenShift internal registry. Images are stored under a project (namespace) path. You need the edit or admin role in the target project to push images.

First, create a project if you do not have one:

oc new-project myapp

Pull a test image, tag it for the OpenShift registry, and push it. The registry path format is $HOST/<project>/<image>:<tag>:

podman pull docker.io/library/nginx:alpine

Tag the image for the OpenShift registry under your project namespace:

podman tag docker.io/library/nginx:alpine $HOST/myapp/nginx:alpine

Push the image to the registry:

podman push $HOST/myapp/nginx:alpine --tls-verify=false

After the push completes, OpenShift automatically creates an ImageStream in the target project. Verify it:

oc get imagestream -n myapp

The image stream appears with the pushed tag:

NAME    IMAGE REPOSITORY                                                        TAGS     UPDATED
nginx   default-route-openshift-image-registry.apps.ocp.example.com/myapp/nginx   alpine   2 seconds ago

You can now reference this image in your deployments using the internal registry URL image-registry.openshift-image-registry.svc:5000/myapp/nginx:alpine – pods within the cluster use the internal service address, not the external route.
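To confirm the pushed image is usable in-cluster, a quick test deployment referencing the internal service address (the deployment name here is arbitrary):

```shell
# Pods pull via the internal service address, not the external route
oc create deployment nginx-demo \
  --image=image-registry.openshift-image-registry.svc:5000/myapp/nginx:alpine \
  -n myapp

# Wait for the rollout to succeed
oc rollout status deployment/nginx-demo -n myapp
```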

Step 6: Pull Images from the Registry Externally

To pull images from outside the cluster, you need the registry-viewer role in the target project. This is useful for inspecting images or pulling them to a different environment.

Grant view access to a user or service account if needed:

oc adm policy add-role-to-user registry-viewer developer -n myapp

Pull the image from the external route:

podman pull $HOST/myapp/nginx:alpine --tls-verify=false

List the pulled image to confirm:

podman images | grep nginx

The output shows the image pulled from the OpenShift registry:

REPOSITORY                                                                TAG     IMAGE ID      CREATED       SIZE
default-route-openshift-image-registry.apps.ocp.example.com/myapp/nginx   alpine  a8758716bb6a  2 weeks ago   43.2 MB

For cross-cluster image sharing, you can also configure registry pull secrets in other Kubernetes or OpenShift clusters to pull directly from this registry.
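On the other cluster, that pull-secret setup looks roughly like this, reusing the service account and token from Step 4 (hostname and namespace are placeholders):

```shell
# Create a pull secret holding the route credentials
oc create secret docker-registry ocp-registry-pull \
  --docker-server=default-route-openshift-image-registry.apps.ocp.example.com \
  --docker-username=ci-pipeline \
  --docker-password="<token>" \
  -n myapp

# Link it to the default service account so pods in the namespace can pull
oc secrets link default ocp-registry-pull --for=pull -n myapp
```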

Step 7: Configure Image Pruning

As you push images over time, old and unused image layers accumulate and consume storage. OpenShift includes an automatic image pruner that runs as a CronJob. Configure it to keep your registry storage under control.

Check the current pruner configuration:

oc get imagepruner cluster -o yaml

The default pruner retains 3 tag revisions and protects images younger than 60 minutes from pruning. To adjust these settings – for example, keeping 5 tag revisions, pruning only images older than 24 hours, and running nightly at 02:00:

oc patch imagepruner cluster --type=merge --patch '{"spec":{"keepTagRevisions":5,"keepYoungerThanDuration":"24h","schedule":"0 2 * * *","suspend":false}}'

Verify the pruner CronJob exists and is scheduled:

oc get cronjob -n openshift-image-registry

The CronJob should show with the schedule you configured:

NAME            SCHEDULE    SUSPEND   ACTIVE   LAST SCHEDULE   AGE
image-pruner    0 2 * * *   False     0        12h             30d

To run a manual prune immediately:

oc create job --from=cronjob/image-pruner manual-prune -n openshift-image-registry

Check the job status to confirm the prune completed:

oc get jobs -n openshift-image-registry -l job-name=manual-prune
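To confirm the manual prune actually finished and see what it removed, you can wait on the job and read its logs:

```shell
# Block until the manual prune job completes (up to 10 minutes)
oc wait --for=condition=Complete job/manual-prune \
  -n openshift-image-registry --timeout=600s

# Inspect what the pruner deleted
oc logs job/manual-prune -n openshift-image-registry --tail=20
```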

Step 8: Registry Storage and PVC Sizing

The internal registry requires persistent storage. Without a PVC, the registry uses emptyDir and loses all images when the pod restarts. For production, always configure a PVC with enough capacity for your image workload.

Check the current storage configuration:

oc get configs.imageregistry.operator.openshift.io/cluster -o jsonpath='{.spec.storage}' | python3 -m json.tool

If storage shows emptyDir, configure a PVC. First, ensure a StorageClass is available:

oc get storageclass

Patch the registry to use PVC storage. The claim name is auto-generated if left empty:

oc patch configs.imageregistry.operator.openshift.io/cluster --type=merge --patch '{"spec":{"storage":{"pvc":{"claim":""}}}}'

Verify the PVC was created and is bound:

oc get pvc -n openshift-image-registry

A healthy PVC shows Bound status with the requested capacity:

NAME                     STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
image-registry-storage   Bound    pvc-a07963ea-2b23-477f-936d-4f8f674de9a5   100Gi      RWX            cephfs         30d

For PVC sizing, consider these guidelines:

  • Small teams (under 20 developers) – 100Gi handles most workloads with regular pruning
  • Medium teams (20-100 developers) – 250-500Gi for active CI/CD with frequent builds
  • Large teams or monorepos – 1Ti+ if you build many large images or rarely prune
  • Access mode – use RWX (ReadWriteMany) if running multiple registry replicas for high availability, RWO (ReadWriteOnce) for single replica

Monitor storage usage to avoid filling the PVC, which causes push failures:

oc exec -n openshift-image-registry deployment/image-registry -- df -h /registry
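A simple threshold check can turn that df output into an alert for a cron job or monitoring script. The parsing logic below is shown against canned sample output; in practice, pipe in the oc exec ... df -h /registry command above. The function name and threshold are illustrative:

```shell
# Warn when registry volume usage (df Use% column) crosses a threshold
check_usage() {
  local threshold=$1
  local pct
  # Take the Use% column of the last df line and strip the % sign
  pct=$(awk 'END {gsub("%","",$5); print $5}')
  if [ "$pct" -ge "$threshold" ]; then
    echo "WARNING: registry volume at ${pct}% (threshold ${threshold}%)"
  else
    echo "OK: registry volume at ${pct}%"
  fi
}

# Example with canned df output:
printf 'Filesystem Size Used Avail Use%% Mounted\n/dev/sda1 100G 85G 15G 85%% /registry\n' \
  | check_usage 80
# → WARNING: registry volume at 85% (threshold 80%)
```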

Step 9: Troubleshoot Registry Access Issues

Several common issues can block external access to the registry. Here are the most frequent problems and their fixes.

x509: certificate signed by unknown authority

This error means your client does not trust the certificate used by the route. Extract the CA certificate from the cluster and add it to your system trust store:

oc get secret router-ca -n openshift-ingress-operator -o jsonpath='{.data.tls\.crt}' | base64 -d > /tmp/openshift-ca.crt

On Linux, copy the CA to the system trust store and update:

sudo cp /tmp/openshift-ca.crt /etc/pki/ca-trust/source/anchors/
sudo update-ca-trust

On macOS, add the certificate to the Keychain:

sudo security add-trusted-cert -d -r trustRoot -k /Library/Keychains/System.keychain /tmp/openshift-ca.crt

After updating the trust store, retry the login without --tls-verify=false.
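If you prefer not to modify the system-wide trust store, Podman and other containers tools also honor per-registry certificates placed under /etc/containers/certs.d, keyed by the registry hostname:

```shell
# Placeholder – use your actual registry route hostname
HOST="default-route-openshift-image-registry.apps.ocp.example.com"

# Per-registry CA location checked by Podman/Buildah/Skopeo
sudo mkdir -p "/etc/containers/certs.d/${HOST}"
sudo cp /tmp/openshift-ca.crt "/etc/containers/certs.d/${HOST}/ca.crt"
```

This scopes the trust to the registry only, which is often preferable on shared workstations.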

401 Unauthorized or authentication required

This means the OAuth token is expired or the user lacks permissions. Re-authenticate with oc login to get a fresh token, then run podman login again. Check that the user has the correct role in the target namespace:

oc get rolebinding -n myapp | grep registry

If no bindings exist, grant the appropriate role:

oc adm policy add-role-to-user registry-editor developer -n myapp

Push fails with 500 Internal Server Error

This usually indicates a storage issue – the PVC is full or the storage backend is unhealthy. Check the registry pod logs and storage usage:

oc logs deployment/image-registry -n openshift-image-registry --tail=50

Check available storage on the PVC:

oc exec -n openshift-image-registry deployment/image-registry -- df -h /registry

If storage is full, run an immediate prune (see Step 7) or expand the PVC if your StorageClass supports volume expansion.
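Expanding the PVC is a single patch, assuming the StorageClass sets allowVolumeExpansion: true (the new size below is an example):

```shell
# Request a larger size on the existing registry PVC
oc patch pvc image-registry-storage -n openshift-image-registry \
  --type=merge --patch '{"spec":{"resources":{"requests":{"storage":"200Gi"}}}}'

# Watch until the new capacity is reflected
oc get pvc image-registry-storage -n openshift-image-registry
```

Shrinking a PVC is not supported, so if the volume cannot be expanded, pruning is the only way to reclaim space.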

Route not resolving or connection timeout

Verify DNS resolves the route hostname to your cluster’s ingress IP:

dig +short default-route-openshift-image-registry.apps.ocp.example.com

If DNS does not resolve, add a wildcard DNS entry for *.apps.<cluster_domain> pointing to the router’s external IP. Also confirm port 443 is reachable:

curl -kIs https://default-route-openshift-image-registry.apps.ocp.example.com/healthz

If you are behind a corporate firewall, ensure port 443/TCP is allowed to the OpenShift cluster ingress controller. For environments using Project Quay as an external registry, the internal registry exposure may not be needed.

Conclusion

You now have the OpenShift internal registry exposed externally through a route, with TLS encryption, and your clients authenticated via OAuth tokens. External CI/CD systems and developer workstations can push and pull images directly to the cluster registry without needing an external registry service.

For production hardening, use a trusted TLS certificate, configure image pruning to prevent storage exhaustion, set up role-based access per project to limit who can push images, and monitor registry pod health and PVC usage through your cluster monitoring stack. See the OpenShift Image Registry Operator documentation for advanced configuration options including replica scaling and S3-compatible storage backends.

