Red Hat Advanced Cluster Management (ACM) for Kubernetes is a multi-cluster management platform that gives you a single control plane to manage, observe, and govern your OpenShift and Kubernetes clusters across hybrid and multi-cloud environments. It handles cluster lifecycle management, application deployment, security policies, and observability from one unified console.
This guide walks through installing Red Hat ACM 2.13 on an OpenShift 4.14+ cluster using the OperatorHub, creating a MultiClusterHub instance, importing existing clusters, setting up governance policies, managing application lifecycles, and configuring observability with Thanos and Grafana.
Prerequisites
Before you begin, make sure you have the following in place:
- An OpenShift Container Platform 4.14 or later cluster (hub cluster) with cluster-admin access
- At least 3 worker nodes with 8 GB RAM and 4 vCPUs each on the hub cluster
- The oc CLI tool installed and authenticated to your hub cluster
- A valid Red Hat subscription that includes Advanced Cluster Management
- Network connectivity between the hub cluster and any managed clusters you plan to import
- Persistent storage provisioner configured on the hub cluster (for observability components)
Step 1: Install the ACM Operator from OperatorHub
ACM is distributed as an Operator on OpenShift. You install it through the OperatorHub, which handles dependency resolution and lifecycle management automatically. Start by creating a dedicated namespace for ACM components.
Create the open-cluster-management namespace:
oc create namespace open-cluster-management
The namespace is created immediately. Next, create an OperatorGroup in this namespace so the Operator can watch for resources:
cat << 'EOF' | oc apply -f -
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: acm-operator-group
  namespace: open-cluster-management
spec:
  targetNamespaces:
  - open-cluster-management
EOF
Now create the Subscription resource to install the ACM Operator from the Red Hat catalog:
cat << 'EOF' | oc apply -f -
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: acm-operator-subscription
  namespace: open-cluster-management
spec:
  sourceNamespace: openshift-marketplace
  source: redhat-operators
  channel: release-2.13
  installPlanApproval: Automatic
  name: advanced-cluster-management
EOF
Wait for the Operator to install. This takes a few minutes as OpenShift pulls the Operator images and creates the necessary resources. Check the installation status:
oc get csv -n open-cluster-management
The output shows the ClusterServiceVersion reaching the Succeeded phase when the Operator is ready:
NAME                                  DISPLAY                                      VERSION   REPLACES   PHASE
advanced-cluster-management.v2.13.0   Advanced Cluster Management for Kubernetes   2.13.0               Succeeded
If the phase shows Installing, wait a minute and check again. The Operator pods should also be running in the namespace:
oc get pods -n open-cluster-management
You should see the multiclusterhub-operator pod in a Running state:
NAME                                       READY   STATUS    RESTARTS   AGE
multiclusterhub-operator-7d5c8f5b9-k4xrq   1/1     Running   0          2m
Step 2: Create the MultiClusterHub Instance
The Operator itself does not deploy ACM's management components - you need to create a MultiClusterHub custom resource to trigger the full deployment. This resource tells the Operator to deploy the hub services including the console, cluster manager, governance framework, and application lifecycle components.
cat << 'EOF' | oc apply -f -
apiVersion: operator.open-cluster-management.io/v1
kind: MultiClusterHub
metadata:
  name: multiclusterhub
  namespace: open-cluster-management
spec: {}
EOF
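The empty spec installs everything with default settings. On smaller lab clusters you can trim the footprint by lowering the availability tier; this sketch assumes the documented availabilityConfig field, and whether Basic mode is appropriate depends on your sizing requirements:

```yaml
apiVersion: operator.open-cluster-management.io/v1
kind: MultiClusterHub
metadata:
  name: multiclusterhub
  namespace: open-cluster-management
spec:
  # Basic runs a single replica per component instead of HA pairs;
  # the default (High) is recommended for production hubs.
  availabilityConfig: Basic
```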
The MultiClusterHub deployment takes 10-15 minutes. It creates dozens of pods across multiple components. Monitor the progress:
oc get multiclusterhub -n open-cluster-management -w
Watch for the status to transition from Installing to Running:
NAME              STATUS    AGE
multiclusterhub   Running   12m
Once the status shows Running, verify all pods in the namespace are healthy:
oc get pods -n open-cluster-management | grep -v Running | grep -v Completed
If this returns only the header line, all pods are either Running or Completed, which is the expected state. Any pods stuck in CrashLoopBackOff or Pending indicate resource issues on your hub cluster.
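For scripting this health check, the filter can be wrapped in a small helper. The function name unhealthy_pods is hypothetical, and the awk field positions assume the default oc get pods column layout:

```shell
# Print names of pods that are neither Running nor Completed.
# Assumes the default `oc get pods` columns: NAME READY STATUS RESTARTS AGE.
unhealthy_pods() {
  tail -n +2 | awk '$3 != "Running" && $3 != "Completed" { print $1 }'
}

# Usage against the hub (requires a live cluster):
#   oc get pods -n open-cluster-management | unhealthy_pods
```

An empty result means every pod is in an expected state; any names printed are the pods to investigate.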
Step 3: Access the ACM Console
Since ACM 2.5, the ACM console is integrated directly into the OpenShift web console rather than exposed as a separate multicloud-console route. To access it, open the OpenShift console, click the cluster switcher dropdown in the top navigation bar (it reads local-cluster by default), and select All Clusters to enter the ACM perspective. You can also reach it directly by appending the ACM path to the console URL. Retrieve the console hostname:
oc get route console -n openshift-console -o jsonpath='{.spec.host}'
The output returns the hostname, for example:
console-openshift-console.apps.ocp.example.com
Open https://console-openshift-console.apps.ocp.example.com/multicloud in your browser and log in with your OpenShift cluster-admin credentials. The ACM dashboard shows an overview of all managed clusters, compliance status, and application deployments.
Step 4: Import Existing Clusters into ACM
With ACM running on your hub cluster, you can import existing OpenShift or Kubernetes clusters as managed clusters. Importing a cluster installs a klusterlet agent on the target cluster that communicates back to the hub. This works for any cluster with network access to the hub - including clusters in different clouds or on-premises.
Create a ManagedCluster resource on the hub to start the import process:
cat << 'EOF' | oc apply -f -
apiVersion: cluster.open-cluster-management.io/v1
kind: ManagedCluster
metadata:
  name: cluster-prod-east
  labels:
    cloud: Amazon
    vendor: OpenShift
    environment: production
spec:
  hubAcceptsClient: true
EOF
Labels on the ManagedCluster resource are important - they drive policy placement and application targeting later. Use labels like environment, cloud, vendor, and region to organize your clusters.
Next, create the KlusterletAddonConfig to define which add-ons run on the managed cluster:
cat << 'EOF' | oc apply -f -
apiVersion: agent.open-cluster-management.io/v1
kind: KlusterletAddonConfig
metadata:
  name: cluster-prod-east
  namespace: cluster-prod-east
spec:
  clusterName: cluster-prod-east
  clusterNamespace: cluster-prod-east
  applicationManager:
    enabled: true
  certPolicyController:
    enabled: true
  iamPolicyController:
    enabled: true
  policyController:
    enabled: true
  searchCollector:
    enabled: true
EOF
Now generate the import command that you run on the target cluster. Extract it from the secret that ACM creates:
oc get secret cluster-prod-east-import -n cluster-prod-east -o jsonpath='{.data.crds\.yaml}' | base64 -d > /tmp/klusterlet-crd.yaml
oc get secret cluster-prod-east-import -n cluster-prod-east -o jsonpath='{.data.import\.yaml}' | base64 -d > /tmp/klusterlet-import.yaml
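The secret's data fields are base64-encoded YAML documents, which is why the commands above pipe through base64 -d. A self-contained illustration of the decode step, using inline sample data rather than a real import secret:

```shell
# Decode a base64-encoded manifest fragment (inline sample data,
# not taken from a real import secret):
printf 'YXBpVmVyc2lvbjogdjE=' | base64 -d
# → apiVersion: v1
```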
Copy these two files to the target cluster and apply them. On the target cluster, run:
oc apply -f /tmp/klusterlet-crd.yaml
oc apply -f /tmp/klusterlet-import.yaml
After applying, the klusterlet agent on the target cluster establishes a connection back to the hub. Verify the import on the hub cluster:
oc get managedcluster cluster-prod-east
The cluster should show True under the JOINED and AVAILABLE columns:
NAME                HUB ACCEPTED   MANAGED CLUSTER URLS   JOINED   AVAILABLE   AGE
cluster-prod-east   true                                  True     True        5m
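If you have a kubeconfig for the target cluster, ACM can also complete the import entirely from the hub side: creating a secret named auto-import-secret in the cluster namespace lets the hub apply the klusterlet manifests itself. A sketch following ACM's documented auto-import flow, with the kubeconfig content elided:

```yaml
# Hub-side auto-import: ACM consumes this secret to install the
# klusterlet, then deletes it once the import succeeds.
apiVersion: v1
kind: Secret
metadata:
  name: auto-import-secret
  namespace: cluster-prod-east
type: Opaque
stringData:
  autoImportRetry: "5"
  kubeconfig: |
    # paste the target cluster's kubeconfig here
```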
Step 5: Create Managed Clusters with ACM
Beyond importing existing clusters, ACM can provision new OpenShift clusters directly on supported cloud providers - AWS, Azure, GCP, VMware vSphere, and bare metal. The cluster creation workflow uses Hive under the hood, which manages the OpenShift installer and cluster lifecycle.
To create a cluster on AWS, you first need to store your cloud credentials as a secret. Create a credentials secret:
cat << 'EOF' | oc apply -f -
apiVersion: v1
kind: Secret
metadata:
  name: aws-credentials
  namespace: open-cluster-management
type: Opaque
stringData:
  aws_access_key_id: YOUR_ACCESS_KEY
  aws_secret_access_key: YOUR_SECRET_KEY
  baseDomain: example.com
  pullSecret: 'YOUR_PULL_SECRET'
  ssh-publickey: 'ssh-rsa YOUR_SSH_KEY'
EOF
Then create a ClusterDeployment along with the required ManagedCluster and KlusterletAddonConfig. The easiest path is through the ACM console - navigate to Infrastructure > Clusters > Create cluster, select your provider, and fill in the cluster details. The console generates all the required YAML resources and handles the provisioning workflow.
For the CLI approach, you need a ClusterDeployment, a MachinePool, and an install-config secret, all created in a dedicated cluster-dev-west namespace (the referenced secrets must live in the same namespace as the ClusterDeployment). Here is the ClusterDeployment resource:
cat << 'EOF' | oc apply -f -
apiVersion: hive.openshift.io/v1
kind: ClusterDeployment
metadata:
  name: cluster-dev-west
  namespace: cluster-dev-west
spec:
  baseDomain: example.com
  clusterName: cluster-dev-west
  platform:
    aws:
      credentialsSecretRef:
        name: aws-credentials
      region: us-west-2
  provisioning:
    installConfigSecretRef:
      name: cluster-dev-west-install-config
    imageSetRef:
      name: img4.14.0-x86-64
    pullSecretRef:
      name: cluster-dev-west-pull-secret
EOF
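A MachinePool tells Hive how to manage the cluster's worker nodes after installation. A sketch against the same ClusterDeployment; the instance type and replica count here are example values, not requirements:

```yaml
apiVersion: hive.openshift.io/v1
kind: MachinePool
metadata:
  name: cluster-dev-west-worker
  namespace: cluster-dev-west
spec:
  clusterDeploymentRef:
    name: cluster-dev-west
  name: worker
  platform:
    aws:
      # Example instance type; size to your workload.
      type: m5.xlarge
  replicas: 3
```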
Monitor the cluster provisioning progress from the hub:
oc get clusterdeployment cluster-dev-west -n cluster-dev-west -w
Provisioning a new cluster typically takes 30-45 minutes. The ClusterDeployment resource tracks the progress through stages: initializing, provisioning, and installed.
Step 6: Configure Policies and Governance
ACM's governance framework lets you define policies that enforce security, configuration, and compliance standards across all managed clusters. Policies use a templating engine that evaluates conditions on target clusters and reports violations back to the hub. You can set policies to inform (alert only) or enforce (auto-remediate).
Here is a policy that checks whether namespace resource quotas exist on managed clusters:
cat << 'EOF' | oc apply -f -
apiVersion: policy.open-cluster-management.io/v1
kind: Policy
metadata:
  name: policy-namespace-quota
  namespace: open-cluster-management
  annotations:
    policy.open-cluster-management.io/standards: NIST SP 800-53
    policy.open-cluster-management.io/categories: CM Configuration Management
    policy.open-cluster-management.io/controls: CM-2 Baseline Configuration
spec:
  remediationAction: inform
  disabled: false
  policy-templates:
  - objectDefinition:
      apiVersion: policy.open-cluster-management.io/v1
      kind: ConfigurationPolicy
      metadata:
        name: check-resource-quotas
      spec:
        remediationAction: inform
        severity: medium
        object-templates:
        - complianceType: musthave
          objectDefinition:
            apiVersion: v1
            kind: ResourceQuota
            metadata:
              name: default-quota
            spec:
              hard:
                requests.cpu: "4"
                requests.memory: 8Gi
                limits.cpu: "8"
                limits.memory: 16Gi
EOF
Policies by themselves do not target clusters - you need a PlacementBinding and Placement to define which clusters the policy applies to:
cat << 'EOF' | oc apply -f -
apiVersion: cluster.open-cluster-management.io/v1beta1
kind: Placement
metadata:
  name: placement-production
  namespace: open-cluster-management
spec:
  predicates:
  - requiredClusterSelector:
      labelSelector:
        matchLabels:
          environment: production
---
apiVersion: policy.open-cluster-management.io/v1
kind: PlacementBinding
metadata:
  name: binding-policy-quota
  namespace: open-cluster-management
placementRef:
  apiGroup: cluster.open-cluster-management.io
  kind: Placement
  name: placement-production
subjects:
- apiGroup: policy.open-cluster-management.io
  kind: Policy
  name: policy-namespace-quota
EOF
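One gotcha with the Placement API: it only considers clusters belonging to ManagedClusterSets that are bound to the Placement's namespace. If the policy never schedules anywhere, bind a cluster set to the namespace first. A sketch assuming ACM's built-in global set, which contains every managed cluster:

```yaml
# The binding's name must match the cluster set it binds.
apiVersion: cluster.open-cluster-management.io/v1beta2
kind: ManagedClusterSetBinding
metadata:
  name: global
  namespace: open-cluster-management
spec:
  clusterSet: global
```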
This placement targets all clusters labeled environment: production. Check the compliance status of the policy across your clusters:
oc get policy policy-namespace-quota -n open-cluster-management -o yaml | grep -A5 status
The status section shows each cluster's compliance state - either Compliant or NonCompliant. You can view a consolidated governance dashboard in the ACM console under Governance > Policies.
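To move from reporting to auto-remediation, flip remediationAction at the Policy level, which overrides the value set inside each policy-template. A minimal patch fragment:

```yaml
# Merge-patch fragment for the Policy created above:
# ACM creates the missing ResourceQuota instead of only flagging it.
spec:
  remediationAction: enforce
```

Apply it with oc patch policy policy-namespace-quota -n open-cluster-management --type merge -p '{"spec":{"remediationAction":"enforce"}}'.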
Step 7: Application Lifecycle Management
ACM provides an application model that deploys workloads across multiple clusters from a single definition. Applications in ACM reference Git repositories, Helm charts, or object storage buckets as sources, and use placement rules to determine which clusters receive the deployment. This is similar to what ArgoCD provides but with built-in multi-cluster awareness.
Create a Channel resource that points to your Git repository:
cat << 'EOF' | oc apply -f -
apiVersion: apps.open-cluster-management.io/v1
kind: Channel
metadata:
  name: my-app-channel
  namespace: open-cluster-management
spec:
  type: Git
  pathname: https://github.com/your-org/your-app-manifests.git
EOF
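For private repositories, the Channel can reference a secret holding Git credentials via spec.secretRef. A sketch; the secret name is an example, and the user/accessToken keys follow ACM's Git channel convention:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: my-app-git-auth
  namespace: open-cluster-management
stringData:
  user: git-username
  accessToken: YOUR_TOKEN
---
apiVersion: apps.open-cluster-management.io/v1
kind: Channel
metadata:
  name: my-app-channel
  namespace: open-cluster-management
spec:
  type: Git
  pathname: https://github.com/your-org/your-app-manifests.git
  secretRef:
    name: my-app-git-auth
```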
Next, create a Subscription that references the channel and defines which branch and path to use. The Subscription lives in its own namespace, so create that first with oc create namespace my-app:
cat << 'EOF' | oc apply -f -
apiVersion: apps.open-cluster-management.io/v1
kind: Subscription
metadata:
  name: my-app-subscription
  namespace: my-app
  annotations:
    apps.open-cluster-management.io/git-branch: main
    apps.open-cluster-management.io/git-path: deploy/
  labels:
    app: my-app
spec:
  channel: open-cluster-management/my-app-channel
  placement:
    placementRef:
      kind: Placement
      name: placement-production
EOF
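Note that placementRef is resolved in the Subscription's own namespace, so a Placement named placement-production must exist in my-app, along with a cluster set binding there. A sketch of the namespace-local resources, assuming the built-in global cluster set:

```yaml
apiVersion: cluster.open-cluster-management.io/v1beta2
kind: ManagedClusterSetBinding
metadata:
  name: global
  namespace: my-app
spec:
  clusterSet: global
---
apiVersion: cluster.open-cluster-management.io/v1beta1
kind: Placement
metadata:
  name: placement-production
  namespace: my-app
spec:
  predicates:
  - requiredClusterSelector:
      labelSelector:
        matchLabels:
          environment: production
```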
The Subscription watches the Git repository and automatically deploys any Kubernetes manifests found in the deploy/ directory of the main branch to all clusters matched by the Placement rule. When you push changes to the repository, ACM detects them and rolls out updates to all target clusters.
Verify the application status:
oc get subscriptions.apps.open-cluster-management.io my-app-subscription -n my-app -o yaml | grep -A10 status
The ACM console provides a topology view under Applications that visualizes the application components and their deployment status across clusters.
Step 8: Configure Observability with Thanos and Grafana
ACM's observability stack collects metrics from all managed clusters and aggregates them on the hub using Thanos for long-term storage and Grafana for dashboards. This gives you a single-pane view of resource usage, cluster health, and application performance across your entire fleet. Observability is not enabled by default - you need to create a MultiClusterObservability resource.
First, create a secret with your object storage configuration. Thanos requires an S3-compatible bucket for metric retention. Create the open-cluster-management-observability namespace and the secret:
oc create namespace open-cluster-management-observability
Copy the pull secret from the openshift-config namespace so the observability pods can pull images:
DOCKER_CONFIG_JSON=$(oc extract secret/pull-secret -n openshift-config --to=-)
oc create secret generic multiclusterhub-operator-pull-secret \
-n open-cluster-management-observability \
--from-literal=.dockerconfigjson="$DOCKER_CONFIG_JSON" \
--type=kubernetes.io/dockerconfigjson
Create the Thanos object storage configuration secret. This example uses AWS S3:
cat << 'EOF' | oc apply -f -
apiVersion: v1
kind: Secret
metadata:
  name: thanos-object-storage
  namespace: open-cluster-management-observability
type: Opaque
stringData:
  thanos.yaml: |
    type: s3
    config:
      bucket: acm-observability-metrics
      endpoint: s3.us-east-1.amazonaws.com
      insecure: false
      access_key: YOUR_ACCESS_KEY
      secret_key: YOUR_SECRET_KEY
EOF
Now create the MultiClusterObservability resource to deploy the full observability stack:
cat << 'EOF' | oc apply -f -
apiVersion: observability.open-cluster-management.io/v1beta2
kind: MultiClusterObservability
metadata:
  name: observability
spec:
  observabilityAddonSpec: {}
  storageConfig:
    metricObjectStorage:
      name: thanos-object-storage
      key: thanos.yaml
    statefulSetSize: 10Gi
  retentionConfig:
    retentionResolutionRaw: 14d
    retentionResolution5m: 30d
    retentionResolution1h: 90d
EOF
The observability deployment takes 5-10 minutes. It deploys Thanos components (compactor, querier, receiver, store gateway), Grafana, and Alertmanager on the hub. Each managed cluster gets a metrics-collector sidecar that forwards data to the hub. Verify the deployment:
oc get pods -n open-cluster-management-observability
You should see multiple pods for each Thanos component and Grafana in Running state. Access the Grafana dashboard from the ACM console by navigating to Infrastructure > Clusters and clicking the Grafana link, or retrieve the route directly:
oc get route grafana -n open-cluster-management-observability -o jsonpath='{.spec.host}'
Grafana comes pre-loaded with dashboards for cluster health, resource utilization, and Kubernetes API server metrics across all managed clusters. You can also configure custom Grafana dashboards and alerts based on Thanos-aggregated metrics.
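To collect metrics beyond the default set, ACM watches for a ConfigMap named observability-metrics-custom-allowlist on the hub; metrics listed there are scraped from every managed cluster. A sketch, where the metric name is only an example:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: observability-metrics-custom-allowlist
  namespace: open-cluster-management-observability
data:
  metrics_list.yaml: |
    names:
      - node_memory_MemAvailable_bytes
```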
Step 9: Cluster Upgrade Management
ACM simplifies OpenShift cluster upgrades by letting you manage version updates across your fleet from the hub. You can stage upgrades, apply them to groups of clusters using labels, and monitor progress from the ACM console. The ClusterCurator resource automates pre-upgrade checks, the upgrade itself, and post-upgrade validation using Ansible hooks.
Create a ClusterCurator to orchestrate an upgrade for a managed cluster:
cat << 'EOF' | oc apply -f -
apiVersion: cluster.open-cluster-management.io/v1beta1
kind: ClusterCurator
metadata:
  name: cluster-prod-east
  namespace: cluster-prod-east
spec:
  desiredCuration: upgrade
  upgrade:
    desiredUpdate: 4.14.12
    channel: stable-4.14
    upstream: https://api.openshift.com/api/upgrades_info/v1/graph
EOF
The ClusterCurator triggers the upgrade on the target cluster and reports progress back. Monitor the upgrade status:
oc get clustercurator cluster-prod-east -n cluster-prod-east -o yaml | grep -A20 conditions
For batch upgrades across multiple clusters, use the ACM console. Navigate to Infrastructure > Clusters, select the clusters you want to upgrade, and click Actions > Upgrade. You can stage upgrades by environment - update development clusters first, validate, then proceed to staging and production.
To check each managed cluster's current OpenShift version from the hub, run:
oc get managedclusterinfo -A -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.distributionInfo.ocp.version}{"\n"}{end}'
This lists each managed cluster alongside its current OpenShift version, making it easy to identify clusters that need upgrades.
Conclusion
You now have Red Hat Advanced Cluster Management running on your OpenShift hub cluster with the ability to import clusters, provision new ones, enforce governance policies, manage application deployments, monitor fleet-wide metrics, and orchestrate upgrades. For production environments, configure RBAC to restrict ACM access by team, enable the cluster logging operator alongside observability for complete visibility, and use policy sets to group related compliance checks into manageable bundles.