
Are you encountering the “error: Metrics API not available” message after setting up Metrics Server in your Kubernetes cluster? Metrics Server collects resource usage data from the pods and nodes in the cluster. The collected metrics are exposed centrally through the Kubernetes API, which allows other cluster components and users to query resource usage in the cluster. A common use case is Horizontal Pod Autoscaling.
To resolve the “error: Metrics API not available” error in Kubernetes, let’s first uninstall Metrics Server if it is already installed:
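Before reinstalling anything, it can help to confirm that the problem is really the Metrics API aggregation layer. A quick check (assuming Metrics Server registers the standard `v1beta1.metrics.k8s.io` APIService, which the default manifest does):

```shell
# Reproduce the symptom: this fails with "error: Metrics API not available"
# when the aggregated Metrics API is unreachable
kubectl top nodes

# Inspect the APIService registered by Metrics Server; the AVAILABLE column
# shows False (e.g. FailedDiscoveryCheck) when the API cannot be reached
kubectl get apiservice v1beta1.metrics.k8s.io
```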
kubectl delete -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml
Uninstallation output:
serviceaccount "metrics-server" deleted
clusterrole.rbac.authorization.k8s.io "system:aggregated-metrics-reader" deleted
clusterrole.rbac.authorization.k8s.io "system:metrics-server" deleted
rolebinding.rbac.authorization.k8s.io "metrics-server-auth-reader" deleted
clusterrolebinding.rbac.authorization.k8s.io "metrics-server:system:auth-delegator" deleted
clusterrolebinding.rbac.authorization.k8s.io "system:metrics-server" deleted
service "metrics-server" deleted
deployment.apps "metrics-server" deleted
apiservice.apiregistration.k8s.io "v1beta1.metrics.k8s.io" deleted
Method 1: Fix by patching the deployment
Install Metrics Server by running the following command:
kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml
Check metrics server deployment in yaml output:
kubectl -n kube-system get deployment metrics-server -o yaml
We will patch the deployment with the following settings:
- Add the --kubelet-insecure-tls argument to the container args, which skips verifying the Kubelet CA certificates.
- Change the container port from 10250 to 4443.
- Add hostNetwork: true
Execute the following command to patch the deployment. You can also copy the commands directly from the Github Gist.
kubectl patch deployment metrics-server -n kube-system --type='json' -p='[
{
"op": "add",
"path": "/spec/template/spec/hostNetwork",
"value": true
},
{
"op": "replace",
"path": "/spec/template/spec/containers/0/args",
"value": [
"--cert-dir=/tmp",
"--secure-port=4443",
"--kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname",
"--kubelet-use-node-status-port",
"--metric-resolution=15s",
"--kubelet-insecure-tls"
]
},
{
"op": "replace",
"path": "/spec/template/spec/containers/0/ports/0/containerPort",
"value": 4443
}
]'
After a few seconds the pod status should be Running:
$ kubectl -n kube-system get pods -l k8s-app=metrics-server
NAME READY STATUS RESTARTS AGE
metrics-server-58fb664478-n4rdj 1/1 Running 0 1m
Check the metrics API status:
$ kubectl get apiservices -l k8s-app=metrics-server
NAME SERVICE AVAILABLE AGE
v1beta1.metrics.k8s.io kube-system/metrics-server True 2m
Test whether the metrics server is working by checking your Kubernetes nodes’ utilization:
$ kubectl top nodes
NAME CPU(cores) CPU% MEMORY(bytes) MEMORY%
k8smas01.novalocal 183m 2% 2231Mi 16%
k8smas02.novalocal 376m 4% 1974Mi 14%
k8smas03.novalocal 289m 3% 1872Mi 13%
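As an additional check, you can query the Metrics API directly through the API server rather than via kubectl top. A valid JSON NodeMetricsList response confirms the aggregated API is being served:

```shell
# Fetch node metrics straight from the aggregated Metrics API endpoint;
# truncate the JSON output for readability
kubectl get --raw /apis/metrics.k8s.io/v1beta1/nodes | head -c 300
```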
Method 2: Manually modify the manifest file
Download the Metrics Server deployment manifest from Github to your local machine:
wget https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml -O metrics-server-components.yaml
If you don’t have wget you can also download using curl:
curl -sSL -o metrics-server-components.yaml https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml
Open the file for editing using vim or nano:
- Open using vim
vim metrics-server-components.yaml
- Open using nano
nano metrics-server-components.yaml
Set --secure-port to 4443 and add --kubelet-insecure-tls to skip verifying the Kubelet CA certificates:
spec:
containers:
- args:
- --cert-dir=/tmp
- --secure-port=4443
- --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
- --kubelet-use-node-status-port
- --metric-resolution=15s
- --kubelet-insecure-tls
Change container listening port from 10250 to 4443:
ports:
- containerPort: 4443
Find nodeSelector and add hostNetwork: true just before it:
hostNetwork: true
nodeSelector:
kubernetes.io/os: linux
Apply the manifest to create the resources:
kubectl apply -f metrics-server-components.yaml
Expected command execution output:
serviceaccount/metrics-server created
clusterrole.rbac.authorization.k8s.io/system:aggregated-metrics-reader created
clusterrole.rbac.authorization.k8s.io/system:metrics-server created
rolebinding.rbac.authorization.k8s.io/metrics-server-auth-reader created
clusterrolebinding.rbac.authorization.k8s.io/metrics-server:system:auth-delegator created
clusterrolebinding.rbac.authorization.k8s.io/system:metrics-server created
service/metrics-server created
deployment.apps/metrics-server created
apiservice.apiregistration.k8s.io/v1beta1.metrics.k8s.io created
Check the pod status after a few seconds:
$ kubectl -n kube-system get pods -l k8s-app=metrics-server
NAME READY STATUS RESTARTS AGE
metrics-server-58fb664478-85sm5 1/1 Running 0 39s
Test whether the Metrics API is functional:
$ kubectl get apiservices -l k8s-app=metrics-server
NAME SERVICE AVAILABLE AGE
v1beta1.metrics.k8s.io kube-system/metrics-server True 59s
$ kubectl top nodes
NAME CPU(cores) CPU% MEMORY(bytes) MEMORY%
k8smas01.novalocal 181m 2% 2235Mi 16%
k8smas02.novalocal 146m 1% 1862Mi 13%
k8smas03.novalocal 145m 1% 1857Mi 13%
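Pod-level metrics should now be available as well. A quick sanity check against the kube-system namespace (any namespace with running pods works):

```shell
# List per-pod CPU and memory usage; this also depends on the Metrics API,
# so output here confirms the fix end to end
kubectl top pods -n kube-system
```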
If you want to check the container logs, for example when doing further troubleshooting, first get the pod name:
POD_NAME=$(kubectl -n kube-system get pods -l k8s-app=metrics-server -o jsonpath='{.items[0].metadata.name}')
Then use the kubectl logs command to see the pod’s logs:
$ kubectl -n kube-system logs -f $POD_NAME
I0801 23:54:49.157725 1 serving.go:374] Generated self-signed cert (/tmp/apiserver.crt, /tmp/apiserver.key)
I0801 23:54:49.486075 1 handler.go:275] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
I0801 23:54:49.592398 1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
I0801 23:54:49.592418 1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController
I0801 23:54:49.592442 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
I0801 23:54:49.592455 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0801 23:54:49.592480 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
I0801 23:54:49.592499 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
I0801 23:54:49.592806 1 dynamic_serving_content.go:132] "Starting controller" name="serving-cert::/tmp/apiserver.crt::/tmp/apiserver.key"
I0801 23:54:49.593268 1 secure_serving.go:213] Serving securely on [::]:4443
I0801 23:54:49.593309 1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
I0801 23:54:49.693502 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0801 23:54:49.693517 1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController
I0801 23:54:49.693551 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
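If the metrics are still unavailable after either method, describing the APIService and the deployment usually surfaces the underlying error condition (for example TLS failures or a pod that never became ready):

```shell
# Show the APIService status conditions reported by the aggregation layer
kubectl describe apiservice v1beta1.metrics.k8s.io

# Show deployment events and rollout status for the metrics-server pod
kubectl -n kube-system describe deployment metrics-server
```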
The modifications and patch commands are available in the Github Gist. If you face any issues following either of the methods, drop a comment with your error logs.