
The Kubernetes Metrics Server is a cluster-wide aggregator of resource usage data. It collects metrics from the Summary API exposed by the kubelet on each node. Resource usage metrics, such as container CPU and memory usage, are helpful when troubleshooting unusual resource utilization. All these metrics are made available in Kubernetes through the Metrics API.

The Metrics API reports the amount of resources currently used by a given node or pod. Since the API server doesn't store these metric values itself, Metrics Server is used for this purpose. The deployment YAML files for installation are provided in the Metrics Server project source code.

Metrics Server Requirements

Metrics Server has specific requirements for cluster and network configuration that aren't the default on all cluster distributions. Please ensure that your cluster meets these requirements before using Metrics Server:

  • The kube-apiserver must have the aggregation layer enabled.
  • Nodes must have webhook authentication and authorization enabled on the kubelet.
  • The kubelet certificate must be signed by the cluster CA, or certificate validation must be disabled with --kubelet-insecure-tls.
  • The API server must be able to reach the Metrics Server pod, and Metrics Server must be able to reach the kubelet on every node (port 10250 by default).
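As a quick pre-flight sketch, you can list the addresses each node's kubelet advertises; Metrics Server will try these in the order given by its --kubelet-preferred-address-types flag, so at least one of them must be reachable from inside the cluster. This assumes kubectl is already pointed at the target cluster:

```shell
# List every node with the address types its kubelet advertises.
# Metrics Server must be able to reach one of these addresses.
kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{range .status.addresses[*]}{.type}={.address}{" "}{end}{"\n"}{end}'
```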

Deploy Metrics Server to Kubernetes

Download the manifest file:

wget https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml

Modify the settings to your liking by editing the file.

vim components.yaml

Once you have made the customizations you need, deploy metrics-server in your Kubernetes cluster. Switch to the correct cluster if you have multiple Kubernetes clusters: Easily Manage Multiple Kubernetes Clusters with kubectl & kubectx.

Apply the Metrics Server manifests, which are attached to Metrics Server releases and are therefore also installable directly via URL:

kubectl apply -f components.yaml

Here is the output showing the resources being created:

serviceaccount/metrics-server created
clusterrole.rbac.authorization.k8s.io/system:aggregated-metrics-reader created
clusterrole.rbac.authorization.k8s.io/system:metrics-server created
rolebinding.rbac.authorization.k8s.io/metrics-server-auth-reader created
clusterrolebinding.rbac.authorization.k8s.io/metrics-server:system:auth-delegator created
clusterrolebinding.rbac.authorization.k8s.io/system:metrics-server created
service/metrics-server created
deployment.apps/metrics-server created
apiservice.apiregistration.k8s.io/v1beta1.metrics.k8s.io created

Use the following command to verify that the metrics-server deployment is running the desired number of pods:

$ kubectl get deployment metrics-server -n kube-system

NAME             READY   UP-TO-DATE   AVAILABLE   AGE
metrics-server   1/1     1            1           7m23s

$ kubectl get pods -n kube-system | grep metrics

metrics-server-7cb45bbfd5-kbrt7   1/1     Running   0          8m42s
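Instead of polling `kubectl get`, you can also block until the rollout completes (a convenience, not a required step):

```shell
# Wait until the metrics-server Deployment has fully rolled out,
# failing after 90 seconds if it has not become ready.
kubectl -n kube-system rollout status deployment/metrics-server --timeout=90s
```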

Confirm that the Metrics Server APIService is registered and active:

$ kubectl get apiservice v1beta1.metrics.k8s.io -o yaml

apiVersion: apiregistration.k8s.io/v1
kind: APIService
metadata:
  creationTimestamp: "2020-08-12T11:27:13Z"
  name: v1beta1.metrics.k8s.io
  resourceVersion: "130943"
  uid: 83c44e41-6346-4dff-8ce2-aff665199209
spec:
  group: metrics.k8s.io
  groupPriorityMinimum: 100
  insecureSkipTLSVerify: true
  service:
    name: metrics-server
    namespace: kube-system
    port: 443
  version: v1beta1
  versionPriority: 100
status:
  conditions:
  - lastTransitionTime: "2020-08-12T11:27:18Z"
    message: all checks passed
    reason: Passed
    status: "True"
    type: Available
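For scripting, the same check can be reduced to a one-liner that prints the `Available` condition (this assumes the APIService is registered as `v1beta1.metrics.k8s.io`, as in the manifest above):

```shell
# Print "True" when the metrics.k8s.io APIService is marked Available.
kubectl get apiservice v1beta1.metrics.k8s.io \
  -o jsonpath='{.status.conditions[?(@.type=="Available")].status}'
```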

Metrics API can also be accessed by using the kubectl top command. This makes it easier to debug autoscaling pipelines.

$ kubectl top --help
Display Resource (CPU/Memory/Storage) usage.

 The top command allows you to see the resource consumption for nodes or pods.

 This command requires Metrics Server to be correctly configured and working on the server.

Available Commands:
  node        Display Resource (CPU/Memory/Storage) usage of nodes
  pod         Display Resource (CPU/Memory/Storage) usage of pods

  kubectl top [flags] [options]

Use "kubectl <command> --help" for more information about a given command.
Use "kubectl options" for a list of global command-line options (applies to all commands).

To display cluster node resource usage (CPU/Memory/Storage), run the command:

$ kubectl top nodes
NAME                                            CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
<node-1>                                        50m          2%     445Mi           13%
<node-2>                                        58m          3%     451Mi           13%

A similar command can be used for pods.

$ kubectl top pods -A
NAMESPACE     NAME                              CPU(cores)   MEMORY(bytes)
kube-system   aws-node-glfrs                    4m           51Mi
kube-system   aws-node-sgh8p                    5m           51Mi
kube-system   coredns-6987776bbd-2mgxp          2m           6Mi
kube-system   coredns-6987776bbd-vdn8j          2m           6Mi
kube-system   kube-proxy-5glzs                  1m           7Mi
kube-system   kube-proxy-hgqm5                  1m           8Mi
kube-system   metrics-server-7cb45bbfd5-kbrt7   1m           11Mi
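`kubectl top pod` can also sort its output with `--sort-by=cpu` or `--sort-by=memory` on recent kubectl versions; alternatively, plain `sort -h` works on the printed table. A sketch of the latter:

```shell
# Show the five pods using the most memory (4th column when -A is used).
# --no-headers drops the header row so sort only sees data lines.
kubectl top pods -A --no-headers | sort -k4 -h -r | head -5
```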

You can also use kubectl get --raw to pull raw resource usage metrics for all nodes in the cluster.

$ kubectl get --raw "/apis/metrics.k8s.io/v1beta1/nodes" | jq

{
  "kind": "NodeMetricsList",
  "apiVersion": "metrics.k8s.io/v1beta1",
  "metadata": {},
  "items": [
    {
      "metadata": {
        "name": "<node-1>",
        "creationTimestamp": "2020-08-12T11:44:41Z"
      },
      "timestamp": "2020-08-12T11:44:17Z",
      "window": "30s",
      "usage": {
        "cpu": "55646953n",
        "memory": "461980Ki"
      }
    },
    {
      "metadata": {
        "name": "<node-2>",
        "creationTimestamp": "2020-08-12T11:44:41Z"
      },
      "timestamp": "2020-08-12T11:44:09Z",
      "window": "30s",
      "usage": {
        "cpu": "47815890n",
        "memory": "454944Ki"
      }
    }
  ]
}

Other Customizations

These are the extra customizations that can be done before installing Metrics Server on Kubernetes.

Setting Flags

Metrics Server supports all the standard Kubernetes API server flags, as well as the standard Kubernetes glog logging flags. The most commonly-used ones are:

  • --logtostderr: log to standard error instead of files in the container. You generally want this on.
  • --v=<X>: set log verbosity. It’s generally a good idea to run at log level 1 or 2 unless you’re encountering errors. At log level 10, large amounts of diagnostic information are reported, including API request and response bodies and raw metric results from the Kubelet.
  • --secure-port=<port>: set the secure port. If you’re not running as root, you’ll want to set this to something other than the default (port 443).
  • --tls-cert-file, --tls-private-key-file: the serving certificate and key files. If not specified, self-signed certificates will be generated. Use non-self-signed certificates in production.
  • --kubelet-certificate-authority: the path of the CA certificate to use to validate the Kubelet’s serving certificates.

Other flags to change Metrics Server behavior are:

  • --metric-resolution=<duration>: Interval at which metrics are scraped from Kubelets (defaults to 60s).
  • --kubelet-insecure-tls: skip verifying Kubelet CA certificates.
  • --kubelet-port: Port used to connect to the Kubelet (defaults to the default secure Kubelet port, 10250).
  • --kubelet-preferred-address-types: Order to consider Kubelet node address types when connecting to Kubelet.

Setting node address types order

I’ll modify the deployment manifest file to add the order in which to consider different Kubelet node address types when connecting to Kubelet.

vim components.yaml

Modify like below:

      - name: metrics-server
        args:
          - --cert-dir=/tmp
          - --secure-port=4443
          - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname

Disabling insecure CA certificates verification

If you’re using self-signed certificates, you can use the --kubelet-insecure-tls flag to skip verifying Kubelet CA certificates.

      - name: metrics-server
        args:
          - --cert-dir=/tmp
          - --secure-port=4443
          - --kubelet-insecure-tls
          - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
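After applying the edited manifest, you can confirm the flags actually landed on the running Deployment (a sketch; the jsonpath output is a JSON array of the container args):

```shell
# Show the args of the metrics-server container to verify the customization.
kubectl -n kube-system get deployment metrics-server \
  -o jsonpath='{.spec.template.spec.containers[0].args}'
```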

Test Metrics server installation

Let’s display resource usage of nodes – CPU/Memory/Storage:

$ kubectl top nodes
NAME                                  CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
<node-1>                              196m         4%     1053Mi          14%
<node-2>                              107m         2%     2080Mi          27%
<node-3>                              107m         2%     2080Mi          27%
<node-4>                              107m         2%     2080Mi          27%

We can do the same for pods. Show metrics for pods in all namespaces:

$ kubectl top pods -A
NAMESPACE     NAME                                                        CPU(cores)   MEMORY(bytes)   
kube-system   calico-kube-controllers-5c45f5bd9f-dk8jp                    1m           11Mi            
kube-system   calico-node-4h67w                                           32m          27Mi            
kube-system   calico-node-99vkm                                           35m          27Mi            
kube-system   calico-node-qdqb8                                           21m          27Mi            
kube-system   calico-node-sd9r8                                           21m          43Mi            
kube-system   coredns-6955765f44-d4g99                                    2m           12Mi            
kube-system   coredns-6955765f44-hqc4q                                    2m           11Mi            
kube-system   kube-proxy-h87zf                                            1m           12Mi            
kube-system   kube-proxy-lcnvx                                            1m           14Mi            
kube-system   kube-proxy-x6tfx                                            1m           16Mi            
kube-system   kube-proxy-xplz4                                            1m           16Mi            
kube-system   metrics-server-7bd949b8b6-mpmk9                             1m           10Mi        

For more command options, check:

kubectl top pod --help
kubectl top node --help


Check other Kubernetes guides:

How To Manually Pull Container images used by Kubernetes kubeadm

Best Books To learn Docker and Ansible Automation

Create Kubernetes Service / User Account and restrict it to one Namespace with RBAC
