
By default, your Kubernetes cluster will not schedule pods on the control-plane node for security reasons. It is recommended you keep it this way, but for test environments you may want to schedule pods on the control-plane node to maximize resource usage.

If you want to be able to schedule pods on the Kubernetes control-plane node, you need to remove a taint from the master node(s):

kubectl taint nodes --all node-role.kubernetes.io/master-

The output will look something like:

node/k8smaster01.computingforgeeks.com untainted
taint "node-role.kubernetes.io/master" not found
taint "node-role.kubernetes.io/master" not found
taint "node-role.kubernetes.io/master" not found

This will remove the node-role.kubernetes.io/master taint from any nodes that have it, including the control-plane node, meaning that the scheduler will then be able to schedule pods everywhere.
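
You can confirm the taint is gone by describing the node, and re-add it later if you want to restore the default behaviour. The commands below are a sketch against the control-plane node used in this guide; note that on newer Kubernetes releases the taint key is node-role.kubernetes.io/control-plane rather than node-role.kubernetes.io/master.

# Check which taints are currently set on the control-plane node
kubectl describe node k8smaster01.computingforgeeks.com | grep Taints

# Restore the default behaviour by re-adding the NoSchedule taint
kubectl taint nodes k8smaster01.computingforgeeks.com node-role.kubernetes.io/master=:NoSchedule

# On newer clusters, remove the control-plane taint instead
kubectl taint nodes --all node-role.kubernetes.io/control-plane-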

Testing Pod Scheduling on Kubernetes Control Plane Node(s)

I have a cluster with three worker nodes and one control plane node.

$ kubectl get nodes
NAME                                STATUS   ROLES    AGE   VERSION
k8smaster01.computingforgeeks.com   Ready    master   12h   v1.17.0
k8snode01.computingforgeeks.com     Ready    <none>   12h   v1.17.0
k8snode02.computingforgeeks.com     Ready    <none>   12h   v1.17.0
k8snode03.computingforgeeks.com     Ready    <none>   9h    v1.17.0

Create a demo namespace:

kubectl create namespace demo
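
Confirm the namespace was created before deploying into it:

kubectl get namespace demo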

We will create a deployment with 5 replicas and a NodePort service that exposes it.

$ vim nginx-deployment.yaml

Populate it with the content below:

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
  namespace: demo
  labels:
    app: nginx
    color: green
spec:
  replicas: 5
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
        color: green
    spec:
      containers:
        - name: nginx
          image: nginx:latest
          imagePullPolicy: IfNotPresent
          ports:
            - name: http
              protocol: TCP
              containerPort: 80
          resources:
            limits:
              cpu: "200m"
              memory: "256Mi"
            requests:
              cpu: 100m
              memory: 128Mi
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
  namespace: demo
spec:
  ports:
    - port: 80
      targetPort: 80
      protocol: TCP
  selector:
    app: nginx
  sessionAffinity: None
  type: NodePort
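
Note that the manifest above leaves pod placement entirely to the default scheduler, so a pod may or may not land on the control-plane node. If you want to pin pods to the control-plane node explicitly, the fragment below is a sketch of the extra fields you could add under the pod template's spec. It assumes the node-role.kubernetes.io/master label that kubeadm applies; the toleration is only needed if the taint is still in place. It is not used in the rest of this demo.

      # Optional: pin pods to the control-plane node (add under spec.template.spec)
      nodeSelector:
        node-role.kubernetes.io/master: ""
      tolerations:
        - key: node-role.kubernetes.io/master
          operator: Exists
          effect: NoSchedule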

Apply manifest:

$ kubectl apply -f nginx-deployment.yaml
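
You can wait for the rollout to finish before checking where the pods landed:

kubectl -n demo rollout status deployment/nginx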

Check whether any pods were scheduled on the control-plane node.

$ kubectl get pods -n demo -o wide
NAME                     READY   STATUS    RESTARTS   AGE   IP                NODE                                NOMINATED NODE   READINESS GATES
nginx-675bf5bc87-666jg   1/1     Running   0          17m   192.168.213.131   k8snode01.computingforgeeks.com     <none>           <none>
nginx-675bf5bc87-mc6px   1/1     Running   0          17m   192.168.94.13     k8smaster01.computingforgeeks.com   <none>           <none>
nginx-675bf5bc87-v5q87   1/1     Running   0          17m   192.168.144.129   k8snode03.computingforgeeks.com     <none>           <none>
nginx-675bf5bc87-vctqm   1/1     Running   0          17m   192.168.101.195   k8snode02.computingforgeeks.com     <none>           <none>
nginx-675bf5bc87-w5pmh   1/1     Running   0          17m   192.168.213.130   k8snode01.computingforgeeks.com     <none>           <none>

We can see one of the pods was scheduled on the control-plane node. Confirm the service is live:

$ kubectl get svc -n demo
NAME            TYPE       CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
nginx-service   NodePort   10.96.184.67   <none>        80:31098/TCP   21m

Since we’re using NodePort, we should be able to access the service on any cluster node IP on port 31098.
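
For a quick test from any machine that can reach the nodes, curl the control-plane node on the NodePort (the port number is allocated dynamically, so use the one shown by kubectl get svc):

# Should return the default Nginx welcome page
curl http://k8smaster01.computingforgeeks.com:31098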


We can now clean up the demo objects.

$ kubectl delete -f nginx-deployment.yaml
deployment.apps "nginx" deleted
service "nginx-service" deleted

$ kubectl get pods,svc -n demo
No resources found in demo namespace.
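
If the namespace itself is no longer needed, delete it as well:

kubectl delete namespace demo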

That’s all on how to schedule pods on a Kubernetes control-plane node.

More guides:

How To Join new Kubernetes Worker Node to an existing Cluster

Deploy Kubernetes Cluster on CentOS 7 / CentOS 8 With Ansible and Calico CNI

How To Deploy Metrics Server to Kubernetes Cluster

Install and Use Helm 3 on Kubernetes Cluster
