Kubespray combines the power of Ansible and kubeadm for the installation, configuration, and maintenance of a Kubernetes cluster. It uses a defined inventory file to identify the nodes that are part of the cluster and the roles each node should play. It also provides additional configuration files that allow you to fine-tune your Kubernetes cluster settings and various cluster components. In our recent article we covered the process of upgrading a Kubespray Kubernetes cluster to a newer release.

With Kubespray, it is simply a matter of executing an Ansible playbook and the desired state is applied on the target servers. In this article we show you how to add a new node to your Kubernetes cluster using Kubespray. To get started, edit your inventory file and add the new node(s) that you want to join to the cluster.

cd kubespray

Open the inventory file for editing:

$ vim inventory/k8scluster/inventory.ini
# ## Configure 'ip' variable to bind kubernetes services on a
# ## different ip than the default iface
# ## We should set etcd_member_name for etcd cluster. The node that is not a etcd member do not need to set the value, or can set the empty string value.
[all]
master01 ansible_host=192.168.1.10 etcd_member_name=etcd1   ansible_user=core
master02 ansible_host=192.168.1.11 etcd_member_name=etcd2   ansible_user=core
master03 ansible_host=192.168.1.12 etcd_member_name=etcd3   ansible_user=core
node01   ansible_host=192.168.1.13 etcd_member_name=        ansible_user=core
node02   ansible_host=192.168.1.14 etcd_member_name=        ansible_user=core
node03   ansible_host=192.168.1.15 etcd_member_name=        ansible_user=core
node04   ansible_host=192.168.1.16 etcd_member_name=        ansible_user=core
node05   ansible_host=192.168.1.17 etcd_member_name=        ansible_user=core
node06   ansible_host=192.168.1.18 etcd_member_name=        ansible_user=core

# ## configure a bastion host if your nodes are not directly reachable
# [bastion]
# bastion ansible_host=x.x.x.x ansible_user=some_user

[kube_control_plane]
master01
master02
master03

[etcd]
master01
master02
master03

[kube_node]
node01
node02
node03
node04
node05
node06

[calico_rr]

[k8s_cluster:children]
kube_control_plane
kube_node
calico_rr

We’ll add a new node, node07, to the inventory:

# ## Configure 'ip' variable to bind kubernetes services on a
# ## different ip than the default iface
# ## We should set etcd_member_name for etcd cluster. The node that is not a etcd member do not need to set the value, or can set the empty string value.
[all]
master01 ansible_host=192.168.1.10 etcd_member_name=etcd1   ansible_user=core
master02 ansible_host=192.168.1.11 etcd_member_name=etcd2   ansible_user=core
master03 ansible_host=192.168.1.12 etcd_member_name=etcd3   ansible_user=core
node01   ansible_host=192.168.1.13 etcd_member_name=        ansible_user=core
node02   ansible_host=192.168.1.14 etcd_member_name=        ansible_user=core
node03   ansible_host=192.168.1.15 etcd_member_name=        ansible_user=core
node04   ansible_host=192.168.1.16 etcd_member_name=        ansible_user=core
node05   ansible_host=192.168.1.17 etcd_member_name=        ansible_user=core
node06   ansible_host=192.168.1.18 etcd_member_name=        ansible_user=core
node07   ansible_host=192.168.1.19 etcd_member_name=        ansible_user=core


# ## configure a bastion host if your nodes are not directly reachable
# [bastion]
# bastion ansible_host=x.x.x.x ansible_user=some_user

[kube_control_plane]
master01
master02
master03

[etcd]
master01
master02
master03

[kube_node]
node01
node02
node03
node04
node05
node06
node07

[calico_rr]

[k8s_cluster:children]
kube_control_plane
kube_node
calico_rr

You can see we’re adding just a normal worker node for running containerized workloads, so it is placed under the kube_node group. If you are adding a control plane (master) node, it should go under kube_control_plane, and you should use the cluster.yml playbook instead of scale.yml.
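For example, if the new host had instead been added under [kube_control_plane], the run would target the full inventory with cluster.yml rather than scale.yml. A sketch of what that command would look like:

ansible-playbook -i inventory/k8scluster/inventory.ini --become --become-user=root cluster.yml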

When done, we can use the Ansible playbook called scale.yml and limit execution to only the new node that we’re adding.
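Optionally, before running any playbook, you can confirm that Ansible can reach node07 over SSH. This is just a sanity check using Ansible's standard ping module (on Flatcar hosts you may also need the -e 'ansible_python_interpreter=/opt/bin/python' flag shown further below):

ansible -i inventory/k8scluster/inventory.ini node07 -m ping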

But before using --limit we need to run the facts.yml playbook without the limit to refresh the facts cache for all nodes.

ansible-playbook -i inventory/k8scluster/inventory.ini  --become --become-user=root playbooks/facts.yml

Expected output:

[Screenshot: kubespray facts playbook output]

Once the facts cache is refreshed, run the scale.yml playbook and limit execution to only the new node that we’re adding.

ansible-playbook -i inventory/k8scluster/inventory.ini  --become --become-user=root scale.yml --limit=node07

If your worker node OS is CoreOS-based, e.g. Flatcar Container Linux, pass the Python interpreter path as an extra Ansible variable:

ansible-playbook -e 'ansible_python_interpreter=/opt/bin/python' -i inventory/k8scluster/inventory.ini  --become --become-user=root scale.yml --limit=node07
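If you prefer not to pass the interpreter on every run, the same value can be set per host in the inventory instead. A minimal sketch using the standard ansible_python_interpreter host variable, appended to the existing node07 entry:

node07   ansible_host=192.168.1.19 etcd_member_name=        ansible_user=core ansible_python_interpreter=/opt/bin/python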

For multiple nodes, you can add a new group to the inventory and limit execution based on that group.

[all]
....
node07   ansible_host=192.168.1.19 etcd_member_name=        ansible_user=core
node08   ansible_host=192.168.1.20 etcd_member_name=        ansible_user=core

[kube_node]
....
node07
node08

[new_nodes]
node07
node08

# Then limit execution to new_nodes
$ ansible-playbook -i inventory/k8scluster/inventory.ini  --become --become-user=root scale.yml --limit=new_nodes
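If you want to confirm which hosts the limit will match before anything runs, Ansible's --list-hosts flag prints the matched hosts and exits without executing any tasks:

$ ansible-playbook -i inventory/k8scluster/inventory.ini scale.yml --limit=new_nodes --list-hosts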

Watch the output of the final tasks as they execute. If successful, the output will be similar to the one in the screenshot below.

[Screenshot: kubespray add new node playbook output]

List the nodes in your Kubernetes cluster to confirm that the new one was added:

$ kubectl get nodes
NAME       STATUS   ROLES           AGE    VERSION
master01   Ready    control-plane   237d   v1.29.2
master02   Ready    control-plane   237d   v1.29.2
master03   Ready    control-plane   230d   v1.29.2
node01     Ready    <none>          237d   v1.29.2
node02     Ready    <none>          237d   v1.29.2
node03     Ready    <none>          237d   v1.29.2
node04     Ready    <none>          230d   v1.29.2
node05     Ready    <none>          230d   v1.29.2
node06     Ready    <none>          14d    v1.29.2
node07     Ready    <none>          23m    v1.29.2
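The ROLES column shows <none> because worker nodes carry no node-role label here by default. If you would like node07 to report a worker role, you can optionally add the conventional label yourself (purely cosmetic):

$ kubectl label node node07 node-role.kubernetes.io/worker=worker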

We can also confirm SSH access into the new node:

$ ssh core@192.168.1.19
Warning: Permanently added '192.168.1.19' (ED25519) to the list of known hosts.
Enter passphrase for key '/Users/jkmutai/.ssh/id_rsa':
Flatcar Container Linux by Kinvolk stable xxxxyy
core@node07 ~ $
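While logged in, you can optionally confirm that the kubelet service configured by Kubespray is running (Kubespray manages kubelet as a systemd unit):

core@node07 ~ $ systemctl is-active kubelet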

You can list all the Pods running on the node using the following command:

$ kubectl get pods --all-namespaces -o wide --field-selector spec.nodeName=node07
NAMESPACE        NAME                                 READY   STATUS            RESTARTS   AGE   IP              NODE     NOMINATED NODE   READINESS GATES
kube-system      calico-node-24z2l                    1/1     Running           0          26m   192.168.1.19    node07   <none>           <none>
kube-system      kube-proxy-pghkm                     1/1     Running           0          26m   192.168.1.19    node07   <none>           <none>
kube-system      nodelocaldns-dsxkf                   1/1     Running           0          26m   192.168.1.19    node07   <none>           <none>
metallb-system   speaker-gk6ll                        1/1     Running           0          24m   192.168.1.19    node07   <none>           <none>
monitoring       node-exporter-gj6gh                  2/2     Running           0          26m   192.168.1.19    node07   <none>           <none>
rook-ceph        csi-cephfsplugin-zq664               2/2     Running           0          24m   192.168.1.19    node07   <none>           <none>
rook-ceph        csi-rbdplugin-wll2x                  2/2     Running           0          24m   192.168.1.19    node07   <none>           <none>
rook-ceph        rook-ceph-osd-prepare-node07-spm2q   0/1     PodInitializing   0          24m   10.233.87.129   node07   <none>           <none>
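As one last optional check, you could schedule a short-lived test Pod pinned directly to the new node to confirm that ordinary workloads run on it. This is only a sketch; the Pod name and busybox image are arbitrary choices:

$ cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: node07-smoke-test
spec:
  nodeName: node07            # bypass the scheduler and pin the Pod to the new node
  restartPolicy: Never
  containers:
  - name: test
    image: busybox:1.36
    command: ["sh", "-c", "echo hello from node07 && sleep 30"]
EOF
$ kubectl get pod node07-smoke-test -o wide
$ kubectl delete pod node07-smoke-test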

There you have it. In this article you’ve been able to add a new node to your existing Kubernetes cluster using Kubespray. We were able to confirm that the node’s status is Ready and see some of the Pods already running on it.

Check out more Kubernetes articles available on our website.
