How to set up a Kubernetes cluster in the cloud on Ubuntu


Embarking on your Kubernetes cloud cluster journey for practice may seem daunting, but fear not. We're here to walk you through the process, step by step, as you set up a Kubernetes cluster on the Ubuntu OS in the cloud.

Pre-configuration for cluster nodes (Step 1 to Step 6), to be performed on each node instance

Create instances over the cloud

Here I am using AWS t2.medium instances running Ubuntu, with 20 GB of storage each, a firewall/security group that allows all traffic, and swap memory disabled. I will build a three-node cluster (1 master + 2 worker nodes), all with the same configuration.

Instance Type          Configuration               Storage
t2.medium or higher    2 vCPU + 4 GB RAM or more   20 GB or more

To access the cluster nodes remotely from Windows, you can download the "MobaXterm" application from this link :

Step - 1 (Create a non-root user and allow sudoers privileges)

We are going to create a new user, admin (if one does not already exist), and grant it passwordless sudo privileges. To do this, create the file /etc/sudoers.d/admin with the entry below. Note that `sudo echo ... >> file` would fail, because the redirection runs as the unprivileged user, so we pipe through `sudo tee` instead. Don't forget to restart the sshd service.

$ sudo adduser admin
$ echo "admin    ALL=(ALL)   NOPASSWD:ALL" | sudo tee /etc/sudoers.d/admin
$ sudo systemctl restart sshd
NOTE: We'll execute every command as the admin user, not the root user, for better security and to follow best practices.

Step - 2 ( Define your machine Hostname )

Change the hostname of each machine using the hostnamectl command.

Note down the public IP, private IP, and hostname of each machine before proceeding.
$ sudo hostnamectl set-hostname <your-hostname>
$ exec bash    # replace the current shell so the new hostname takes effect
$ hostname

Step - 3 (Host entry with private IP & hostname)

$ sudo vi /etc/hosts

Note: Ping each machine from every other machine to verify connectivity.
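As a sketch (the private IPs and hostnames below are made-up examples, not values from this tutorial), the entries appended to /etc/hosts on every node might look like this:

```shell
# Hypothetical private IPs and hostnames -- replace with your own nodes' values.
sudo tee -a /etc/hosts <<EOF
172.31.10.11   k8s-master
172.31.10.12   k8s-worker1
172.31.10.13   k8s-worker2
EOF

# Verify connectivity by pinging each machine from every other machine:
ping -c 2 k8s-worker1
ping -c 2 k8s-worker2
```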

Step - 4 ( Allow some entries in the sshd_config file )

In the /etc/ssh/sshd_config file, uncomment and set the following lines.

$ sudo vi /etc/ssh/sshd_config

PermitRootLogin yes
PubkeyAuthentication yes
PasswordAuthentication yes

Then restart and enable the sshd service:

$ sudo systemctl restart sshd
$ sudo systemctl enable sshd

Step - 5 (Create SSH key and copy to worker node instances)

Generate an SSH key and copy it to each worker node instance so that the master node can reach the workers securely without a password.

$ ssh-keygen
$ ssh-copy-id admin@<worker1-hostname>
$ ssh-copy-id admin@<worker2-hostname>

Step-6 (Disable swap memory)

$ sudo swapoff -a
$ sudo sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
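After the two commands above, you can verify that swap is really off (a verification sketch; `swapon --show` prints nothing when no swap device is active):

```shell
# Should produce no output if swap is fully disabled:
swapon --show

# The "Swap:" row should report 0B total:
free -h

# Confirm the swap entry in /etc/fstab is now commented out:
grep swap /etc/fstab
```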

NOTE: All pre-configuration is complete. So let's move on to the Kubernetes installation.

Step - 7 ( Run the following commands on each instance node as the admin user )

$ sudo tee /etc/modules-load.d/containerd.conf <<EOF
overlay
br_netfilter
EOF

$ sudo modprobe overlay
$ sudo modprobe br_netfilter

$ sudo tee /etc/sysctl.d/kubernetes.conf <<EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF

$ sudo sysctl --system

$ sudo apt install -y curl gnupg2 software-properties-common apt-transport-https ca-certificates

$ curl -fsSL <docker-gpg-key-url> | sudo gpg --dearmor -o /etc/apt/trusted.gpg.d/docker.gpg
$ sudo add-apt-repository "deb [arch=amd64] <docker-repo-url> $(lsb_release -cs) stable"

$ sudo apt update
$ sudo apt install -y containerd.io

$ containerd config default | sudo tee /etc/containerd/config.toml >/dev/null 2>&1
$ sudo sed -i 's/SystemdCgroup \= false/SystemdCgroup \= true/g' /etc/containerd/config.toml

$ sudo systemctl restart containerd
$ sudo systemctl enable containerd
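To confirm the cgroup-driver change took effect and containerd is healthy (a quick check, assuming the default config path used above):

```shell
# Should print: SystemdCgroup = true
grep 'SystemdCgroup' /etc/containerd/config.toml

# Should print: active
systemctl is-active containerd
```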

Add the apt repository for Kubernetes

$ curl -fsSL <kubernetes-gpg-key-url> | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
$ echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] <kubernetes-apt-repo-url> /' | sudo tee /etc/apt/sources.list.d/kubernetes.list

Install kubelet, kubectl, kubeadm

$ sudo apt update
$ sudo apt install -y kubelet kubeadm kubectl
$ sudo apt-mark hold kubelet kubeadm kubectl
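You can confirm the tools are installed and held back from automatic upgrades (holding matters because an unplanned apt upgrade of kubelet can break a running cluster):

```shell
# Print the installed versions:
kubeadm version -o short
kubectl version --client

# Should list kubelet, kubeadm and kubectl as held packages:
apt-mark showhold
```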

Step - 8 ( Run "kubeadm init" on the master node to create the cluster's control plane )

Initialize the Kubernetes cluster with kubeadm (only on the master node).

$ sudo kubeadm init --control-plane-endpoint=<your-master-node-hostname>

NOTE: Run the following commands on the master node as the admin user, then check the status of the cluster.

$ mkdir -p $HOME/.kube
$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
$ sudo chown $(id -u):$(id -g) $HOME/.kube/config

# Check whether the cluster has been set up properly and how many nodes have joined so far.
# Also check that the Kubernetes system pods are running properly.

$ kubectl get nodes -o wide
$ kubectl get pods -n kube-system

Install the Calico CNI plugin on the master node as the admin user.

$ kubectl apply -f <calico-manifest-url>

Step - 9 (Run "kubeadm join" on the worker nodes to join them to the master node's cluster)

$ sudo kubeadm join <your-master-node-hostname>:6443 --token vt3ua6.scma2y8rl4menfh2 \
   --discovery-token-ca-cert-hash sha256:049xaa7fcdced8a8e7b20d37ec0c5dd699ds5f8x616885697q2ff917d4c94962a36
NOTE: Your cluster is now set up. Congratulations!
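The token and hash shown above are examples; yours are printed by `kubeadm init`. If you lose that output, you can regenerate a fresh join command at any time on the master node:

```shell
# Run on the master node: prints a complete 'kubeadm join ...' command
# with a new bootstrap token and the current CA certificate hash.
sudo kubeadm token create --print-join-command
```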

Step - 10 (Test your Kubernetes cluster by deploying an application)

$ kubectl create -f <nginx-deployment-manifest.yaml>
$ kubectl expose deployment nginx-deployment --name=my-svc --port=80 --target-port=80 --type=NodePort
$ kubectl get svc

NAME     TYPE       CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
my-svc   NodePort   <cluster-ip>   <none>        80:32567/TCP   2m

Check your nginx web application publicly: take the public IP address of any of your instance nodes, then append a colon and the NodePort (here, 32567). For example, open http://<node-public-ip>:32567 in a web browser such as Chrome or Firefox.
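The same check can be done from the command line; the node IP below is a made-up example, so substitute your own instance's public IP:

```shell
# Build the service URL from a node's public IP and the NodePort.
# NODE_IP is a hypothetical example value -- use your own node's public IP.
NODE_IP="3.110.25.7"
NODE_PORT=32567
URL="http://${NODE_IP}:${NODE_PORT}"
echo "$URL"

# Once the cluster is up, fetch the nginx welcome page:
# curl -s "$URL" | head -n 5
```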

Congratulations........It works! Thank you!