
k8s cluster

Discover the world of Kubernetes with our easy-to-follow guide to setting up a lightweight k8s cluster. Ideal for beginners, this tutorial provides practical, hands-on knowledge, step by step. Learn how to automate, scale, and manage your workloads efficiently, and sharpen your skills in modern DevOps practices. Whether you are an IT professional aiming to boost your career or a hobbyist taking a deep dive into containerization, this guide offers a comprehensive learning path to mastering Kubernetes and transforming the way you handle software deployments and operations.

Creating DNS ‘A’ Records for Multiple IP Addresses

First, create DNS A records for three different IP addresses. An A record maps a domain name to the IPv4 address of the computer hosting the domain. In this case, the domain name k8s-master.yourdomain.com is mapped to three different IP addresses (192.168.0.248, 192.168.0.249, and 192.168.0.250). This configuration is typically used to set up a load balancer or a highly available service: requests to k8s-master.yourdomain.com will be distributed among the three IP addresses listed.

k8s-master.yourdomain.com.    IN    A    192.168.0.248
k8s-master.yourdomain.com.    IN    A    192.168.0.249
k8s-master.yourdomain.com.    IN    A    192.168.0.250
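
Once the records have propagated, a quick sanity check with dig should return all three addresses, confirming the round-robin setup:

dig +short k8s-master.yourdomain.com A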

Host and Node Descriptions in a Kubernetes Cluster

The table below lists the roles and IP addresses of the hosts in the Kubernetes cluster: three master nodes and three worker nodes, each with its respective IP. Master nodes (k8s-master-01, k8s-master-02, k8s-master-03) run the control plane of the cluster, while worker nodes (k8s-worker-01, k8s-worker-02, k8s-worker-03) host the running applications. The Host column gives the name of the machine, IP its internal IP address, Use For its role in the cluster, and Node the internal name assigned within Kubernetes.

Host           IP             Use For             Node
k8s-master-01  192.168.0.248  First master node   node1
k8s-master-02  192.168.0.249  Second master node  node2
k8s-master-03  192.168.0.250  Third master node   node3
k8s-worker-01  192.168.0.251  First worker node   node4
k8s-worker-02  192.168.0.252  Second worker node  node5
k8s-worker-03  192.168.0.253  Third worker node   node6

Setting the Hostname for Each Node in a Kubernetes Cluster

Next, set the hostname for each host in the Kubernetes cluster. The hostnamectl command is used with the set-hostname option to define the static hostname for each node. Here, each of the three master nodes and three worker nodes is given a distinct hostname under the yourdomain.com domain; run the matching command on its respective node. These hostnames uniquely identify each node in the cluster and provide a way to reference them individually.

sudo hostnamectl set-hostname k8s-master-01.yourdomain.com --static
sudo hostnamectl set-hostname k8s-master-02.yourdomain.com --static
sudo hostnamectl set-hostname k8s-master-03.yourdomain.com --static
sudo hostnamectl set-hostname k8s-worker-01.yourdomain.com --static
sudo hostnamectl set-hostname k8s-worker-02.yourdomain.com --static
sudo hostnamectl set-hostname k8s-worker-03.yourdomain.com --static
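
To confirm the change took effect on a given node, hostnamectl prints the static hostname along with other machine details:

hostnamectl status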

Adding Host Records to the /etc/hosts File for a Kubernetes Cluster

Next, add host records to the /etc/hosts file, which maps hostnames to IP addresses on Unix-like operating systems. Here, both the fully qualified domain names (FQDNs) and the short names of the cluster nodes are mapped to their respective IP addresses, along with the standard entries for IPv6-capable hosts. The cat command, combined with sudo tee, writes these mappings to /etc/hosts; note that tee without -a replaces the file's existing contents, so the here-document must include everything your hosts file needs.

cat <<EOF |sudo tee /etc/hosts
127.0.0.1 localhost
192.168.0.248 k8s-master-01.yourdomain.com k8s-master-01
192.168.0.249 k8s-master-02.yourdomain.com k8s-master-02
192.168.0.250 k8s-master-03.yourdomain.com k8s-master-03
192.168.0.251 k8s-worker-01.yourdomain.com k8s-worker-01
192.168.0.252 k8s-worker-02.yourdomain.com k8s-worker-02
192.168.0.253 k8s-worker-03.yourdomain.com k8s-worker-03

# The following lines are desirable for IPv6 capable hosts
::1     ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
EOF
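
As a quick check, getent resolves names through the same lookup path the system uses, so the new entries should come back with the expected addresses:

getent hosts k8s-master-01.yourdomain.com k8s-worker-01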

Updating the Package List and Installing Essential Packages on a Debian-based System

sudo apt-get update
sudo apt-get install -y apt-transport-https ca-certificates curl

Disabling Swap Space on a Unix-like Operating System for Kubernetes Setup

# Turn off all swap devices immediately (the kubelet refuses to run with swap enabled by default)
sudo swapoff -a
# Comment out any swap entries in /etc/fstab so the change persists across reboots
sudo sed -i '/swap/s/^\(.*\)$/#\1/g' /etc/fstab
# Remove the swap file itself, if your system uses one at this default Ubuntu path
sudo rm /swap.img
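
To confirm no swap remains active, swapon --show should print nothing and free -h should report zero swap:

swapon --show
free -h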

Creating a Configuration File for Loading Necessary Kernel Modules for Kubernetes

In this step, we create a configuration file, containerd.conf, in the directory /etc/modules-load.d/. This file ensures that the overlay and br_netfilter kernel modules are loaded at boot time.

The overlay module is responsible for providing the overlay filesystem, which Docker and other container runtimes use for layering images.

On the other hand, br_netfilter helps with network packet filtering and is essential for certain network operations in Kubernetes. It ensures that packets traversing the bridge are processed by iptables for filtering and for NAT.

sudo tee /etc/modules-load.d/containerd.conf <<EOF
overlay
br_netfilter
EOF

Manually Loading the Necessary Kernel Modules for Kubernetes

sudo modprobe overlay
sudo modprobe br_netfilter
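
You can verify that both modules are now loaded with lsmod:

lsmod | grep -E 'overlay|br_netfilter'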

Configuring Kernel Parameters for Kubernetes with sysctl

Create a new sysctl configuration file named kubernetes.conf in the /etc/sysctl.d/ directory. This file is used to configure kernel parameters at runtime.

We are setting three specific parameters:

  1. net.bridge.bridge-nf-call-ip6tables = 1: This ensures that the packets traversing the network bridge are processed by ip6tables for filtering and NAT, which is essential for the IPv6 functionality of Kubernetes.

  2. net.bridge.bridge-nf-call-iptables = 1: Similarly, this setting makes sure that the packets traversing the network bridge are processed by iptables for filtering and NAT, which is critical for the IPv4 functionality of Kubernetes.

  3. net.ipv4.ip_forward = 1: This enables forwarding of IPv4 packets by the kernel, which is crucial for the pod-to-pod and pod-to-service communication in a Kubernetes cluster.

sudo tee /etc/sysctl.d/kubernetes.conf <<EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF

Applying System-Wide Kernel Parameter Configuration with sysctl

sudo sysctl --system
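
To confirm the settings took effect, query the three keys directly; each should report 1:

sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables net.ipv4.ip_forward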

Securely Installing containerd and Essential Packages on Ubuntu

This series of commands installs the containerd runtime and necessary dependencies on an Ubuntu system. The initial step downloads Docker’s official GPG key and stores it in the system’s trusted keyring, ensuring the authenticity of packages fetched from Docker’s repositories. After that, the Docker repository is added to the list of apt sources. We then update the apt package index and install the required packages, including curl, gnupg2, software-properties-common, apt-transport-https, ca-certificates, and containerd.io, a flexible container runtime.

curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /etc/apt/trusted.gpg.d/docker.gpg
sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
sudo apt update
sudo apt install -y curl gnupg2 software-properties-common apt-transport-https ca-certificates containerd.io

Configuring and Managing the containerd Service

This procedure lays out the steps to configure containerd, the container runtime our cluster nodes will use, and to keep it running reliably. We start by generating a default containerd configuration and storing it in the /etc/containerd/config.toml file. We then modify this configuration file with the sed command to set the SystemdCgroup parameter to true, enabling cgroup management through systemd, which integrates better with the host system. Lastly, we restart the containerd service to apply the new configuration and enable it to start automatically at system boot.

containerd config default | sudo tee /etc/containerd/config.toml >/dev/null 2>&1
sudo sed -i 's/SystemdCgroup = false/SystemdCgroup = true/g' /etc/containerd/config.toml
sudo systemctl restart containerd
sudo systemctl enable containerd
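
Before moving on, a quick check confirms the runtime is running and enabled at boot:

systemctl is-active containerd
systemctl is-enabled containerd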

Installing and Setting up Essential Kubernetes Components

Install essential Kubernetes components on our system. Firstly, we obtain the official Google Cloud package signing key using curl and add it to our system’s APT keyring, ensuring the integrity of the packages we download. We then add the Kubernetes APT repository to our system’s software sources, allowing us to install Kubernetes directly from the official project repositories. Upon updating our system’s package list, we proceed to install three crucial Kubernetes binaries: kubelet (the base node agent), kubeadm (for cluster management), and kubectl (the command-line tool for interacting with the cluster). Lastly, we use the apt-mark command to hold these installed packages back from being automatically upgraded, to maintain version compatibility and stability in our Kubernetes cluster setup.

curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
sudo apt-add-repository "deb http://apt.kubernetes.io/ kubernetes-xenial main"
sudo apt update
sudo apt install -y kubelet kubeadm kubectl
sudo apt-mark hold kubelet kubeadm kubectl
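
As a sanity check that the tooling is installed and pinned, all three binaries should report the same release, and apt-mark showhold should list them:

kubeadm version
kubectl version --client
kubelet --version
apt-mark showhold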

Initializing the Kubernetes Master Node and Configuring the Pod Network

To initiate the creation of our Kubernetes cluster, we pull all necessary images using kubeadm config images pull. This ensures all required container images for the control plane components are present locally on the system. Following this, we execute kubeadm init to start the Kubernetes master. The --control-plane-endpoint option specifies the shared endpoint for all control-plane nodes in our cluster, which is our DNS name k8s-master.yourdomain.com on port 6443. The --upload-certs flag is used to upload control-plane certificates to the cluster, which can be downloaded later for joining other control-plane nodes. Lastly, we specify the --pod-network-cidr as 172.16.0.0/16, which sets the range of IP addresses for the pod network. If the pod network plugin you are using does not support this feature, this flag can be omitted.

sudo kubeadm config images pull
sudo kubeadm init --control-plane-endpoint "k8s-master.yourdomain.com:6443" --upload-certs --pod-network-cidr=172.16.0.0/16

Post-Initialization Steps and Expanding Your Kubernetes Cluster

Following the successful initialization of the Kubernetes control plane, you will see a confirmation message providing further instructions on how to start using your newly created cluster. As a regular user, you need to create a new directory at $HOME/.kube and copy the admin configuration file admin.conf from /etc/kubernetes to this new directory. This file contains the information needed for kubectl to communicate with your cluster. You also need to adjust the permissions so that the current user owns the config file in the .kube directory.

If you are logged in as the root user, you can simply set the KUBECONFIG environment variable to point directly to the admin.conf file.

Next, a pod network needs to be deployed to the cluster. This can be done using kubectl apply -f [podnetwork].yaml, where [podnetwork] is replaced with the appropriate choice for your setup. You can find available options at the Kubernetes documentation under cluster administration addons.

To add additional control-plane nodes to the cluster, the kubeadm join command should be executed on those machines, with the specified parameters for server address, token, and certificates. Please note that the certificate key grants access to sensitive data within the cluster, hence it should be kept confidential. As an added security measure, the uploaded certificates will be automatically deleted after two hours. If necessary, these can be reloaded by using the kubeadm init phase upload-certs --upload-certs command.

Adding worker nodes to the cluster involves a similar kubeadm join command that is executed on each worker node.

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of the control-plane node running the following command on each as root:

  kubeadm join k8s-master.yourdomain.com:6443 --token 0sea67.k0epqz8cvdttteyd \
	--discovery-token-ca-cert-hash sha256:b143b6717c01c39adf82c767e86da0b698779d9fc3b21314160edd019def8a7e \
	--control-plane --certificate-key 199077bc433dc976d27223eabd1117db2ffb567c228a87d4e0b5afc813e3659f

Please note that the certificate-key gives access to cluster sensitive data, keep it secret!
As a safeguard, uploaded-certs will be deleted in two hours; If necessary, you can use
"kubeadm init phase upload-certs --upload-certs" to reload certs afterward.

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join k8s-master.yourdomain.com:6443 --token 0sea67.k0epqz8cvdttteyd \
	--discovery-token-ca-cert-hash sha256:b143b6717c01c39adf82c767e86da0b698779d9fc3b21314160edd019def8a7e 
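Once the kubeconfig is in place and the remaining nodes have joined, kubectl get nodes should list all six machines; they typically report NotReady until a pod network is installed in the next step:

kubectl get nodes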

Setting Up Network Communication in Your Kubernetes Cluster with Project Calico

After initializing your Kubernetes cluster, the next step is to establish a networking solution. In this case, we’re using Project Calico, an open-source networking and network security solution for containers. The setup involves deploying the Calico operator to your cluster. This operator is responsible for managing Calico’s custom resource definitions (CRDs) on the Kubernetes API server, allowing you to configure and manage Calico features.

Furthermore, it’s essential to customize your networking settings to match your environment. For this, you’ll need to download and edit the custom-resources.yaml file provided by Project Calico. In this file, you’ll find the network settings, where you can change the cidr field to match your network’s CIDR block (in this case, 172.16.0.0/16).

Finally, you’ll apply the custom-resources.yaml file to your cluster, which tells the Calico operator to install Calico with your customized networking settings. The operator ensures the right services and configurations are put in place within your cluster to enable network communication between your pods.

kubectl create -f https://raw.githubusercontent.com/projectcalico/calico/master/manifests/tigera-operator.yaml
curl https://raw.githubusercontent.com/projectcalico/calico/master/manifests/custom-resources.yaml -O
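# Assumption: the operator's default pod CIDR in custom-resources.yaml is
# 192.168.0.0/16; verify against the downloaded file, then align it with the
# --pod-network-cidr passed to kubeadm init
sed -i 's|192.168.0.0/16|172.16.0.0/16|g' custom-resources.yaml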
kubectl create -f custom-resources.yaml
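
Calico’s pods can take a few minutes to come up. As a sanity check (assuming the operator’s default calico-system namespace), watch them until everything reports Running:

watch kubectl get pods --namespace=calico-system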

Installing Helm in Your Kubernetes Cluster

Install Helm, a package manager for Kubernetes. The process begins with the wget command to download the Helm package (helm-v3.11.2-linux-amd64.tar.gz) from the official Helm GitHub repository. The downloaded tarball file is then extracted using the tar -zxvf command. Following extraction, the Helm binary is moved to the /usr/local/bin/ directory, making it accessible system-wide. Any leftover files from the extraction and the downloaded tarball itself are then removed to clean up.

wget https://get.helm.sh/helm-v3.11.2-linux-amd64.tar.gz
tar -zxvf helm-v3.11.2-linux-amd64.tar.gz
sudo mv linux-amd64/helm /usr/local/bin/
rm -fr linux-amd64 helm-v3.11.2-linux-amd64.tar.gz
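
If the binary is on your PATH, helm version reports the client version and confirms the install:

helm version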

Deploying the NGINX Ingress Service in a Kubernetes Cluster Using Helm

Deploy the NGINX Ingress service in the Kubernetes cluster. The helm upgrade --install command is used, which upgrades the chart if it is already installed and installs it otherwise. The chart being installed is ingress-nginx, fetched from the repository specified by --repo https://kubernetes.github.io/ingress-nginx. The installation is performed in the ingress-nginx namespace, which is created if it doesn’t already exist thanks to the --create-namespace flag.

helm upgrade --install ingress-nginx ingress-nginx \
--repo https://kubernetes.github.io/ingress-nginx \
--namespace ingress-nginx --create-namespace
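
Before tuning the controller, it is worth checking the Service the chart created. Note that on bare-metal clusters the EXTERNAL-IP of the LoadBalancer Service can stay in a pending state until a load-balancer implementation is available:

kubectl get svc --namespace=ingress-nginx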

Configuring the NGINX Ingress Controller for Forwarded Headers and User IP Logging in a Kubernetes Cluster

Create a YAML configuration file named ingress-nginx-controller.yaml for the NGINX Ingress controller in a Kubernetes cluster. This is accomplished using a cat command in conjunction with a ‘here document’ (<<EOF |tee ingress-nginx-controller.yaml). The configuration file is of kind: ConfigMap and is applied to the ingress-nginx-controller in the ingress-nginx namespace. The data section sets two parameters: allow-snippet-annotations and use-forwarded-headers, both of which are set to ’true’. The use-forwarded-headers: 'true' configuration is especially important for logging purposes, as it instructs the Ingress controller to trust the incoming X-Forwarded-For header, thus preserving the original user IP in the logs.

cat <<EOF |tee ingress-nginx-controller.yaml
kind: ConfigMap
apiVersion: v1
metadata:
  name: ingress-nginx-controller
  namespace: ingress-nginx
data:
  allow-snippet-annotations: 'true'
  use-forwarded-headers: 'true'
EOF

Applying Configuration Changes to the NGINX Ingress Controller in a Kubernetes Cluster

The command kubectl apply -f ingress-nginx-controller.yaml applies the configuration specified in the ingress-nginx-controller.yaml file to the Kubernetes cluster, creating or updating resources as defined in the YAML. In this case, it creates or updates the ConfigMap named ‘ingress-nginx-controller’ in the ‘ingress-nginx’ namespace, setting its parameters as specified above.

kubectl apply -f ingress-nginx-controller.yaml

Checking the Status of NGINX Ingress Pods in a Kubernetes Cluster

The command kubectl get pods --namespace=ingress-nginx lists all the pods running in the ‘ingress-nginx’ namespace of the Kubernetes cluster. By specifying the namespace, you narrow the scope to only the pods relevant to the NGINX Ingress service. The state of these pods provides valuable insight into the health and operational status of the NGINX Ingress service within your cluster.

kubectl get pods --namespace=ingress-nginx

Successfully Deploying Your First Kubernetes Cluster: What’s Next?

Congratulations, you’ve successfully deployed your Kubernetes (k8s) cluster! This accomplishment is a significant step forward in your journey into the world of Kubernetes, an open-source platform designed to automate deploying, scaling, and operating application containers.

With your new cluster, you now have a powerful tool for managing your applications, whether they’re based on microservices or legacy systems. Kubernetes not only simplifies deployment processes but also helps in managing the complexity of running and scaling distributed systems.

Remember, while deploying the cluster is an achievement on its own, your journey doesn’t stop here. Kubernetes is a vast ecosystem, filled with various resources, tools, and extensions you can leverage to make the most out of your cluster.

From here, consider delving into topics like Kubernetes Operators, Helm Charts, and Kubernetes Service Meshes. Take time to learn about the security best practices to protect your cluster, and don’t forget about monitoring and logging to keep your applications healthy and to troubleshoot any issues that may arise.

As you continue to learn and experiment, you’ll find Kubernetes to be a versatile, powerful platform that can truly revolutionize the way you handle your workloads.