K3s Cluster
This section provides guidance on setting up a high-availability K3s Kubernetes cluster (version v1.27.2+k3s1) with NGINX as the ingress controller, adhering to industry best practices for installation.
Host and Node Descriptions in a Kubernetes Cluster
The table below lists the roles and IP addresses of the hosts in the Kubernetes cluster: three master nodes and three worker nodes, each with its respective IP. The master nodes (master-01, master-02, master-03) run the control plane of the cluster, while the worker nodes (worker-01, worker-02, worker-03) host the running applications. The ‘Host’ column is the name of the machine, ‘IP’ is its internal IP address, ‘Use For’ describes its role in the cluster, and ‘Node’ is the internal name assigned within the Kubernetes system.
Host | IP | Use For | Node |
---|---|---|---|
master-01 | 192.168.0.248 | First master node | node1 |
master-02 | 192.168.0.249 | Second master node | node2 |
master-03 | 192.168.0.250 | Third master node | node3 |
worker-01 | 192.168.0.251 | Worker node | node4 |
worker-02 | 192.168.0.252 | Worker node | node5 |
worker-03 | 192.168.0.253 | Worker node | node6 |
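If you prefer to reach the machines by name rather than IP, you can optionally add them to /etc/hosts on each host. This snippet is not part of the original setup; it simply mirrors the table above, so adjust it if your hostnames or IPs differ.
cat <<EOF | sudo tee -a /etc/hosts
192.168.0.248 master-01
192.168.0.249 master-02
192.168.0.250 master-03
192.168.0.251 worker-01
192.168.0.252 worker-02
192.168.0.253 worker-03
EOF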
Generating Secure Custom Tokens Using Python’s Secrets Module
In this Python script, we generate a custom token using the secrets module, which is commonly used for generating cryptographically strong random numbers suitable for managing data such as passwords, account authentication, and security tokens. The secrets.token_hex(16) function generates a secure random hexadecimal string that is 32 characters long (16 bytes), for use as a secure token. The example output 7082b0069b40d973c7a783a13400b3e8 illustrates the format of the generated token.
import secrets
token = secrets.token_hex(16)
print(token)
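If you would rather generate the token directly on the command line, the following one-liner produces an equivalent 32-character hex string. It is not part of the original instructions and assumes the openssl CLI is available on the host.
openssl rand -hex 16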
Initializing a K3s Kubernetes Cluster on the First Master Node Using a Bash Command, master-01
In this Bash command, we’re executing an operation on the first master node, labeled ‘master-01’. The command uses curl to fetch the K3s installation script from https://get.k3s.io. The -sfL options tell curl to run silently without a progress meter (-s), fail without printing the server’s error page if the request fails (-f), and follow redirects if the server reports that the requested page has moved (-L). Once fetched, the script is executed using sh. The INSTALL_K3S_VERSION=v1.27.2+k3s1 environment variable ensures that the specified version of K3s is installed. The sh -s server --cluster-init part initializes this server as the first one in the cluster. The --token flag is followed by a specific token, “7082b0069b40d973c7a783a13400b3e8”, which is used for node authentication in the cluster. Finally, the --disable traefik option disables the installation of the Traefik ingress controller, since we plan to use another ingress controller, NGINX.
curl -sfL https://get.k3s.io | INSTALL_K3S_VERSION=v1.27.2+k3s1 sh -s server --cluster-init --token "7082b0069b40d973c7a783a13400b3e8" --disable traefik
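Before moving on, it is worth confirming that the service started and the node registered itself. These are suggested sanity checks rather than part of the original steps; both use tooling installed by the script (systemd and the kubectl embedded in the k3s binary).
sudo systemctl status k3s
sudo k3s kubectl get nodes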
Checking Node Status and Kube-System Namespace Pods in a Kubernetes Cluster
Check the status of the nodes and the pods within the kube-system namespace in the Kubernetes cluster. The command kubectl get nodes lists all nodes in the cluster, including the workers that will host our applications. The kubectl get pod -n kube-system command, on the other hand, fetches details of all the pods running in the kube-system namespace, which is a namespace automatically created by Kubernetes to host pods for the Kubernetes system itself (such as the API server, scheduler, and so on).
kubectl get nodes
kubectl get pod -n kube-system
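For a more detailed view, the wide output format also shows each node’s internal IP and Kubernetes version, which should line up with the host table at the top of this section. This is an optional check, not part of the original steps.
kubectl get nodes -o wide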
Joining Additional Master Nodes to a K3s Kubernetes Cluster, master-02 and master-03
Join the second and third master nodes, labeled ‘master-02’ and ‘master-03’, to the K3s Kubernetes cluster. It does so by fetching the K3s installation script from https://get.k3s.io using curl, similar to the first master node setup. However, this time the K3S_TOKEN environment variable is used to authenticate to the cluster. The sh -s server --server https://192.168.0.248:6443 part informs the node to join the cluster as a server (master) node and specifies the primary server’s address. The --disable traefik option, like before, disables the installation of the Traefik ingress controller.
curl -sfL https://get.k3s.io | INSTALL_K3S_VERSION=v1.27.2+k3s1 K3S_TOKEN="7082b0069b40d973c7a783a13400b3e8" sh -s server --server https://192.168.0.248:6443 --disable traefik
Adding Worker Nodes to a K3s Kubernetes Cluster, worker-01, worker-02, and worker-03
In this Bash command, we’re executing an operation on each worker node (labeled ‘worker-01’, ‘worker-02’, and ‘worker-03’) to join them to the K3s Kubernetes cluster. This is done by fetching the K3s installation script from https://get.k3s.io using curl and passing the specified K3s version (INSTALL_K3S_VERSION=v1.27.2+k3s1), token (K3S_TOKEN="7082b0069b40d973c7a783a13400b3e8"), and server URL (K3S_URL="https://192.168.0.248:6443") as environment variables to the script. The script is then executed with sh -, initiating the process of adding the worker nodes to the cluster.
curl -sfL https://get.k3s.io | INSTALL_K3S_VERSION=v1.27.2+k3s1 K3S_TOKEN="7082b0069b40d973c7a783a13400b3e8" K3S_URL="https://192.168.0.248:6443" sh -
Labeling Worker Nodes in a Kubernetes Cluster
Label the worker nodes in the Kubernetes cluster. The kubectl label nodes command is followed by the names of the worker nodes (‘worker-01’, ‘worker-02’, ‘worker-03’) and the label kubernetes.io/role=worker. This label assigns the role of ‘worker’ to the nodes, aiding in the organization and management of the nodes within the Kubernetes cluster, especially when scheduling Pods or implementing policies.
kubectl label nodes worker-01 worker-02 worker-03 kubernetes.io/role=worker
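To confirm that the label was applied, you can list the nodes together with their labels. This is an optional check, not part of the original steps.
kubectl get nodes --show-labels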
Applying a NoSchedule Taint to Master Nodes in a Kubernetes Cluster
Set a taint on the master nodes of the Kubernetes cluster. The kubectl taint nodes command is followed by the names of the master nodes (‘master-01’, ‘master-02’, ‘master-03’) and the taint node-role.kubernetes.io/control-plane=:NoSchedule. This taint prevents any new pods from being scheduled on these nodes unless they have a matching toleration. It’s often used in multi-node Kubernetes setups to ensure that workload pods aren’t scheduled on the master nodes, reserving them for system pods.
kubectl taint nodes master-01 master-02 master-03 node-role.kubernetes.io/control-plane=:NoSchedule
Verifying NoSchedule Taint on Master Nodes in a Kubernetes Cluster
Verify that the NoSchedule taint has been successfully applied to all master nodes in the Kubernetes cluster. The kubectl describe nodes command fetches detailed information about all nodes, and the egrep "Taints:|Name:" part filters the output to display only the lines containing ‘Taints:’ or ‘Name:’, making it easier to see the node names and their associated taints.
kubectl describe nodes | egrep "Taints:|Name:"
Removing Cluster Join Settings on the First Master Node in a K3s Cluster
Modify the K3s service file on the first master node to remove its cluster join settings. Specifically, it uses the sed command to edit the file /etc/systemd/system/k3s.service in place (-i). The -e '/server \\/,$d' expression deletes everything from the line matching ‘server \’ to the end of the file (the multi-line argument list written by the installer), while the -e 's@ExecStart=.*@ExecStart=/usr/local/bin/k3s server@' expression replaces the ExecStart line with ExecStart=/usr/local/bin/k3s server, effectively resetting the K3s server’s start command to its default.
sed -e '/server \\/,$d' -e 's@ExecStart=.*@ExecStart=/usr/local/bin/k3s server@' -i /etc/systemd/system/k3s.service
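For orientation, the ExecStart section written by the K3s installer typically looks something like the sketch below; the exact flags depend on how the node was installed, so treat it as illustrative rather than a literal copy of your file.
# Before (roughly, as written by the installer):
# ExecStart=/usr/local/bin/k3s \
#     server \
#         '--cluster-init' \
#         '--token' \
#         '7082b0069b40d973c7a783a13400b3e8' \
#         '--disable' \
#         'traefik' \
#
# After the sed command:
# ExecStart=/usr/local/bin/k3s server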
Updating and Restarting the K3s Service in a Kubernetes Cluster
Update and restart the K3s service in a Linux environment. The systemctl daemon-reload command reloads the systemd manager configuration, taking into account any changes made to systemd unit files (like the modification we made to /etc/systemd/system/k3s.service in the previous step). Following this, the systemctl restart k3s command is used to restart the K3s service, ensuring the changes take effect.
systemctl daemon-reload
systemctl restart k3s
Installing Helm in a K3s Kubernetes Cluster
Install Helm, a package manager for Kubernetes. The process begins with the wget command, which downloads the Helm package (helm-v3.11.2-linux-amd64.tar.gz) from the official Helm download site. The downloaded tarball is then extracted using the tar -zxvf command. Following extraction, the Helm binary is moved to the /usr/local/bin/ directory, making it accessible system-wide. Any leftover files from the extraction and the downloaded tarball itself are then removed to clean up. The final command, cp /etc/rancher/k3s/k3s.yaml .kube/config, copies the K3s configuration file to the default location that kubectl (the Kubernetes command-line tool) looks for configuration information (this assumes the current directory is your home directory and that the .kube directory already exists), ensuring that kubectl, and Helm along with it, can interact with the K3s cluster.
wget https://get.helm.sh/helm-v3.11.2-linux-amd64.tar.gz
tar -zxvf helm-v3.11.2-linux-amd64.tar.gz
mv linux-amd64/helm /usr/local/bin/
rm -fr linux-amd64 helm-v3.11.2-linux-amd64.tar.gz
cp /etc/rancher/k3s/k3s.yaml .kube/config
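A quick way to confirm the installation succeeded is to print the Helm client version. This is an optional check, not part of the original steps.
helm version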
Deploying the NGINX Ingress Service in a Kubernetes Cluster Using Helm
Deploy the NGINX Ingress service in the Kubernetes cluster. The helm upgrade --install command is used, which upgrades the chart if it is already installed and installs it otherwise. The chart being installed is ingress-nginx, which is fetched from the repository specified by --repo https://kubernetes.github.io/ingress-nginx. The installation is performed in the ingress-nginx namespace, which is created if it doesn’t already exist thanks to the --create-namespace flag.
helm upgrade --install ingress-nginx ingress-nginx \
--repo https://kubernetes.github.io/ingress-nginx \
--namespace ingress-nginx --create-namespace
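Once the chart is installed, it can be useful to see how the controller is exposed, for example whether its Service received a LoadBalancer address or a NodePort. This is an optional check, not part of the original steps.
kubectl get svc --namespace=ingress-nginx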
Configuring the NGINX Ingress Controller for Forwarded Headers and User IP Logging in a Kubernetes Cluster
Create a YAML configuration file named ingress-nginx-controller.yaml for the NGINX Ingress controller in the Kubernetes cluster. This is accomplished using a cat command in conjunction with a ‘here document’ (<<EOF |tee ingress-nginx-controller.yaml). The configuration file is of kind: ConfigMap and is applied to the ingress-nginx-controller in the ingress-nginx namespace. The data section sets two parameters, allow-snippet-annotations and use-forwarded-headers, both of which are set to ‘true’. The use-forwarded-headers: 'true' setting is especially important for logging purposes, as it instructs the Ingress controller to trust the incoming X-Forwarded-For header, thus preserving the original user IP in the logs.
cat <<EOF |tee ingress-nginx-controller.yaml
kind: ConfigMap
apiVersion: v1
metadata:
  name: ingress-nginx-controller
  namespace: ingress-nginx
data:
  allow-snippet-annotations: 'true'
  use-forwarded-headers: 'true'
EOF
Applying Configuration Changes to the NGINX Ingress Controller in a Kubernetes Cluster
The command kubectl apply -f ingress-nginx-controller.yaml is used to apply the configuration specified in the ingress-nginx-controller.yaml file to the Kubernetes cluster. This command invokes the Kubernetes command-line tool, kubectl, to apply the configuration, creating or updating resources as defined in the YAML file. In this case, the command will create or update the ConfigMap named ‘ingress-nginx-controller’ in the ‘ingress-nginx’ namespace, setting or changing its parameters as specified in the YAML file.
kubectl apply -f ingress-nginx-controller.yaml
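The NGINX Ingress controller normally picks up ConfigMap changes on its own. If the new settings do not appear to take effect, restarting the controller pods forces a reload; this is an optional step and assumes the default deployment name used by the Helm chart.
kubectl rollout restart deployment ingress-nginx-controller --namespace=ingress-nginx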
Checking the Status of NGINX Ingress Pods in a Kubernetes Cluster
The command kubectl get pods --namespace=ingress-nginx is used to list all the pods running in the ‘ingress-nginx’ namespace of the Kubernetes cluster. By specifying the namespace, you can narrow the scope to only the pods relevant to the NGINX Ingress service. The state of these pods can provide valuable insight into the health and operational status of the NGINX Ingress service within your Kubernetes cluster.
kubectl get pods --namespace=ingress-nginx
Three Basic Methods for Authorizing Kubectl to Access a Kubernetes Cluster
Three fundamental methods for authorizing the Kubernetes command-line tool, kubectl, to access a Kubernetes cluster:
1. export KUBECONFIG=/path/of/config_file - This command sets the KUBECONFIG environment variable to the path of the configuration file; kubectl will use this file for its configuration settings.
2. kubectl --kubeconfig=/path/of/config_file <command> - Passing the --kubeconfig flag instructs kubectl to use a specific kubeconfig file for that invocation.
3. Copy the file to $HOME/.kube/config - This method involves copying the configuration file to the default location ($HOME/.kube/config) that kubectl checks for its configuration settings. If no KUBECONFIG environment variable is set, kubectl will use the file at this location, as shown in the commands below.
mkdir -p $HOME/.kube
sudo cp /etc/rancher/k3s/k3s.yaml $HOME/.kube/config
sudo chown user:group $HOME/.kube/config
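Whichever method you choose, a quick node listing confirms that kubectl can reach the cluster with the selected configuration.
kubectl get nodes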