I will walk you through the k3s installation and configuration steps in this article.
- A video tutorial accompanies this article; continue reading for the written instructions.
Exercises to complete:
- Remove k3s from worker node
- Properly remove a worker node on the master node
- Install k3s on worker node
- Label a node
- Recover deleted worker nodes on master node
- Drain a node and uncordon it
- Check node log
Introduction
I decided to install k3s on a Raspberry Pi CM4 IO Board with a Compute Module 4, and also on two Raspberry Pi 4Bs with 8 GB of RAM.
Remove k3s from worker node
Log into the worker node via SSH, switch to root, and remove k3s from the worker node.
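Assuming a default k3s agent installation, k3s ships its own uninstall script:

```shell
# Remove the k3s agent, its systemd service, and its local data
# from the worker node (installed by the standard k3s agent setup)
/usr/local/bin/k3s-agent-uninstall.sh
```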
Properly remove a worker node on the master node
Log into the master node and check the existing nodes:
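The standard command for this is:

```shell
# List all nodes currently registered with the cluster
kubectl get nodes
```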
Safely remove pods from the worker
You can use kubectl drain to safely evict all of your pods from a node before you perform maintenance on the node (e.g. kernel upgrade, hardware maintenance, etc.). Safe evictions allow the pod’s containers to gracefully terminate and will respect the PodDisruptionBudgets you have specified.
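A typical drain invocation looks like this (worker1 is a placeholder node name):

```shell
# Evict all pods from the node; DaemonSet pods cannot be evicted,
# so skip them, and allow pods using emptyDir volumes to be deleted
kubectl drain worker1 --ignore-daemonsets --delete-emptydir-data
```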
Delete nodes
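With the node drained, remove it from the cluster (worker1 is a placeholder):

```shell
# Delete the node object on the master node
kubectl delete node worker1
```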
Check that the worker node no longer exists
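The deleted node should no longer appear in the output of:

```shell
kubectl get nodes
```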
Install k3s on worker node
Get the token from the master node
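On a default k3s server install, the join token is stored at a fixed path:

```shell
# Run on the master node; the output is the K3S_TOKEN value used below
sudo cat /var/lib/rancher/k3s/server/node-token
```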
Get the IP address of the master node
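One simple way is to run this on the master node itself:

```shell
# Print the IP addresses assigned to this host
hostname -I
```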
Log into the worker node via SSH and run the command below:
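A sketch of the agent install, assuming 192.168.1.10 stands in for your master's IP; substitute your own address and the token from the previous step:

```shell
# Install k3s in agent mode and join it to the existing cluster
curl -sfL https://get.k3s.io | K3S_URL=https://192.168.1.10:6443 K3S_TOKEN=<token-from-master> sh -
```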
Add cgroup entries to cmdline.txt
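On Raspberry Pi OS the kernel command line is typically at /boot/cmdline.txt (newer releases move it to /boot/firmware/cmdline.txt):

```shell
# Open the kernel command line for editing
sudo nano /boot/cmdline.txt
```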
Append the following entries to the end of the line that starts with console=:
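These are the cgroup options k3s requires on Raspberry Pi OS; everything must stay on that same single line:

```
cgroup_memory=1 cgroup_enable=memory
```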
Save the file and exit.
Reboot the server
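For example:

```shell
sudo reboot
```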
Check worker node status on the master node
Check node status
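On the master node, the freshly joined worker should report Ready:

```shell
# -o wide also shows each node's IP, OS image, and container runtime
kubectl get nodes -o wide
```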
Label a node
How to label a node
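A sketch with placeholder names (worker1 and the node-type key are examples):

```shell
# Attach a key=value label to the node
kubectl label nodes worker1 node-type=worker
```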
How to change a label
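Changing the value of an existing key requires --overwrite (names are again placeholders):

```shell
# Replace the current value of the node-type label
kubectl label --overwrite nodes worker1 node-type=storage
```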
Recover deleted worker nodes on the master node
If someone drained and then deleted nodes on the master node, log into the worker node via SSH and restart the k3s agent service with the command below.
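Assuming the default k3s agent service name:

```shell
# On the worker node: restarting the agent re-registers the node
# with the master
sudo systemctl restart k3s-agent
```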
Drain a node and uncordon it
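A typical maintenance cycle (worker1 is a placeholder node name):

```shell
# Cordon the node and evict its pods
kubectl drain worker1 --ignore-daemonsets --delete-emptydir-data

# ...perform the maintenance, then make the node schedulable again
kubectl uncordon worker1
```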
Check node logs
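k3s logs to the systemd journal; assuming the default unit names, use k3s on the master and k3s-agent on a worker:

```shell
# Follow the agent logs on a worker node
sudo journalctl -u k3s-agent -f
```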