How to uninstall k3s and install it on worker nodes


In this article, I will walk you through removing k3s from a worker node, reinstalling it, and performing basic node maintenance.

This article is also available as a video tutorial; continue reading for the written instructions.

Exercises to complete:

  1. Remove k3s from a worker node
  2. Properly remove the worker node on the master node
  3. Install k3s on the worker node
  4. Label a node
  5. Recover deleted worker nodes on the master node
  6. Drain a node and uncordon it
  7. Check the node log
Introduction

I decided to install k3s on a Raspberry Pi CM4 IO board with a Compute Module 4 and also on two Raspberry Pi 4B boards with 8 GB of RAM.

Remove k3s from a worker node

Log into the worker node via SSH, switch to root, and remove k3s:

sudo -i
cd /usr/local/bin
./k3s-killall.sh          # stop all k3s processes and clean up running containers
./k3s-agent-uninstall.sh  # remove the k3s agent and its data from the node
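
To confirm the cleanup worked, you can check that the agent service is gone; this is just a sanity check, and the exact messages depend on your distribution:

systemctl status k3s-agent   # should report that the unit could not be found
ls /usr/local/bin            # the k3s binary and scripts should no longer be listed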
Properly remove the worker node on the master node

Log into the master node and type:

kubectl get nodes

to list the existing nodes.

Safely remove pods from the worker

You can use kubectl drain to safely evict all of your pods from a node before you perform maintenance on the node (e.g. kernel upgrade, hardware maintenance, etc.). Safe evictions allow the pod’s containers to gracefully terminate and will respect the PodDisruptionBudgets you have specified.

kubectl drain worker
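
In practice, drain often refuses to evict DaemonSet-managed pods or pods that use emptyDir volumes. If that happens to you, the variant below may help; apply these flags deliberately, since --delete-emptydir-data discards the affected pods' local data:

kubectl drain worker --ignore-daemonsets --delete-emptydir-data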

Delete the node

kubectl delete node worker

Check that the worker node no longer exists:

kubectl get nodes
Install k3s on the worker node

Get the token from the master node:

sudo cat /var/lib/rancher/k3s/server/node-token
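
If you would rather not copy the token by hand, a small sketch like the one below captures it into a shell variable; it assumes you run it on the worker, that the master is reachable as pi@master (a hypothetical user and hostname), and that sudo works over SSH:

TOKEN=$(ssh pi@master sudo cat /var/lib/rancher/k3s/server/node-token)
echo "$TOKEN"   # should print a string starting with K10...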

Get the IP address of the master node

hostname -I
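
Keep in mind that hostname -I prints every address assigned to the host. If there are several, pick the one on the network your workers share; the one-liner below simply assumes the first entry is the right one:

hostname -I | awk '{print $1}'   # assumes the first address is on the cluster network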

Log into the worker node via SSH and run the command below, substituting your own master IP address and token:

curl -sfL https://get.k3s.io | K3S_URL=https://10.10.0.110:6443 K3S_TOKEN=K1035b82... sh -
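
The installer sets up and starts a k3s-agent systemd service on the worker; it is worth verifying that it came up before continuing:

sudo systemctl status k3s-agent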

Add cgroup entries to /boot/cmdline.txt

sudo vim /boot/cmdline.txt

Append the entries below to the end of the line that starts with console=:

cgroup_memory=1 cgroup_enable=memory
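
Note that cmdline.txt must stay on a single line. For illustration only, on a stock Raspberry Pi OS image the finished line might look roughly like this (the PARTUUID is a placeholder and your console settings may differ):

console=serial0,115200 console=tty1 root=PARTUUID=xxxxxxxx-02 rootfstype=ext4 fsck.repair=yes rootwait cgroup_memory=1 cgroup_enable=memory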

Save the file and exit.

Reboot the server

sudo reboot

Check the worker node status on the master node

Check the node status:

kubectl get nodes
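
If everything went well, the output should look something like the illustrative example below; the names, ages, and versions are made up and will differ on your cluster. A freshly joined worker shows <none> in the ROLES column, which the next section addresses:

NAME     STATUS   ROLES                  AGE   VERSION
master   Ready    control-plane,master   20d   v1.25.4+k3s1
worker   Ready    <none>                 2m    v1.25.4+k3s1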
Label a node

How to label a node

kubectl label nodes worker kubernetes.io/role=worker

How to change a label

kubectl label nodes worker kubernetes.io/role=worker-1 --overwrite
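
To verify the result, or to remove the label again, the standard kubectl syntax below works; the trailing minus sign deletes a label:

kubectl get nodes --show-labels                  # list nodes with their labels
kubectl label nodes worker kubernetes.io/role-   # remove the label (note the trailing "-")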
Recover deleted worker nodes on the master node

If someone drained and then deleted a worker node on the master node, log into that worker node via SSH and restart the k3s agent service with the command below; the node will re-register with the master:

sudo systemctl restart k3s-agent.service
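
If the node does not reappear in kubectl get nodes within a minute or so, following the agent's journal on the worker usually shows what is wrong; this is plain systemd tooling, nothing k3s-specific:

sudo journalctl -u k3s-agent.service -f   # follow the k3s agent logs live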
Drain a node and uncordon it
kubectl drain worker     # evict the pods and mark the node unschedulable
kubectl get nodes        # the node now shows Ready,SchedulingDisabled
kubectl uncordon worker  # make the node schedulable again
kubectl get nodes        # the node is back to Ready
Check the node log
kubectl describe node worker
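
Note that kubectl describe mainly surfaces the node's conditions and recent events. For the k3s service logs themselves, log into the node and read the systemd journal:

sudo journalctl -u k3s-agent.service   # on a worker node
sudo journalctl -u k3s.service         # on the master node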