
Self-Hosted Kubernetes with k3s

Kubernetes has been at the center of attention in the DevOps community for a few years now, yet many people find it too daunting to try out for personal use. Running a single-node Kubernetes cluster for self-hosting is, however, not that complicated. In this article, I will show you how to set one up using k3s, a lightweight Kubernetes distribution.

Preparing the server

Before launching anything Kube-related, you might want to make sure that access to your server is secured. Follow this guide for instructions on how to disable root login and enable public-key authentication if you haven’t done so already.
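
For reference, the relevant directives in /etc/ssh/sshd_config usually end up looking like the excerpt below (a sketch, not a full hardening guide); restart the SSH daemon afterwards with sudo service ssh restart to apply the changes.

# /etc/ssh/sshd_config (excerpt)
PermitRootLogin no
PasswordAuthentication no
PubkeyAuthentication yes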

I have encountered issues running k3s with the newer iptables backend (nftables), which is enabled by default on many distributions (like Debian 10). You can check whether you’re using nftables with sudo iptables --version. If so, switch back to the legacy iptables by executing:

sudo update-alternatives --set iptables /usr/sbin/iptables-legacy 
sudo reboot
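
After the reboot, the version string should mention legacy rather than nf_tables (output illustrative, your version number will differ):

sudo iptables --version
#iptables v1.8.2 (legacy)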

It’s also a good idea to set up basic firewall rules to filter incoming traffic. I apply these rules to the eth0 interface, which handles my server’s internet connection (find yours using ip a):

sudo iptables -A INPUT -i eth0 -p tcp --dport 22 -j ACCEPT # Allow SSH
sudo iptables -A INPUT -i eth0 -p tcp --dport 6443 -j ACCEPT # Allow kube API
sudo iptables -A INPUT -i eth0 -p tcp --dport 10250 -j ACCEPT # Allow kube metrics
sudo iptables -A INPUT -i eth0 -p icmp --icmp-type 8 -j ACCEPT # Allow ping
sudo iptables -A INPUT -i eth0 -m state --state ESTABLISHED,RELATED -j ACCEPT # Allow responses to our requests
sudo iptables -A INPUT -i eth0 -j DROP # Drop all other incoming packets
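
You can review the resulting rules and their order at any time; the -v flag also shows packet counters, which helps confirm that a rule is actually matching traffic:

sudo iptables -L INPUT -v --line-numbers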

Don’t worry about opening ports for the specific applications you want to run in your cluster; k3s will handle that directly. To persist your iptables rules across reboots, I use the iptables-persistent package.

sudo apt install iptables-persistent
sudo iptables-save | sudo tee -a /etc/iptables/rules.v4

[Image: Too many servers. Illustration by Taylor Vick]

Installing k3s

The shell script located at https://get.k3s.io installs k3s automatically. You can execute it directly by running:

curl -sfL https://get.k3s.io | sh -
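
If you would rather not pipe a remote script straight into your shell, you can download it first and read it before running it (the filename is arbitrary):

curl -sfL https://get.k3s.io -o install_k3s.sh
less install_k3s.sh # review what it will do
sh install_k3s.sh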

You can check that the service is running with:

sudo service k3s status

You can already run basic kubectl commands from your server’s shell. Check that everything is up and running with the cluster-info command:

sudo k3s kubectl cluster-info
#Kubernetes master is running at https://127.0.0.1:6443
#CoreDNS is running at https://127.0.0.1:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
#Metrics-server is running at https://127.0.0.1:6443/api/v1/namespaces/kube-system/services/https:metrics-server:/proxy
#To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
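
You can also verify that your single node has registered and is Ready (output illustrative; the node name, role and version depend on your setup and k3s release):

sudo k3s kubectl get nodes
#NAME        STATUS   ROLES    AGE   VERSION
#my-server   Ready    master   2m    v1.17.4+k3s1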

Using kubectl and helm

Now you will probably want to use kubectl to control your cluster without being logged in to your server. You will find instructions on how to install kubectl on your local machine here. Then copy the configuration used by k3s, located at /etc/rancher/k3s/k3s.yaml on the server, to ~/.kube/config on your local machine. Alternatively, you can pass the config to kubectl with the --kubeconfig option. You will need to change the server address in this configuration from 127.0.0.1 to your server’s external IP address.
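
Here is a minimal sketch of that step, assuming your SSH user is called user, your server’s external IP is 203.0.113.10 (both placeholders to replace with your own values) and your user can run sudo without a password prompt; the sudo is needed because k3s.yaml is only readable by root.

# copy the kubeconfig from the server to your local machine
ssh user@203.0.113.10 "sudo cat /etc/rancher/k3s/k3s.yaml" > ~/.kube/config
# point it at the server's external address instead of localhost (GNU sed)
sed -i 's/127.0.0.1/203.0.113.10/' ~/.kube/config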

Now run the kubectl cluster-info command locally and you should see results similar to what you had on your server. Here are a few useful kubectl commands to keep in mind (see the example after the list):

  • kubectl get pods -> list the running pods
  • kubectl top nodes -> show resource usage on your nodes
  • kubectl logs [pod_name] -> show the stdout of a pod
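
For instance, listing the pods of a fresh k3s install across all namespaces looks roughly like this (output illustrative; the exact pods and name suffixes depend on your k3s version):

kubectl get pods --all-namespaces
#NAMESPACE     NAME                              READY   STATUS    RESTARTS   AGE
#kube-system   coredns-6c6bb68b64-xxxxx          1/1     Running   0          5m
#kube-system   metrics-server-7566d596c8-xxxxx   1/1     Running   0          5m
#kube-system   traefik-758cd5fc85-xxxxx          1/1     Running   0          4m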

To make installing software on your cluster easier, helm will act as a package manager. Install helm on your local machine by following these instructions, or just run this if you’re on Linux and trust their install script:

curl https://raw.githubusercontent.com/helm/helm/master/scripts/get-helm-3 | bash
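
Once it is installed, check that you are indeed running Helm 3, since the commands below use its syntax (output illustrative):

helm version --short
#v3.1.2+gd878d4d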

Helm’s packages are called charts: sets of templates defining how to run specific software in Kubernetes. You can now start browsing the available charts in the Helm Hub or with the built-in command helm search hub [name]. For example, say you want to install The Lounge (a very good self-hosted IRC client/bouncer); you will first need to add the repo that contains its chart:

helm repo add halkeye https://halkeye.github.io/helm-charts/
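
Before installing anything, you can refresh your local chart index and inspect the options the chart exposes (using the repo alias added above):

helm repo update
helm show values halkeye/thelounge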

Running helm install will then be enough to set up The Lounge. Since you might want to access it through a specific domain, we will also ask helm to set up an ingress for it by passing additional options to the chart. This gives us the full command:

helm install thelounge halkeye/thelounge --version 4.0.6 --set ingress.enabled=true --set ingress.hosts.0="my.domain.com"
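
Once the release is deployed, you can check that the ingress was created and answers for your domain (output illustrative; the resource name depends on the chart):

kubectl get ingress
#NAME        HOSTS           ADDRESS         PORTS   AGE
#thelounge   my.domain.com   203.0.113.10    80      1m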

Your cluster should now be ready for any software you want to run. In a future article, I will explain how you can make better use of helm to keep track of the applications you host.