• Posted on

    Managing a Kubernetes cluster with Helm and FluxCD

    As seen in my previous article, after successfully setting up a Kubernetes cluster, you can install applications on it using the Helm CLI. This is convenient to get started, but it might not scale well when multiple people administer the cluster.

    To make collaboration easier and to avoid mistakes caused by simply mistyping Helm commands, it is recommended to adopt a GitOps workflow: using Git to version-control the state of the cluster. In this article, I will explain how to use FluxCD to achieve this.

    An actual helm
    Illustration by Loik Marras

    Getting started with FluxCD

    First of all, make sure you have a running Kubernetes cluster and have installed kubectl and helm. Then you will need to create a Git repository at your favorite provider (GitHub, Bitbucket, etc.). Once this is done, clone it locally and create a releases folder inside of it. This folder will contain the files defining your Helm releases (a release is an instance of a chart running on a cluster).

    Let’s start by creating a release for my favorite IRC client, The Lounge. To do this, create a file at releases/thelounge.yaml with the following content:

    ---
    apiVersion: helm.fluxcd.io/v1
    kind: HelmRelease
    metadata:
      name: thelounge
      namespace: default
      annotations:
        fluxcd.io/automated: "false"
        fluxcd.io/tag.chart-image: semver:~4.0
    spec:
      releaseName: thelounge
      chart:
        repository: https://halkeye.github.io/helm-charts/
        name: thelounge
        version: 4.3.0
      values:
        ingress:
          enabled: true
          hosts:
            - YOUR.OWN.DOMAIN
    

    This release uses the chart available on Artifact Hub; the software will run in the default namespace, and we ask for an ingress to be created. Don’t forget to commit and push the release!

    After creating this repository, you should install FluxCD on your cluster. To do this, run the following commands:

    helm repo add fluxcd https://charts.fluxcd.io
    
    kubectl apply -f https://raw.githubusercontent.com/fluxcd/helm-operator/master/deploy/crds.yaml
    
    kubectl create ns flux
    
    helm upgrade -i flux fluxcd/flux \
       --set git.url=git@github.com:[USERNAME]/[REPO] \
       --set git.branch=[BRANCH] \
       --namespace flux
    
    helm upgrade -i helm-operator fluxcd/helm-operator \
      --set git.ssh.secretName=flux-git-deploy \
      --set helm.versions=v3 \
      --namespace flux
    

    You should now see 3 pods running in the flux namespace:

    flux-memcached-869757cb88-77br8 1/1 Running 0 100s
    flux-59cb4447b9-thpr9 1/1 Running 0 101s
    helm-operator-686dc669cd-g8kn7 1/1 Running 0 89s
    

    Finally, you will need to give FluxCD read and write access to your Git repository. To do this, install fluxctl and run:

    fluxctl identity --k8s-fwd-ns flux
    

    This will print an RSA key that needs to be added as a deploy key to your repository. For GitHub users, go to https://github.com/YOURUSERNAME/YOURREPO/settings/keys/new to add the key (tick the “Allow write access” checkbox).

    Your release should start getting installed on your cluster soon. You can check the state of your Helm releases by running:

    kubectl get hr
    

    If needed, you can check the FluxCD logs with

    kubectl -n flux logs deployment/flux -f
    

    If you want to manually trigger FluxCD to re-read your git repository, use the command:

    fluxctl sync --k8s-fwd-ns flux
    
    A compass
    Illustration by AbsolutVision

    Adding your charts

    Of course, you might not be able to do exactly what you want using the publicly available charts. FluxCD allows you to add your own charts directly through Git.

    First, create a charts directory. Inside this directory, run the following command to create a chart named mychart:

    helm create mychart
    

    This will generate a sample chart defining everything that’s required to launch an Nginx server in production. To run this chart, you will have to define a release for it in your releases folder:

    ---
    apiVersion: helm.fluxcd.io/v1
    kind: HelmRelease
    metadata:
      name: myrelease
      namespace: default
      annotations:
        fluxcd.io/automated: "false"
        fluxcd.io/tag.chart-image: glob:3.1.1-debian-9-*
    spec:
      releaseName: myrelease
      chart:
        git: ssh://git@github.com/[USER]/[REPO]
        ref: [BRANCH]
        path: charts/mychart
      values:
        ingress:
          enabled: true
          hosts:
            - host: YOUR.OTHER.DOMAIN
              paths: ["/"]
    

    As in the previous example, don’t forget to git commit and git push. FluxCD will automatically create the new release, and you will see a magnificent Nginx welcome page appear at the specified domain.

    To display something other than the Nginx welcome page, we will give it a configuration file. For this purpose, you can define a ConfigMap in charts/mychart/templates/configmap.yaml. The following config will redirect every request to a URL of our choice:

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: nginx-conf
    data:
      nginx.conf: |
        events {
          worker_connections  1024;
        }
        http {
          server {
              listen 80 default_server;
              return 301 {{ .Values.redirect }};
          }
        }
    

    You will then need to declare the volume by adding this value to spec.template.spec in charts/mychart/templates/deployment.yaml:

          volumes:
          - name: nginx-conf
            configMap:
              name: nginx-conf
    

    To mount the configuration file into the Nginx container, you need to add the following lines to spec.template.spec.containers[0] in the same file:

              volumeMounts:
              - mountPath: /etc/nginx/
                readOnly: true
                name: nginx-conf
    

    Finally, bump the version in charts/mychart/Chart.yaml, and set the redirect URL in releases/myrelease.yaml by editing the values section like this:

      values:
        ingress:
          enabled: true
          hosts:
            - host: YOUR.OTHER.DOMAIN
              paths: ["/"]
        redirect: "https://www.youtube.com/watch?v=cFVF26XPcAU"
    

    Dive deeper

    As this article only scratches the surface of what Helm and FluxCD can do, I recommend also checking out the official FluxCD docs and their example repository, as well as the Helm Chart Template Developer’s Guide.

    The source code for the examples shown in this article is available on GitHub.

  • Posted on

    Self-Hosted Kubernetes with k3s

    Kubernetes has been the center of attention of the DevOps community for a few years now. Yet many people find it too daunting to try out for personal use. Running a single-node Kubernetes cluster for self-hosting is, however, not that complicated. In this article, I will show you how to set one up using k3s, a lightweight Kubernetes distribution.

    Preparing the server

    Before launching anything Kube-related, you might want to make sure access to your server is secured. Follow this guide for instructions on how to disable root login and enable public-key authentication if that’s not already the case.

    I have encountered issues running k3s with the newer iptables backend (nftables), which comes enabled by default on many distributions (like Debian 10). You can check whether you’re using nftables with sudo iptables --version. If so, switch back to the legacy iptables by executing:

    sudo update-alternatives --set iptables /usr/sbin/iptables-legacy 
    sudo reboot
    

    It’s also a good idea to set up basic firewall rules to filter incoming traffic. I apply these rules to the eth0 interface, which handles my server’s internet connection (find yours using ip a):

    sudo iptables -A INPUT -i eth0 -p tcp --dport 22 -j ACCEPT # Allow SSH
    sudo iptables -A INPUT -i eth0 -p tcp --dport 6443 -j ACCEPT # Allow kube API
    sudo iptables -A INPUT -i eth0 -p tcp --dport 10250 -j ACCEPT # Allow kube metrics
    sudo iptables -A INPUT -i eth0 -p icmp --icmp-type 8 -j ACCEPT # Allow ping
    sudo iptables -A INPUT -i eth0 -m state --state ESTABLISHED,RELATED -j ACCEPT # Allow responses to our requests
    sudo iptables -A INPUT -i eth0 -j DROP # Drop all other incoming packets
    

    Don’t worry about opening ports specific to the applications you want to run in your cluster, since k3s will handle that directly. To persist your iptables rules across reboots, I use the iptables-persistent package:

    sudo apt install iptables-persistent
    sudo iptables-save | sudo tee -a /etc/iptables/rules.v4
    
    Too many servers
    Illustration by Taylor Vick

    Installing k3s

    The shell script located at https://get.k3s.io is meant to install k3s automatically. You can directly execute it by running:

    curl -sfL https://get.k3s.io | sh -
    

    You can check that the service is running with:

    sudo service k3s status
    

    You can already run basic kubectl commands from your server’s shell. For example, we can check that everything is up and running with the cluster-info command.

    sudo k3s kubectl cluster-info
    #Kubernetes master is running at https://127.0.0.1:6443
    #CoreDNS is running at https://127.0.0.1:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
    #Metrics-server is running at https://127.0.0.1:6443/api/v1/namespaces/kube-system/services/https:metrics-server:/proxy
    #To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
    

    Use kubectl and helm

    Now you will probably want to use kubectl to control your cluster without being logged in to your server. You will find instructions on how to install kubectl on your local machine here. Then copy the configuration used by k3s, located at /etc/rancher/k3s/k3s.yaml on the server, to ~/.kube/config on your machine. Alternatively, you can pass the config to kubectl with the --kubeconfig option. Either way, you will need to change the server IP in this configuration from 127.0.0.1 to your server’s external IP address.
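
    If you prefer scripting that last edit, here is a tiny sketch (a hypothetical helper of my own, not part of k3s or kubectl) that rewrites the loopback address in the copied kubeconfig text:

    ```python
    def patch_kubeconfig(config_text, external_ip):
        # Point kubectl at the server's external address instead of loopback
        return config_text.replace("https://127.0.0.1:6443",
                                   f"https://{external_ip}:6443")

    # 203.0.113.7 is a placeholder address; use your server's external IP
    sample = "server: https://127.0.0.1:6443"
    assert patch_kubeconfig(sample, "203.0.113.7") == "server: https://203.0.113.7:6443"
    ```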

    Now run the kubectl cluster-info command locally and you should see results similar to what you had on your server. A few useful kubectl commands to keep in mind are:

    • kubectl get pods -> list the running pods
    • kubectl top nodes -> show the used resources on your nodes
    • kubectl logs [pod_id] -> show the stdout of a pod

    To make installing software on your cluster easier, helm will act as a package manager. Install helm on your local machine by following these instructions, or just run this if you’re on Linux and trust their install script:

    curl https://raw.githubusercontent.com/helm/helm/master/scripts/get-helm-3 | bash
    

    Helm’s packages are called charts: they are sets of templates defining how to run specific software on Kubernetes. You can now start looking at the available charts on the Helm Hub or with the built-in command helm search hub [name].

    For example, if you want to install The Lounge (a very good self-hosted IRC client/bouncer), you will first need to add the repo that contains its chart:

    helm repo add halkeye https://halkeye.github.io/helm-charts/
    

    Running helm install will then be enough to set up The Lounge. Since you might want to access it through a specific domain, we will also ask helm to set up an ingress for it by passing additional options to the chart.

    This gives us the full command:

    helm install thelounge halkeye/thelounge --version 4.0.6 --set ingress.enabled=true --set ingress.hosts.0="my.domain.com"
    

    Your cluster should now be ready for any software you want to run. In a future article, I will explain how you can make better use of helm to keep track of the applications you host.

  • Posted on

    Low Tech Crypto : ThreeBallot

    Previously on this blog, I have mentioned that cryptographers have designed algorithms allowing us to conceal messages without the use of a computer, like Solitaire. Although message privacy is the first subject that comes to mind when talking about cryptography, it has many other applications, one of them being the design of secure voting systems. In this post, I will talk about ThreeBallot, a voting system designed by Ronald Rivest (the R in RSA) that aims at creating a secure voting protocol based (almost) entirely on paper ballots.

    Someone voting during French elections
    Illustration by Arnaud Jaegers

    End-to-end auditable voting systems

    Before we start taking a look at ThreeBallot, we need to summarize the objectives of cryptographic voting systems. In general, cryptographic voting systems aim at being end-to-end auditable, which means two things:

    • Anyone can proceed to a recount
    • Every voter has a way to know if their vote was counted

    This second property is usually achieved by giving the voter a receipt of their vote. However, this receipt should not contain enough information to allow an attacker to breach the privacy of the vote, as each vote must remain secret to avoid vote-buying or coercion.

    Introducing the multi-ballot

    A sample ThreeBallot multi-ballot

    The ThreeBallot system aims at solving this problem with the use of a “multi-ballot”. The multi-ballot is a voting ballot that can be split into 3 parts (one per column), each containing the list of candidates (one per row) with optical-scan checkboxes as well as a unique identifier. To vote, simply check 2 boxes on the row of the candidate you want to vote for, and 1 box on the row of each candidate you want to vote against. A multi-ballot is considered spoilt if all three boxes on a candidate’s row are checked, or if none of them are.

    A sample filled ThreeBallot multi-ballot

    The validity of the ballot will need to be checked (by machine) before it is split into 3 separate ballots. The voter will then choose which of the three parts to use as a receipt and receive a printed copy of it, before casting all 3 ballots.
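
    The machine’s validity check is simple to express in code. Here is a minimal sketch, using my own representation (one row of three 0/1 checkboxes per candidate) and assuming a single-choice race, so exactly one row carries the two-check vote:

    ```python
    def is_valid(multi_ballot):
        # Each candidate's row must have 1 or 2 checks (3 or 0 spoils the
        # ballot), and exactly one row -- the chosen candidate -- has 2 checks.
        counts = [sum(row) for row in multi_ballot]
        return all(c in (1, 2) for c in counts) and counts.count(2) == 1

    # A vote for the second of three candidates:
    assert is_valid([[1, 0, 0], [0, 1, 1], [1, 0, 0]])
    # Spoilt: the first candidate's row has all three boxes checked
    assert not is_valid([[1, 1, 1], [0, 1, 0], [0, 0, 1]])
    ```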

    A receipt
    A single ballot, used as receipt

    When the election is complete, every ballot is displayed publicly. Anyone can proceed to a count; the results are inflated by the number of voters (at least one box per row has to be checked), but the differences between candidates remain the same. Each voter can check that their ballot appears with the correct vote and ballot ID, or contest the vote using their receipt. Voters cannot use their receipt as proof of having voted for a candidate, as you need at least 2 of the 3 parts to tell that, thus maintaining vote secrecy.
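
    A toy tally makes the inflation concrete (a sketch with my own data layout: each published single ballot is a list of 0/1 marks, one per candidate). Every voter contributes one check to every candidate’s row plus one extra on their chosen candidate, so subtracting the number of voters from each raw count recovers the true totals:

    ```python
    def tally(single_ballots):
        # Sum the marks per candidate across all published single ballots
        n_candidates = len(single_ballots[0])
        return [sum(b[i] for b in single_ballots) for i in range(n_candidates)]

    # Three voters, two candidates: A and B vote for candidate 0, C for candidate 1.
    # Each voter's multi-ballot was split into three single ballots:
    ballots = [
        [1, 0], [1, 1], [0, 0],   # voter A -> candidate 0
        [1, 1], [0, 0], [1, 0],   # voter B -> candidate 0
        [0, 1], [1, 1], [0, 0],   # voter C -> candidate 1
    ]
    raw = tally(ballots)
    assert raw == [5, 4]                   # inflated by the 3 voters
    assert [r - 3 for r in raw] == [2, 1]  # true counts, same margin
    ```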

    Limitations

    Despite looking very good on paper, this voting system sadly has many practical vulnerabilities. First of all, despite being paper-based, it still requires a machine to print ballots with unique identifiers, and another one to check the validity of the ballots before casting them. Both of these machines are weak points of the system, as either one failing would stop the election from taking place correctly. A single malfunction of the checker machine would also be enough to void the entire election, because the system doesn’t provide any way of spoiling a vote after it’s cast. Privacy breaches could also happen if these machines were compromised, as an attacker could record which identifiers belong to the same multi-ballot and use that to connect receipts to a full ballot.

    In a follow-up paper, Charlie Strauss points out that voter coercion can still happen with methods as simple as asking the voter to take a picture of their ballot with a smartphone. According to Strauss, it would also be trivial to double or triple vote by modifying the ballot after having it validated and before casting it. Voters might also give away their receipts or simply throw them in the nearest trash can, making their votes non-auditable and giving an attacker the opportunity to change them.

    In a second paper, Strauss also points out the cryptographic weaknesses of ThreeBallot. Since one part of the multi-ballot restricts the choices on the two other parts (a candidate’s row can’t be entirely blank or entirely checked), the receipt leaks information about the actual vote. Combining this information with voter bias (i.e. party preference) and the publicly displayed ballots, it becomes statistically possible to link a receipt with its two other ballot parts. This is especially concerning in a situation where an attacker could coerce the voter into not only voting for a specific candidate but also checking a specific pattern on their ballot.

    Legacy

    Many improvements to the ThreeBallot voting system have been suggested, but as with many security protocols, improving security often comes with usability issues. This led to the slow abandonment of this protocol. However, the insights gained through ThreeBallot later allowed the creation of new secure voting systems, such as Scantegrity, which was used in actual elections by the city of Takoma Park in 2009 and 2011.

  • Posted on

    FOSDEM 2020 recap

    This year was the 20th edition of FOSDEM, and the third time I managed to attend it. FOSDEM is a yearly Free & Open Source Software meeting taking place in Brussels, Belgium and organized by the Free University of Brussels (ULB). This edition gathered more than 8000 FOSS enthusiasts from all around the world for 837 talks, making it an exceptional event for anyone interested in learning more about open-source projects and communities.

    The city of Brussels
    Illustration by Yeo Khee

    Highlights

    One of the very first talks I managed to see this year was about SpecFuzz, a tool made by Oleksii Oleksenko and his colleagues to help test against Spectre-type vulnerabilities. The talk does an amazing job of explaining what causes these new speculative-execution vulnerabilities, how we can detect them, and how we can fix them. Since there’s no doubt these vulnerabilities will remain a major hardware and software security issue, it’s an essential talk for anyone wanting to understand the topic. If you want to know more about SpecFuzz, Oleksii Oleksenko’s paper is available on arXiv and the source code is on GitHub.

    Then I managed to see two talks about Falco, a new Kubernetes threat-detection engine. The first one, by Lorenzo Fontana, explained how they found a reliable way of monitoring Linux system calls. The second one, by Kris Nova, showed practical use cases of k8s threat detection against real attacks.

    At the end of the first day of talks, Daniel Stenberg (curl’s developer) gave one of the most important lectures of this FOSDEM, titled HTTP/3 for everyone. This talk is, in my opinion, largely enough to answer every question developers might have about HTTP/3 and QUIC, except maybe “when will it be released?”.

    The main ULB amphitheater

    On Sunday, Ecaterina Moraru came to talk about UI and UX. Her talk contains great and easily applicable tips for developers to build more accessible applications. I feel this is an extremely important talk, since too many open-source projects focus on being technically perfect while forgetting that they need to be usable first.

    Last but not least, in the Go room, Derek Parker presented the state of Delve. For those who don’t know it already, Delve is a debugger for Go, usable from the command line or through integrations in Vim, Emacs, IntelliJ… This talk explains Delve’s support for “deterministic debugging” through Mozilla rr (the Record and Replay Framework), a feature allowing you to capture and replay a bug until you fix it.

    Recap

    I sadly can’t make an exhaustive list of all the great talks I might have missed at FOSDEM this year. Hopefully, I will be able to see talks of the same quality next year. As usual, I thank all the FOSDEM organizers and the ULB for their amazing work and for allowing this event to happen for free.

  • Posted on

    Low Tech Crypto : Solitaire

    In a previous post, I talked about the One-Time Pad as an example of cryptography that’s achievable without a computer. It’s a very simple, yet perfectly secure, cryptographic algorithm that comes with a lot of practical limitations, the biggest one being that it requires a truly random key as long as the text you want to encrypt (the plaintext). The good news is that cryptographers have been hard at work creating algorithms that are more usable while still providing privacy.

    Solitaire, an algorithm designed by Bruce Schneier in 1999, is definitely the most famous low-tech cryptographic algorithm, since it was featured in Neal Stephenson’s book Cryptonomicon under the name “Pontifex”. As its name suggests, it uses a deck of cards to encrypt and decrypt messages.

    A regular card deck
    Illustration by Aditya Chinchure

    Encrypting messages

    Solitaire is a stream cipher, which means it works just like the one-time pad, except that instead of using a truly random key as long as the plaintext, it generates a pseudo-random keystream from a smaller key. In this case, our base key is a shuffled deck of 54 cards with two distinguishable jokers (we will call them A and B).

    Let’s say we have the thoroughly shuffled deck of cards below and want to encrypt the message HAPPY NEW YEAR:

    49 50 51 3 4 5 6 7 1
    10 11 12 52 15 16 17 18 19
    20 21 2 9 26 27 28 29 30
    31 32 33 34 35 36 37 38 39
    40 41 42 43 44 45 23 A 14
    22 8 24 25 B 46 47 48 13

    Step 0: First we need to format the message into blocks of 5 characters (padding it with X if necessary) to avoid leaking pieces of information. This gives us: HAPPY NEWYE ARXXX. We will then use a number representation of this plaintext.

    H A P P Y N E W Y E A R X X X
    8 1 16 16 25 14 5 23 25 5 1 18 24 24 24
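
    This formatting step is mechanical enough to sketch in a few lines of Python (the helper names are my own):

    ```python
    def to_blocks(message, size=5, pad="X"):
        # Keep only letters, uppercase them, and pad to a multiple of `size`
        letters = [c for c in message.upper() if c.isalpha()]
        while len(letters) % size:
            letters.append(pad)
        return ["".join(letters[i:i + size]) for i in range(0, len(letters), size)]

    def to_numbers(text):
        # A=1, B=2, ..., Z=26
        return [ord(c) - ord("A") + 1 for c in text]

    assert to_blocks("HAPPY NEW YEAR") == ["HAPPY", "NEWYE", "ARXXX"]
    assert to_numbers("HAPPY") == [8, 1, 16, 16, 25]
    ```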

    Step 1: We can then start generating the keystream by taking joker A and swapping it with the card beneath it. If it’s the last card, put it after the first card.

    49 50 51 3 4 5 6 7 1
    10 11 12 52 15 16 17 18 19
    20 21 2 9 26 27 28 29 30
    31 32 33 34 35 36 37 38 39
    40 41 42 43 44 45 23 14 A
    22 8 24 25 B 46 47 48 13

    Step 2: Then find joker B and move it down two cards in the same way.

    49 50 51 3 4 5 6 7 1
    10 11 12 52 15 16 17 18 19
    20 21 2 9 26 27 28 29 30
    31 32 33 34 35 36 37 38 39
    40 41 42 43 44 45 23 14 A
    22 8 24 25 46 47 B 48 13

    Step 3: Swap the cards above the first joker with the cards below the second joker.

    48 13 A 22 8 24 25 46 47
    B 49 50 51 3 4 5 6 7
    1 10 11 12 52 15 16 17 18
    19 20 21 2 9 26 27 28 29
    30 31 32 33 34 35 36 37 38
    39 40 41 42 43 44 45 23 14

    Step 4: Take the value of the bottom card as a number. The simplest way to do that is to use the position of the card in the bridge ordering of suits (clubs < diamonds < hearts < spades). For example, a 10 of hearts is 36 and a 2 of clubs is just 2. Jokers are both 53.
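
    As a sanity check, this card-to-number mapping can be written out (a trivial sketch; I take ranks to run from 1 for the ace through 13 for the king):

    ```python
    # Bridge ordering of suits: clubs < diamonds < hearts < spades
    SUIT_OFFSET = {"clubs": 0, "diamonds": 13, "hearts": 26, "spades": 39}

    def card_value(rank, suit):
        # rank: 1 (ace) through 13 (king); returns a number between 1 and 52
        return SUIT_OFFSET[suit] + rank

    assert card_value(10, "hearts") == 36  # the 10 of hearts from the example
    assert card_value(2, "clubs") == 2     # the 2 of clubs from the example
    ```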

    Use this value as the index at which to cut the deck for the next step. Now swap both parts of the cut, without moving the bottom card:

    4 5 6 7 1 10 11 12 52
    15 16 17 18 19 20 21 2 9
    26 27 28 29 30 31 32 33 34
    35 36 37 38 39 40 41 42 43
    44 45 23 48 13 A 22 8 24
    25 46 47 B 49 50 51 3 14

    Step 5: To find the output value, use the value of the top card as the index of the output card. If the output card is not a joker, write down its value. Go back to step 1 with the deck in its current state, and repeat until you have as many numbers in your keystream as there are letters in your plaintext.

    In our example, we find the following keystream:

    1 34 29 35 39 46 11 12 29 50 24 26 45 6 6

    Step 6: Add your keystream to your plaintext (as numbers) modulo 26 to obtain the ciphertext:

    8 1 16 16 25 14 5 23 25 5 1 18 24 24 24
    1 34 29 35 39 46 11 12 29 50 24 26 45 6 6
    I I S Y L H P I B C Y R Q D A

    You can now safely send IISYL HPIBC YRQDA to anyone who has a deck shuffled in the same order as yours. To decrypt it, they will need to generate the same keystream and subtract it from the ciphertext you just produced.
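
    The five keystream steps and the final modular addition can be condensed into a short Python sketch (my own rendering of the procedure, not a reference implementation). Schneier’s published test vector for an unkeyed deck (plaintext AAAAAAAAAA encrypting to EXKYI ZSGEH) serves as a check:

    ```python
    # Cards are 1..52 in bridge order; joker A is 53, joker B is 54
    # (both count as 53 when read as a number).
    JOKER_A, JOKER_B = 53, 54

    def card_number(card):
        return 53 if card >= JOKER_A else card

    def move_down(deck, joker, steps):
        # Steps 1-2: move a joker down; from the bottom it wraps below the top card
        for _ in range(steps):
            i = deck.index(joker)
            if i == len(deck) - 1:
                deck.insert(1, deck.pop(i))
            else:
                deck[i], deck[i + 1] = deck[i + 1], deck[i]

    def keystream(deck, length):
        deck = list(deck)  # don't mutate the caller's deck
        stream = []
        while len(stream) < length:
            move_down(deck, JOKER_A, 1)                     # step 1
            move_down(deck, JOKER_B, 2)                     # step 2
            i, j = sorted((deck.index(JOKER_A), deck.index(JOKER_B)))
            deck = deck[j + 1:] + deck[i:j + 1] + deck[:i]  # step 3: triple cut
            v = card_number(deck[-1])                       # step 4: count cut
            deck = deck[v:-1] + deck[:v] + deck[-1:]
            out = deck[card_number(deck[0])]                # step 5: output card
            if out < JOKER_A:                               # jokers are skipped
                stream.append(out)
        return stream

    def crypt(text, deck, sign):
        # sign=+1 encrypts, sign=-1 decrypts (add/subtract the keystream mod 26)
        ks = keystream(deck, len(text))
        return "".join(chr((ord(c) - 65 + sign * k) % 26 + 65)
                       for c, k in zip(text, ks))

    deck = list(range(1, 55))  # unkeyed deck, in factory order
    ciphertext = crypt("AAAAAAAAAA", deck, +1)
    assert ciphertext == "EXKYIZSGEH"                 # Schneier's test vector
    assert crypt(ciphertext, deck, -1) == "AAAAAAAAAA"
    ```

    The round trip works with any deck order, since encryption and decryption regenerate the same keystream from the same starting deck.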

    A regular card deck
    Illustration by Aaron Burden

    Limitations

    Solitaire is a great low-tech cipher, since it’s practically secure and uses items that are both easy to find and unsuspicious. However, there are a few limitations to keep in mind while using this cipher:

    1. You should never use the same key twice, as it would allow an attacker to gather information about your messages without even knowing the key.
    2. You should only encrypt small messages with it. A recent analysis of Solitaire found that it leaks information at a rate of 0.0005 bits per character; while this is reasonably safe for encoding your 140-character tweet, it isn’t safe to use for your copy of War and Peace.
    3. You still need to be careful about any extra information you could give to an attacker, especially notes you take of the keystream, or decks that you used / intend to use, as those can be used to crack your messages.

    If you want a more complete overview of Solitaire, I recommend reading Bruce Schneier’s original article on it. If code speaks to you better than English, I made a small Python implementation of Solitaire that can be useful for testing purposes; you can find it on GitHub.

subscribe via RSS