• Posted on

    DEFCON 30

    This summer I had the opportunity to attend DEFCON 30, a cybersecurity conference gathering around 27,000 hackers in the fabulous city of Las Vegas, Nevada. With more than 30 villages and 3 main conference tracks, the event managed to cover pretty much every subject, from malware analysis to online drug dealing.

    The welcome to Las Vegas sign
    Photo by Grant Cai

    Best talks

    Roger Dingledine from the Tor Project gave a very timely talk explaining how Russia is trying to block Tor. It covers the software produced by the Tor Project, namely Tor itself, Tor Browser, and pluggable transports (like meek). Those last ones are the most important here, since they can help bypass attempts made by dictatorships to block Tor. The talk then dives deeper into Russia’s censorship of Tor and explains its numerous flaws and shortcomings.

    Another very interesting talk, by Nikita Kurtin, was about bypassing Android permissions. It shows perfectly how thinking outside the box can lead you to completely break complex permission systems. In this case, he used a mix of UX and system tricks to get users to agree to anything, all the time.

    And lastly, Minh Duong gave the most fun talk of the conference by explaining how he Rickrolled his entire school district. He recounts how he managed to take over his school network, using known vulnerabilities and software misconfigurations, and progressively escalated his access until he was able to play “Never Gonna Give You Up” everywhere. Definitely a good example of realistic hacking, far away from academic papers and armchair exploit development.

    The villages

    Each village provided its own set of talks and activities. I didn’t stick around too long in the Cloud and AppSec villages, as I wanted to use the conference to discover subjects I am less used to. The physical security, tamper-evident, and lockpicking villages were particularly interesting to me, as I had not really explored non-computer security topics before. And honestly, they almost made me think picking locks was going to be easy!

    The car hacking and voting machine villages also gave me a glimpse into topics that will probably become quite important to the industry in the near future. The biohacking village was interesting as well, as it provided a few medical devices to try and break, although I am not sure anyone managed to actually root anything during the conference.

    The other stuff

    At night, the talks and villages left room for parties. Not only did this make for a good socializing opportunity, but we also got to see an absolutely awesome show by Taiko Project.

    I didn’t really take the time to solve the badge challenges, but I still found it very cool that the badge contains an actual playable keyboard.

    The welcome to Las Vegas sign

    And I almost forgot: Vegas was strange, but also actually a nice city. I don’t think I would mind facing the desert heat once more if I have the occasion.

  • Posted on

    Managing your Kubernetes cluster using Helm and Terraform

    In a previous post, I explained how to manage a Kubernetes cluster with FluxCD. This showed a way to implement a GitOps workflow, which means using a Git repository as the source of truth for the state of your cluster.

    Flux introduces multiple new objects into your Kubernetes clusters and requires running custom software, which makes it harder to adopt for smaller teams without dedicated platform engineers. On the other hand, most teams are already using Terraform. In this article, I will show how to use Terraform to manage your Kubernetes clusters. This approach could be considered GitOps-lite: while changes are tracked in a Git repository, they are neither enforced nor automatically pulled from it.

    I will use resources from Google Cloud Platform in the following examples, but everything in this article should be doable on any major cloud platform.

    A nice earthly forest
    Photo by Geran de Klerk

    Setting up a cluster with Terraform

    First, make sure you have the Terraform CLI installed (or download it here). Then, create a new Git repository:

    mkdir cluster
    cd cluster
    git init
    git remote add origin <your-github-repo>
    

    We will then define a basic Google Kubernetes Engine cluster (with two nodes) and tell Terraform to store its state in a remote Google Cloud Storage bucket, which you will need to create manually. The following HCL goes in the main.tf file:

    terraform {
      required_providers {
        google = {
          source  = "hashicorp/google"
          version = "3.5.0"
        }
      }
      backend "gcs" {
        bucket = "your-unique-tf-state-bucket"
        prefix = "terraform/state"
      }
    }
    
    variable "project_id" {
      description = "project id"
    }
    
    variable "region" {
      description = "region"
    }
    
    provider "google" {
      project = var.project_id
      region  = var.region
    }
    
    resource "google_container_cluster" "primary" {
      name               = "${var.project_id}-gke"
      location           = var.region
      initial_node_count = 2
    }
    

    This defines the variables “region” and “project_id”, which we pass to the google provider, as they contain the data relevant to our GCP project. The values of these variables can either be set interactively when running Terraform commands or kept in the terraform.tfvars file.
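    For reference, a terraform.tfvars file could look like this (the values below are placeholders, use your own project data):

    ```hcl
    # terraform.tfvars -- placeholder values, replace with your own
    project_id = "my-gcp-project"
    region     = "europe-west1"
    ```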

    You can then run terraform init to initialize your project and state. After that, run terraform plan, which will display the changes Terraform is about to make, and run terraform apply to apply those changes and create your cluster.

    Since GKE resources have a state that is a lot more complex than the HCL file above, I suggest using the output of terraform state show google_container_cluster.primary to refactor the HCL into a more complete description of the state. This prevents unwanted changes from appearing in future plans.
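    As an illustration, the refactored resource might pin additional attributes such as the node configuration. The attributes and values below are hypothetical; copy yours from the terraform state show output:

    ```hcl
    resource "google_container_cluster" "primary" {
      name               = "${var.project_id}-gke"
      location           = var.region
      initial_node_count = 2

      # Example attributes copied back from `terraform state show`
      node_config {
        machine_type = "e2-medium"
        disk_size_gb = 100
      }
    }
    ```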

    Authenticating to your cluster

    You can now authenticate to your cluster using the Google Cloud CLI. First, install it by following the instructions here. Then log into Google Cloud using gcloud auth login, which should open your browser with a login prompt. After that, use gcloud config set project to indicate which project you are working on. Finally, run gcloud container clusters get-credentials [YOUR_PROJECT_ID]-gke --zone=[YOUR_REGION].
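    Summarized as a single sequence (the bracketed placeholders are yours to fill in):

    ```shell
    # Log into Google Cloud (opens a browser window)
    gcloud auth login

    # Point the CLI at the GCP project hosting the cluster
    gcloud config set project [YOUR_PROJECT_ID]

    # Fetch kubectl credentials for the cluster created earlier
    gcloud container clusters get-credentials [YOUR_PROJECT_ID]-gke --zone=[YOUR_REGION]
    ```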

    You should now see your cluster appear when you run kubectl config get-contexts and two nodes should be visible by running kubectl get nodes.

    A helm
    Photo by Frank Eiffert

    Installing Helm charts

    We will now use the helm Terraform provider to install podinfo and set the value replicaCount to 2. To do this, create a new HCL file with the following content:

    provider "helm" {
      kubernetes {
        config_path = "~/.kube/config"
      }
    }
    
    resource "helm_release" "podinfo" {
      name       = "podinfo"
    
      repository = "https://stefanprodan.github.io/podinfo"
      chart      = "podinfo"
    
      set {
        name = "replicaCount"
        value = 2
      }
    }
    

    You’ll need to run terraform init -upgrade to install this new provider. Then you can run terraform plan and terraform apply, just like in the previous section.

    This will act just like helm upgrade --install, and the podinfo release should now appear when running helm list. Running kubectl get pods should show you two podinfo pods. You can access the service by running kubectl port-forward service/podinfo 9898:9898 and then curl localhost:9898.

    Going further

    Directly applying changes from your local environment doesn’t scale well when multiple people are committing changes to the infrastructure. It also raises safety issues, as changes could be applied locally without having gone through a PR review. One way to solve that problem is to use GitHub Actions to plan the changes when a PR is opened and apply them when the PR is merged. The self-hostable tool Atlantis can also help here, as it acts as a GitHub bot that plans and applies in response to PR comments.
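    As a sketch, such a GitHub Actions workflow could look like the following. This is illustrative only: the secret name and the triggers are assumptions to adapt to your repository:

    ```yaml
    # .github/workflows/terraform.yaml -- illustrative sketch
    name: terraform
    on:
      pull_request:
      push:
        branches: [main]
    jobs:
      terraform:
        runs-on: ubuntu-latest
        env:
          GOOGLE_CREDENTIALS: ${{ secrets.GOOGLE_CREDENTIALS }}
        steps:
          - uses: actions/checkout@v3
          - uses: hashicorp/setup-terraform@v2
          - run: terraform init
          # Plan on pull requests, apply on merges to main
          - run: terraform plan
            if: github.event_name == 'pull_request'
          - run: terraform apply -auto-approve
            if: github.event_name == 'push'
    ```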

  • Posted on

    A sensible NeoVim configuration

    I have been using NeoVim as my main code editor since 2017. When discussing it with other engineers, a common complaint I hear about (Neo)Vim is that its configuration is overly complicated and confusing. In this post, I will try to address this point by showing simple steps to turn NeoVim into a capable development environment. This should give you essential code-editing features (i.e. auto-completion, linting, search…) while keeping a minimalist and fast setup.

    Installing NeoVim

    First of all, install NeoVim by following the instructions on GitHub. Open NeoVim by simply typing nvim in a terminal.

    If you want to learn the basics of using vim, type :Tutor. This will take you to the built-in tutorial.

    Type the :checkhealth command in NeoVim’s normal mode; this will tell you if you need to set up anything special.

    The most important dependencies are Python 3.6+ with the neovim pip package and Node with the neovim npm package. Aside from that, you might need:

    • python 2.7 with the neovim package
    • ruby with the neovim gem
    • perl with the Neovim::Ext module

    Once everything is green, let’s go over some basic configuration.

    An editing tool as technology advanced as vim
    Photo by Kenny Eliason

    Basic configuration

    First, you should create your configuration file at ~/.config/nvim/init.vim.

    To be able to exit NeoVim’s terminal emulator (:term) by simply pressing Escape instead of the Ctrl+\ Ctrl+n combo, add this line to your config file:

    tmap <Esc> <C-\><C-n>
    

    map is the basic command for mapping keyboard shortcuts. tmap indicates that you want the mapping to apply in terminal mode only; imap and nmap are for insert and normal mode respectively. As an alternative to map, you will sometimes need noremap, which maps non-recursively (this avoids expanding mappings you have already defined).
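    To see why non-recursive mapping matters, consider this contrived example (throwaway mappings, not ones to keep):

    ```vim
    " With map, the right-hand side is itself expanded through mappings:
    map j gg          " pressing j now jumps to the top of the file
    map Q j           " Q expands to j, which expands to gg (recursive!)

    " With noremap, the right-hand side always means the built-in keys:
    noremap W j       " W moves down one line, ignoring the j mapping above
    ```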

    You’ll also want to be able to switch between windows without having to type a complete command:

    nmap <silent> <A-k> :wincmd k<CR>
    nmap <silent> <A-j> :wincmd j<CR>
    nmap <silent> <A-h> :wincmd h<CR>
    nmap <silent> <A-l> :wincmd l<CR>
    

    These lines will map window switching to Alt+hjkl. (Note: on macOS, the only way I managed to configure this was to map the literal characters output by my terminal when I press Alt+hjkl.)

    Then, you can make a few tweaks to the appearance. I recommend displaying line numbers, disabling line wrapping, and allowing horizontal scrolling.

    set number
    set nowrap
    set sidescroll=1
    

    You should then add some default tab configurations.

    set tabstop=2
    set softtabstop=2
    set shiftwidth=2
    set expandtab
    set autoindent
    set fileformat=unix
    

    You will also want to be able to override those configurations for each project with a local .nvimrc:

    set exrc
    " recommended alongside exrc: restrict what project-local configs can do
    set secure
    
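    For example, a project-local .nvimrc at the root of a repository that uses 4-space indentation could contain (hypothetical values):

    ```vim
    " .nvimrc: override the global 2-space defaults for this project
    set tabstop=4
    set softtabstop=4
    set shiftwidth=4
    ```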

    And lastly, this line will allow scripts to run for specific file types:

    filetype plugin on
    

    Plugins

    Since NeoVim doesn’t have every feature developers usually need out of the box, you will need to be able to install plugins. For that purpose, we will use vim-plug. Install it by running:

    sh -c 'curl -fLo "${XDG_DATA_HOME:-$HOME/.local/share}"/nvim/site/autoload/plug.vim --create-dirs \
           https://raw.githubusercontent.com/junegunn/vim-plug/master/plug.vim'
    

    Next, edit your configuration file ~/.config/nvim/init.vim again and add the following lines to declare the plugins you will need:

    call plug#begin('~/.vim/plugged')
    Plug 'ctrlpvim/ctrlp.vim'
    Plug 'tpope/vim-fugitive'
    Plug 'vim-airline/vim-airline'
    Plug 'tpope/vim-sleuth'
    Plug 'neoclide/coc.nvim', {'branch':'release'}
    call plug#end()
    

    Each of the lines starting with Plug indicates the GitHub repository of a plugin. Save your config file and type :PlugInstall in NeoVim’s normal mode to install them.

    The plugins this will have installed are the following:

    • ctrlp.vim: a fast file search plugin
    • vim-fugitive: a wrapper around git
    • vim-airline: a status line for your editor
    • vim-sleuth: automatic indentation adjustment
    • coc.nvim: essential code-editing features like auto-completion using the Language Server Protocol (like VS Code)

    Let’s go over each of them.

    Ctrlp.vim

    Ctrlp.vim is a very efficient file search plugin.

    To use it, simply press Ctrl+P; this will open a file list that you can navigate with the arrow keys. Typing any characters will make it search for matching files. You can use Ctrl+F and Ctrl+B to cycle between the different search modes (files, buffers, or most recently used). Ctrl+R will allow you to type in regexes, while Ctrl+D will search only in the filenames instead of the full path.

    The search function

    Use the following line to stop this plugin from indexing files outside of your source code:

    let g:ctrlp_custom_ignore = '\v[\/](node_modules|target|dist|venv)|(\.(swp|ico|git|svn))$'
    

    If you want to integrate more file handling into vim, you can also try NERDTree, which will provide you with a complete file system explorer.

    Vim-fugitive

    Vim-fugitive is a wrapper around git. It doesn’t require any special configuration to work, as long as you have a git CLI configured on your machine.

    It will allow you to use any git command in vim using :G. You can try it with :G status to display the result of git status. One neat trick is that you can use :G grep to send the results of a git grep directly into vim’s quickfix list.

    Vim-airline

    Vim-airline will display the status of your current file in the last line of the editor. It’s basically plug-and-play and will even automatically detect other plugins you use (like ctrlp or fugitive) and integrate with them.

    The status line is divided into 6 sections (A, B, C, X, Y, Z) and can be customized with the statusline syntax:

    let g:airline_section_b = 'Filetype: %y'
    
    The resulting airline

    Vim-sleuth

    Vim-sleuth automatically adjusts indentation settings to match the file you’re editing. It works by simply overriding the shiftwidth and expandtab options (which you can still set manually in your config file if you want). It doesn’t require any configuration either and generally works pretty well.

    An alternative to it would be to use editorconfig-vim. This plugin will instead follow the configuration written in a .editorconfig file at the root of a project.

    Coc.nvim

    Coc.nvim is really what will give you an IDE-like experience. This plugin will give you auto-completion, linting, and formatting using the Language Server Protocol. For that part, I recommend using the configuration sample they provide on GitHub. It will give you useful shortcuts to IDE-like features, like gd to go to a definition, gr to list references, and \t for automatic renaming.

    The rename window

    To support every possible language, CoC has its own plugin system. For example, :CocInstall coc-go will install the coc-go plugin for the Go programming language and use gopls.

    Alternatively, NeoVim now has built-in support for LSP; you can read more about how to configure it in this repo. As that solution still requires installing a few plugins for auto-completion, I consider CoC to be the simplest option around at the moment.

    Theming

    Of course, everyone loves some colors! Vim and NeoVim have a ton of themes that you can explore at vimcolorschemes. Each one can be installed with vim-plug like any other plugin. Once a theme is installed, you can use the colorscheme command to enable it from your vim config file. For example, with the dracula theme:

    colorscheme dracula
    
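    Assuming the dracula/vim repository, the corresponding vim-plug line goes inside your existing plug#begin block (the as option just renames the plugin’s install directory; check the theme’s README for its exact install line):

    ```vim
    call plug#begin('~/.vim/plugged')
    " ...your other plugins...
    Plug 'dracula/vim', { 'as': 'dracula' }
    call plug#end()
    ```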

    Conclusion

    This post only scratches the surface of what can be done with NeoVim. As you use this editor more and more, you will probably want to tweak the configuration further, and will likely end up with a few more (or fewer) plugins.

  • Posted on

    Managing a Kubernetes cluster with Helm and FluxCD

    As seen in my previous article, after successfully setting up a Kubernetes Cluster you can install applications on it using the Helm CLI. This is convenient to get started but might not scale well with multiple people administering the cluster.

    To make collaboration easier and avoid making mistakes by simply mistyping Helm commands, it is recommended to adopt a GitOps workflow. This consists of using Git to version control the state of the cluster. In this article, I will explain how to use FluxCD to achieve this.

    An actual helm
    Illustration by Loik Marras

    Getting started with FluxCD

    First of all, make sure you have a running Kubernetes cluster and have installed kubectl and helm. Then you will need to create a Git repository at your favorite provider (GitHub, Bitbucket, etc.). Once this is done, clone it locally and create a releases folder inside of it. This folder will contain the files defining your Helm releases (a release is an instance of a chart running on a cluster).

    Let’s start by creating a release for my favorite IRC client, The Lounge. To do this, create a file in releases/thelounge.yaml containing the following data:

    ---
    apiVersion: helm.fluxcd.io/v1
    kind: HelmRelease
    metadata:
      name: thelounge
      namespace: default
      annotations:
        fluxcd.io/automated: "false"
        fluxcd.io/tag.chart-image: semver:~4.0
    spec:
      releaseName: thelounge
      chart:
        repository: https://halkeye.github.io/helm-charts/
        name: thelounge
        version: 4.3.0
      values:
        ingress:
          enabled: true
          hosts:
            - YOUR.OWN.DOMAIN
    

    This release uses the chart available on Artifact Hub; the software will run in the default namespace, and we ask for an ingress to be created. Don’t forget to commit and push the release!

    After creating this repository, you should install FluxCD on your cluster. To do this simply run the following commands:

    helm repo add fluxcd https://charts.fluxcd.io
    
    kubectl apply -f https://raw.githubusercontent.com/fluxcd/helm-operator/master/deploy/crds.yaml
    
    kubectl create ns flux
    
    helm upgrade -i flux fluxcd/flux \
       --set git.url=git@github.com:[USERNAME]/[REPO] \
       --set git.branch=[BRANCH] \
       --namespace flux
    
    helm upgrade -i helm-operator fluxcd/helm-operator \
      --set git.ssh.secretName=flux-git-deploy \
      --set helm.versions=v3 \
      --namespace flux
    

    You should now see 3 pods running in the flux namespace:

    flux-memcached-869757cb88-77br8 1/1 Running 0 100s
    flux-59cb4447b9-thpr9 1/1 Running 0 101s
    helm-operator-686dc669cd-g8kn7 1/1 Running 0 89s
    

    Finally, you will need to give read and write access to your git repository to FluxCD. To do this, install fluxctl and run:

    fluxctl identity --k8s-fwd-ns flux
    

    This will print an RSA key that needs to be added as a deploy key to your repository. For GitHub users, go to https://github.com/YOURUSERNAME/YOURREPO/settings/keys/new to add the key (tick the “Allow write access” checkbox).

    Your release should soon start being installed on your cluster. You can check the state of your Helm releases by running:

    kubectl get hr
    

    If needed, you can check the FluxCD logs with

    kubectl -n flux logs deployment/flux -f
    

    If you want to manually trigger FluxCD to re-read your git repository, use the command:

    fluxctl sync --k8s-fwd-ns flux
    
    A compass
    Illustration by AbsolutVision

    Adding your charts

    Of course, you might not be able to do exactly what you want using the publicly available charts. FluxCD allows you to add your own charts directly through Git.

    First, create a charts directory. Inside this directory, run the following command to create a chart named mychart:

    helm create mychart
    

    This will generate a sample chart defining everything that’s required to launch an Nginx server in production. To run this chart, you will have to define a release for it in your releases folder:

    ---
    apiVersion: helm.fluxcd.io/v1
    kind: HelmRelease
    metadata:
      name: myrelease
      namespace: default
      annotations:
        fluxcd.io/automated: "false"
        fluxcd.io/tag.chart-image: glob:3.1.1-debian-9-*
    spec:
      releaseName: myrelease
      chart:
        git: ssh://git@github.com/[USER]/[REPO]
        ref: [BRANCH]
        path: charts/mychart
      values:
        ingress:
          enabled: true
          hosts:
            - host: YOUR.OTHER.DOMAIN
              paths: ["/"]
    

    Like in the previous example, don’t forget to git commit and git push. FluxCD will automatically create the new release and you will see a magnificent Nginx welcome page appear at the specified domain.

    To display something other than the Nginx welcome page, we will give it a configuration file. For this purpose, you can define a ConfigMap in charts/mychart/templates/configmap.yaml. The following config will redirect every request to a URL of our choice:

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: nginx-conf
    data:
      nginx.conf: |
        events {
          worker_connections  1024;
        }
        http {
          server {
              listen 80 default_server;
              return 301 {{ .Values.redirect }};
          }
        }
    

    You will then need to declare the volume by adding this value to spec.template.spec in charts/mychart/templates/deployment.yaml:

          volumes:
          - name: nginx-conf
            configMap:
              name: nginx-conf
    

    To mount the configuration file into the Nginx container, you need to add the following lines to spec.template.spec.containers[0] in the same file:

              volumeMounts:
              - mountPath: /etc/nginx/
                readOnly: true
                name: nginx-conf
    

    Finally, bump the version in charts/mychart/Chart.yaml, and set the redirect URL in releases/myrelease.yaml by editing the values section like this:

      values:
        ingress:
          enabled: true
          hosts:
            - host: YOUR.OTHER.DOMAIN
              paths: ["/"]
        redirect: "https://www.youtube.com/watch?v=cFVF26XPcAU"
    

    Dive deeper

    As this article only scratches the surface of what Helm and FluxCD can do, I recommend that you also check out the official FluxCD docs and their example repository, as well as the Helm Chart Template Developer’s Guide.

    The source code for the examples shown in this article is available on GitHub.

  • Posted on

    Self-Hosted Kubernetes with k3s

    Kubernetes has been the center of attention of the DevOps community for a few years now. Yet many people find it too daunting to try out for personal use. Running a single-node Kubernetes cluster for self-hosting is however not that complicated. In this article, I will show you how to set one up using k3s, a lightweight Kubernetes distribution.

    Preparing the server

    Before launching anything Kube-related, you might want to make sure access to your server is secured. Follow this guide for instructions on how to disable root login and enable public-key authentication if that’s not already the case.

    I have encountered issues running k3s with the newer nftables-based version of iptables, which comes enabled by default on many distributions (like Debian 10). You can check whether or not you’re using nftables with sudo iptables --version. If you are, switch back to the legacy iptables by executing:

    sudo update-alternatives --set iptables /usr/sbin/iptables-legacy 
    sudo reboot
    

    It’s also a good idea to set up basic firewall rules to filter incoming traffic. I apply these rules to the eth0 interface, which handles my server’s internet connection (find yours using ip a):

    sudo iptables -A INPUT -i eth0 -p tcp --dport 22 -j ACCEPT # Allow SSH
    sudo iptables -A INPUT -i eth0 -p tcp --dport 6443 -j ACCEPT # Allow kube API
    sudo iptables -A INPUT -i eth0 -p tcp --dport 10250 -j ACCEPT # Allow kube metrics
    sudo iptables -A INPUT -i eth0 -p icmp --icmp-type 8 -j ACCEPT # Allow ping
    sudo iptables -A INPUT -i eth0 -m state --state ESTABLISHED,RELATED -j ACCEPT # Allow responses to our requests
    sudo iptables -A INPUT -i eth0 -j DROP # Drop all other incoming packets
    

    Don’t worry about opening ports specific to the applications you want to run in your cluster, since k3s will handle that directly. To persist your iptables rules after a reboot, I use the iptables-persistent package:

    sudo apt install iptables-persistent
    sudo iptables-save | sudo tee /etc/iptables/rules.v4
    
    Too many servers
    Illustration by Taylor Vick

    Installing k3s

    The shell script located at https://get.k3s.io is meant to install k3s automatically. You can directly execute it by running:

    curl -sfL https://get.k3s.io | sh -
    

    You can check that the service is running with:

    sudo service k3s status
    

    You can already run basic kubectl commands from your server’s shell. For example, we can check that everything is up and running with the cluster-info command.

    sudo k3s kubectl cluster-info
    #Kubernetes master is running at https://127.0.0.1:6443
    #CoreDNS is running at https://127.0.0.1:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
    #Metrics-server is running at https://127.0.0.1:6443/api/v1/namespaces/kube-system/services/https:metrics-server:/proxy
    #To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
    

    Using kubectl and helm

    Now you will probably want to use kubectl to control your cluster without being logged in to your server. You will find instructions on how to install kubectl on your local machine here. Then, copy the configuration used by k3s, located at /etc/rancher/k3s/k3s.yaml on the server, to ~/.kube/config on your local machine. Alternatively, you can pass the config to kubectl with the --kubeconfig option. You will need to change the server IP in this configuration from 127.0.0.1 to your server’s external IP address.
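    For example, assuming a Linux workstation with SSH access to the server (you may need to adjust the file’s permissions on the server first, as k3s.yaml is only readable by root):

    ```shell
    # Copy the k3s kubeconfig from the server (replace the placeholders)
    scp [USER]@[SERVER_IP]:/etc/rancher/k3s/k3s.yaml ~/.kube/config

    # Point kubectl at the server's external address instead of localhost
    sed -i 's/127\.0\.0\.1/[SERVER_IP]/' ~/.kube/config
    ```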

    Now run the kubectl cluster-info command locally and you should see results similar to what you had on your server. A few useful kubectl commands to keep in mind are:

    • kubectl get pods -> list the running pods
    • kubectl top nodes -> show the used resources on your nodes
    • kubectl logs [pod_id] -> show the stdout of a pod

    To make installing software on your cluster easier, helm will act as a package manager. Install helm on your local machine by following these instructions, or just run this if you’re on Linux and trust their install script:

    curl https://raw.githubusercontent.com/helm/helm/master/scripts/get-helm-3 | bash
    

    Helm’s packages are called charts: they are sets of templates defining how to run specific software on Kubernetes. You can now start looking at the available charts in the Helm Hub or with the built-in command helm search hub [name].

    For example, say you want to install The Lounge (a very good self-hosted IRC client/bouncer). You will first need to add the repo that contains its chart:

    helm repo add halkeye https://halkeye.github.io/helm-charts/
    

    Running helm install will then be enough to set up The Lounge. Since you might want to access it through a specific domain, we will also ask helm to set up an ingress for it, by passing additional options to the chart.

    This gives us the full command:

    helm install thelounge halkeye/thelounge --version 4.0.6 --set ingress.enabled=true --set ingress.hosts.0="my.domain.com"
    

    Your cluster should now be ready for any software you want to run. In a future article, I will explain how you can make better use of helm to keep track of the applications you host.

subscribe via RSS