GitLab Runner in Kubernetes with MinIO cache

Christof Aenderl
Oct 16, 2019

Set up a GitLab Runner in Kubernetes using MinIO for caching, with the Helm 2 package manager.

An updated version for Helm 3 can be found here.

Connecting a Kubernetes cluster to GitLab is pretty simple. Even simpler is installing a GitLab Runner from the GitLab admin area under Kubernetes Applications: installing “Helm Tiller” (on older GitLab versions) is one click, and “GitLab Runner” is the second. That works, but …

The runner installed this way has a simple default configuration and doesn’t use any cache. That’s fine for just trying out CI in GitLab, but for a serious installation you’ll want to tweak the settings and use a cache.

There are several guides for setting up a GitLab Runner in Kubernetes, but it took me some time to figure out how it all fits together and finally make it work. I hope this step-by-step guide provides some help or hints. Still, I’m not a Kubernetes expert, so if you find something to improve, I’m happy to get feedback.

We use MinIO because the GitLab Runner in Kubernetes needs a distributed cache, which can be either S3 or GCS. If you want a cache within your cluster, the MinIO service emulates the S3 cache type. So in the gitlab-runner settings we configure S3 pointing to our minio-service.

Requirements:

* A Kubernetes cluster (or minikube). I used a cluster with version 1.14.5

* Install the Helm client: https://helm.sh/docs/using_helm/

* Ensure that kubectl is using the right cluster (Helm uses the kubectl context)

First I’ll create a dedicated namespace and a role in that namespace that is used by the service account. Using a namespace and a limited service account adds a bit of complexity. If you don’t need the namespace, things get easier.

Create namespace gitlab-runner

kubectl create namespace gitlab-runner

Create a service account gitlab-runner in namespace gitlab-runner

kubectl create serviceaccount gitlab-runner -n gitlab-runner

Create the role with permissions in the namespace and bind it to the service account, as defined in the file role-runner.yaml

role-runner.yaml
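
For illustration, a minimal version of this file could look like the sketch below. The resources and verbs listed here are an assumption — restrict them to what your runner jobs actually need:

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: gitlab-runner
  namespace: gitlab-runner
rules:
  # Runner jobs mainly create pods and attach to them; secrets and
  # configmaps are used for build state. Adjust as needed.
  - apiGroups: [""]
    resources: ["pods", "pods/exec", "pods/attach", "secrets", "configmaps"]
    verbs: ["get", "list", "watch", "create", "patch", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: gitlab-runner
  namespace: gitlab-runner
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: gitlab-runner
subjects:
  - kind: ServiceAccount
    name: gitlab-runner
    namespace: gitlab-runner
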
kubectl create -f role-runner.yaml

Init Helm for the namespace and let it install Tiller there, using the service account created before.

helm init --service-account gitlab-runner --tiller-namespace gitlab-runner

Now, with Helm and Tiller ready, we can install MinIO and the GitLab Runner.

Let’s start with MinIO. Because the service account of the namespace is limited, it can’t create a persistent volume. Usually Helm would do this, but here we have to create the volume manually upfront and tell Helm to use the existing one.

The file minio-standalone-pvc.yaml defines the config for the PersistentVolume and the claim for the volume. In this example a size of 4 GB is used. If you plan to use more, increase the value.

minio-standalone-pvc.yaml
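
A sketch of what this file could look like. The hostPath and the “manual” storage class are assumptions for a simple single-node setup — in a real cluster use a volume type your infrastructure provides:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: minio-pv
spec:
  storageClassName: manual
  capacity:
    storage: 4Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  # Assumption: a local path on the node; replace with your storage backend
  hostPath:
    path: /data/minio
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: minio-pvc
  namespace: gitlab-runner
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 4Gi
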
kubectl create -f minio-standalone-pvc.yaml

Now we install MinIO with Helm and add some custom settings, which we put inside a minio/values.yaml file. For more details on the MinIO Helm chart see: https://github.com/helm/charts/tree/master/stable/minio

Important here is that we set the accessKey and secretKey that will later be used by the GitLab Runner installation. We also create the default bucket that the GitLab Runner will use to store the cache files.

minio/values.yaml
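
Again a sketch, based on the stable/minio chart of that time — exact value names can differ between chart versions, so check the chart’s own values.yaml. The claim name must match the PVC created above; the bucket name “runner-cache” is a placeholder we’ll reuse in the runner config:

accessKey: "minio"
secretKey: "minio123"
mode: standalone
persistence:
  enabled: true
  # use the volume we created manually instead of letting Helm create one
  existingClaim: minio-pvc
defaultBucket:
  enabled: true
  # the bucket the GitLab Runner will use for its cache files
  name: runner-cache
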
helm --tiller-namespace gitlab-runner install --namespace gitlab-runner --name minio -f minio/values.yaml stable/minio

This step is optional: if you want to access the MinIO browser from outside Kubernetes, you can create a NodePort or a temporary port-forward.

The NodePort is defined in minio-standalone-service.yaml as a separate service.

minio-standalone-service.yaml
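
A sketch of such a service. The selector labels are an assumption based on the labels the stable/minio chart gives its pods; verify them with kubectl get pods -n gitlab-runner --show-labels:

apiVersion: v1
kind: Service
metadata:
  name: minio-service-ext
  namespace: gitlab-runner
spec:
  type: NodePort
  selector:
    app: minio
    release: minio
  ports:
    - name: http
      port: 9000
      targetPort: 9000
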
kubectl create -f minio-standalone-service.yaml

Find out your cluster IP:

kubectl cluster-info

and the port the minio-service is mapped to:

kubectl get services -n gitlab-runner

From the output of these two commands you can read the cluster IP and the port that is exposed.
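
Trimmed down to the relevant lines, the output could look like this (the IP and NodePort below are from my setup and will differ in yours):

Kubernetes master is running at https://10.159.0.133:6443

NAME                TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)          AGE
minio               ClusterIP   10.43.0.12   <none>        9000/TCP         10m
minio-service-ext   NodePort    10.43.0.47   <none>        9000:32028/TCP   2m
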

The minio-service-ext service is mapped to port 32028. Open http://10.159.0.133:32028 in a browser and you should see the MinIO login page.

The login uses the accessKey and secretKey we set to “minio” and “minio123” in the values.yaml for the MinIO install.

As mentioned, there’s also the temporary solution of creating a port-forward. Just run:

kubectl port-forward service/minio -n gitlab-runner 9000:9000

and open localhost:9000 in your browser.

OK, finally, let’s install the GitLab Runner itself. This is described in more detail here: https://docs.gitlab.com/runner/install/kubernetes.html#configuring-gitlab-runner-using-the-helm-chart

Helm should have created a secret named “minio” in the namespace, which is used by the gitlab-runner. If that’s not the case, create one.

Check if secret exists:

kubectl get secrets -n gitlab-runner

Create it if secret doesn’t exist:

kubectl create secret generic minio --from-literal=accesskey=minio --from-literal=secretkey=minio123 -n gitlab-runner

The gitlab-runner chart is not part of the stable Helm repo, so we have to add the GitLab chart repo to Helm.

helm repo add gitlab https://charts.gitlab.io

We also set some custom values for this Helm chart in runner/values.yaml.

On the GitLab site you’ll find more details about all the config values in the Helm chart. This is the minimal set, including the settings for the cache.

runner/values.yaml
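
A sketch of that minimal set, using the value names of the 2019-era gitlab-runner chart (check your chart version’s values.yaml). The gitlabUrl and runnerRegistrationToken are placeholders for your GitLab instance, and the bucket name must match the defaultBucket from the MinIO values above:

# URL and registration token of your GitLab instance (placeholders)
gitlabUrl: https://gitlab.example.com/
runnerRegistrationToken: "REGISTRATION_TOKEN"

# we use the service account created earlier instead of letting the
# chart create its own RBAC resources
rbac:
  create: false
  serviceAccountName: gitlab-runner

runners:
  namespace: gitlab-runner
  cache:
    cacheType: s3
    cacheShared: true
    # "minio" is the service the MinIO Helm release created in the same namespace
    s3ServerAddress: minio:9000
    s3BucketName: runner-cache
    s3CacheInsecure: true
    # the secret holding accesskey/secretkey, see above
    secretName: minio
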

Now install it to the namespace. Take care: you might want to install an older chart version, depending on your GitLab server version.

helm --tiller-namespace gitlab-runner install --namespace gitlab-runner --name gitlab-runner -f runner/values.yaml gitlab/gitlab-runner
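
It can take a moment until the runner pod is running and the runner has registered itself. You can check with:

kubectl get pods -n gitlab-runner
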

Done ✌️

Now you should see the newly created runner in your GitLab Admin Area, ready for further configuration inside GitLab.

Please note that we haven’t really considered security and performance in this setup. You might also need different settings depending on your Kubernetes version and installation. The example above runs on Kubernetes v1.14.5.
