GitLab Runner in Kubernetes with MinIO cache (Helm 3)

Christof Aenderl
Oct 28, 2020

Install MinIO in a Kubernetes cluster and use it as cache storage for your GitLab CI Runner. We’ll use Helm 3 to install both.

As mentioned in my article about setting up GitLab CI with MinIO using Helm 2, the default one-click installation from the GitLab admin panel is absolutely simple and works fine as long as you have no special requirements. Well, a cache for your CI jobs is not really a special requirement. However, setting it up can be more complicated than you would expect.

Why MinIO? The GitLab Runner requires cloud storage such as Amazon S3 or Google Cloud Storage for its cache. With MinIO you can easily run an S3-compatible storage in your own cluster, even if you’re not using AWS. This would, for example, also work for playing around in Minikube.

The tutorial consists of three parts:

  1. Setting up namespace, service account and role
  2. Deploying MinIO using Helm 3
  3. Deploying the GitLab Runner using Helm 3

I assume you have Helm 3 installed and admin access to a Kubernetes cluster via kubectl.

Setting up namespace, service account and role

I’m using a dedicated namespace gitlab-runner for the installation to keep it better separated from other deployments in the cluster.

Create namespace gitlab-runner

kubectl create namespace gitlab-runner

Create a service account gitlab-runner in namespace gitlab-runner

kubectl create serviceaccount gitlab-runner -n gitlab-runner

Create the role with the required permissions in the namespace and bind it to the service account. This is defined in the file role-runner.yaml.
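
A minimal role-runner.yaml could look like the following sketch. The resource list is an assumption based on what the Kubernetes executor typically needs to spawn build pods; adjust it to your cluster’s policies.

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: gitlab-runner
  namespace: gitlab-runner
rules:
  # permissions the Kubernetes executor typically needs for build pods
  - apiGroups: [""]
    resources: ["pods", "pods/exec", "pods/attach", "secrets", "configmaps"]
    verbs: ["get", "list", "watch", "create", "patch", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: gitlab-runner
  namespace: gitlab-runner
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: gitlab-runner
subjects:
  # bind the role to the service account created above
  - kind: ServiceAccount
    name: gitlab-runner
    namespace: gitlab-runner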

Apply it to the cluster

kubectl create -f role-runner.yaml

Deploy MinIO

Before we deploy MinIO we need to provide a PersistentVolume for it. This step really depends on your cluster. If your cluster supports dynamic volume provisioning, for example, there’s nothing to do here because the deployment will automatically create the volume. This wasn’t the case for me, so I created a PersistentVolume and a PersistentVolumeClaim and told Helm to use the claim for MinIO.

To create the volume and the claim we’ll use the file minio-standalone-pvc.yaml, where we define a storage size of 4Gi. Set more if that’s not enough for your use case.
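
As a sketch, minio-standalone-pvc.yaml could look like this. The hostPath backend, the path and the names minio-pv/minio-pvc are assumptions; use whatever storage your cluster actually provides.

apiVersion: v1
kind: PersistentVolume
metadata:
  name: minio-pv                # name is an assumption
spec:
  capacity:
    storage: 4Gi                # the 4Gi mentioned above
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /data/minio           # hostPath is just an example backend
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: minio-pvc               # referenced later in minio/values.yaml
  namespace: gitlab-runner
spec:
  storageClassName: ""          # empty string disables dynamic provisioning for this claim
  volumeName: minio-pv          # bind explicitly to the PV above
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 4Gi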

Let’s create both, the PV and the PVC, on the cluster

kubectl create -f minio-standalone-pvc.yaml

Now we can install MinIO. But first we need to add the helm.min.io repo like this

helm repo add minio https://helm.min.io/

For the MinIO installation a few custom values are necessary, which we add to a minio/values.yaml file.

Those are mainly (a sketch of the full file follows the list):

  • securityContext: I disable it to run the MinIO container as root. This is certainly nothing you should do in a production or otherwise serious environment. Here it makes things easier because MinIO itself can then create the required export folder for the mounted volume inside the container.
  • accessKey and secretKey: I’m also not using anything secure here.
  • persistence: Set the PersistentVolumeClaim we created before. If your cluster supports dynamic volume provisioning, leave it out and the volume is created during deployment (you can set the size here).
  • defaultBucket: This creates a bucket on startup, or uses it if it already exists.
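
Put together, minio/values.yaml could look roughly like this sketch for the helm.min.io chart. The bucket name runner-cache and the claim name minio-pvc are assumptions carried over from the sketches above.

securityContext:
  enabled: false                # runs MinIO as root, nothing for production
accessKey: "minio"
secretKey: "minio123"
persistence:
  enabled: true
  existingClaim: minio-pvc      # the claim created above; drop this (and set size) for dynamic provisioning
defaultBucket:
  enabled: true
  name: runner-cache            # bucket name is an assumption; reuse it in the runner config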

All possible values and details about the installation are explained at https://github.com/minio/charts

The installation is just a single command using Helm 3

helm install minio minio/minio -n gitlab-runner -f minio/values.yaml

With helm install we set the release name, the chart, the namespace with -n, and the custom values file with -f.

Now you can check whether the MinIO pod and service are running. If you wish, you can also access the MinIO UI with a browser. Simply do a port-forward

kubectl port-forward service/minio -n gitlab-runner 9000:9000

and open localhost:9000 in your browser. Use the accessKey (minio) and secretKey (minio123) from minio/values.yaml above to log in.

And finally …

Deploy the GitLab Runner

Again, add the gitlab repo to helm

helm repo add gitlab https://charts.gitlab.io

and create a gitlab-runner/values.yaml file with custom settings as shown below. The gitlabUrl and runnerRegistrationToken are available in the GitLab Admin Area under Runners.

Worth mentioning here are:

  • rbac: We use the service account we created in the first step
  • cache: Settings to use our MinIO service

Again, security! I’m setting the runners to be privileged, which is of course easier but, depending on your environment, not recommended.
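
As a sketch for the chart versions of that time (around 0.20.x; in newer chart versions some of these keys moved into runners.config), gitlab-runner/values.yaml could look like this. The S3 server address, bucket name and secret name are assumptions.

gitlabUrl: https://gitlab.example.com/          # your GitLab instance URL
runnerRegistrationToken: "REGISTRATION_TOKEN"   # from the Admin Area, Runners
rbac:
  create: false
  serviceAccountName: gitlab-runner             # the service account from the first step
runners:
  privileged: true                              # easier, but not recommended everywhere
  cache:
    cacheType: s3
    cacheShared: true
    s3ServerAddress: minio.gitlab-runner.svc.cluster.local:9000
    s3BucketName: runner-cache                  # the defaultBucket from minio/values.yaml
    s3CacheInsecure: true                       # plain HTTP inside the cluster
    secretName: s3access                        # secret holding the MinIO credentials

The chart expects the MinIO credentials in a Kubernetes secret with the keys accesskey and secretkey; the secret name s3access above is an assumption and could be created like this

kubectl create secret generic s3access -n gitlab-runner --from-literal=accesskey=minio --from-literal=secretkey=minio123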

The GitLab Runner Helm chart is explained at https://docs.gitlab.com/runner/install/kubernetes.html

Let’s go, install it

helm install gitlab-runner gitlab/gitlab-runner -n gitlab-runner -f gitlab-runner/values.yaml

OK, that installed the latest version of gitlab-runner. But maybe you’re not running the latest version of GitLab (as is the case at my company). No problem, uninstall it

helm uninstall gitlab-runner -n gitlab-runner

search for the chart version that matches your GitLab installation

helm search repo -l gitlab/gitlab-runner

In the output, APP VERSION is the GitLab version. Then install it, e.g. for GitLab version 13.2

helm install gitlab-runner gitlab/gitlab-runner -n gitlab-runner -f gitlab-runner/values.yaml --version "0.20.2"

Done ✌️

Now you should see the newly created runner in your GitLab Admin Area, ready for further configuration inside GitLab.
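
To verify the cache is actually used, a hypothetical job in a project’s .gitlab-ci.yml could declare a cache like this (the key and paths are just an example):

build:
  stage: build
  script:
    - npm ci                    # whatever your build does
  cache:
    key: "$CI_COMMIT_REF_SLUG"  # one cache per branch
    paths:
      - node_modules/

The runner then uploads and downloads the cache archive to and from the MinIO bucket between jobs.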

Kubernetes dashboard showing the gitlab-runner namespace
