
If you’ve never heard of pi-hole, it’s a fantastic tool that blocks DNS requests to ad servers. That means you can surf the web without having to look at ads on every page.

I’m a big fan of running absolutely everything in docker, so I previously had a pi-hole container on my network, but it’s time to up my game.

Before we begin

For the purposes of this tutorial, I’m going to assume that you’ve assigned your Raspberry Pi a static IP address and know how to route DNS requests to it. If you’ve read my previous post on setting up a kubernetes cluster on Raspberry Pis, you’ll be able to follow along as I get pi-hole running in kubernetes.

And finally, if at any point you have questions about a definition’s schema, or I haven’t linked to the relevant documentation, you may be able to find answers in the kubernetes documentation.

Let’s get started

The biggest gotcha for running pi-hole containers is the need for persistent storage. Whereas volumes are a bit of a free-for-all in docker, they need to be specified much more explicitly in kubernetes. For this tutorial, we’re going to use local storage for our volumes. This isn’t the best way to do persistent storage, but it’s simpler to set up, and this was my first dive into kubernetes.

Here’s the docker command I used to create the pi-hole container as a reference for the values we’ll be building our manifest with:

docker run -d \
  --name pihole \
  -p 53:53/tcp -p 53:53/udp \
  -p 8000:80 \
  -e TZ="America/New_York" \
  -e WEBPASSWORD="secret" \
  -v /path/to/volume/etc/:/etc/pihole/ \
  -v /path/to/volume/dnsmasq.d/:/etc/dnsmasq.d/ \
  --dns=0.0.0.0 --dns=1.1.1.1 \
  pihole/pihole:latest

Define the StorageClass

Create a pihole.yaml file and we’ll get started building our manifest with a StorageClass definition. Keep in mind that kubernetes requires the manifest to be indented using spaces, not tabs.

---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer

Define a PersistentVolume

Now that our StorageClass is defined, we need to start using it. Next up is our first PersistentVolume definition.

As we’re adding things to pihole.yaml, make sure the definitions are delimited by ---. These definitions can also be stored in their own separate files, though it will change our final kubectl create command slightly.

---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pihole-local-etc-volume
  labels:
    directory: etc
spec:
  capacity:
    storage: 1Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local
  local:
    path: /path/to/volume/on/host
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - node1

Awesome! We now have a volume, a definition of how our host should allocate disk space to it, and a nodeAffinity rule that pins it to node1, where the data will actually live.
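One gotcha with the no-provisioner StorageClass: kubernetes won’t create the backing directory for us, so make sure it already exists on node1 before the pod is scheduled. Something along these lines, using whatever path you put in the definition above:

```shell
# Run on node1 -- local PersistentVolumes don't create their own directories.
sudo mkdir -p /path/to/volume/on/host
```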

Define a PersistentVolumeClaim

Next up is our PersistentVolumeClaim that our service will use to attach to that volume.

---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pihole-local-etc-claim
spec:
  storageClassName: local
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  selector:
    matchLabels:
      directory: etc
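One note: because our StorageClass uses WaitForFirstConsumer, this claim will sit in a Pending state until the pi-hole pod is actually scheduled. That’s expected, not an error. You can check on it with:

```shell
kubectl get pvc pihole-local-etc-claim
```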

Define a PV and PVC for dnsmasq.d

Now we need to create a PersistentVolume and PersistentVolumeClaim for dnsmasq.d (in addition to the ones defined above for etc). Make sure that the value you use for /path/to/volume/on/host here is different from the one used above.

---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pihole-local-dnsmasq-volume
  labels:
    directory: dnsmasq.d
spec:
  capacity:
    storage: 1Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local
  local:
    path: /path/to/volume/on/host
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - node1
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pihole-local-dnsmasq-claim
spec:
  storageClassName: local
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 500Mi
  selector:
    matchLabels:
      directory: dnsmasq.d

Define the Deployment

Almost done! There are many ways to control how our app gets deployed. For our use case, we’re going to use a Deployment, which specifies a desired state for our app that kubernetes then works to achieve and maintain.

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: pihole
  labels:
    app: pihole
spec:
  replicas: 1
  selector:
    matchLabels:
      app: pihole
  template:
    metadata:
      labels:
        app: pihole
        name: pihole
    spec:
      containers:
      - name: pihole
        image: pihole/pihole:latest
        imagePullPolicy: Always
        env:
        - name: TZ
          value: "America/New_York"
        - name: WEBPASSWORD
          value: "secret"
        volumeMounts:
        - name: pihole-local-etc-volume
          mountPath: "/etc/pihole"
        - name: pihole-local-dnsmasq-volume
          mountPath: "/etc/dnsmasq.d"
      volumes:
      - name: pihole-local-etc-volume
        persistentVolumeClaim:
          claimName: pihole-local-etc-claim
      - name: pihole-local-dnsmasq-volume
        persistentVolumeClaim:
          claimName: pihole-local-dnsmasq-claim
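A quick aside before we move on: the Deployment above stores WEBPASSWORD in plain text. If that bothers you, a kubernetes Secret can hold it instead. Here’s a rough sketch; the pihole-secret name is one I made up, so use whatever you like:

```yaml
# First: kubectl create secret generic pihole-secret --from-literal=WEBPASSWORD=secret
# Then replace the WEBPASSWORD entry under env: in the Deployment with:
- name: WEBPASSWORD
  valueFrom:
    secretKeyRef:
      name: pihole-secret
      key: WEBPASSWORD
```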

Define a Service

The last thing we need to do is define our pi-hole Service. A Service allows us to expose an application running on our cluster externally over the network.

---
apiVersion: v1
kind: Service
metadata:
  name: pihole
spec:
  selector:
    app: pihole
  ports:
  - port: 8000
    targetPort: 80
    name: pihole-admin
  - port: 53
    targetPort: 53
    protocol: TCP
    name: dns-tcp
  - port: 53
    targetPort: 53
    protocol: UDP
    name: dns-udp
  externalIPs:
  - node1.ip.address

For our simple use case, we’re going to directly assign the Service an external IP address. This prevents us from using some of kubernetes’ more useful features, like load balancing and external traffic policies, but it’s straightforward to understand and set up.

Time to deploy

Save the manifest and we’re ready to deploy pi-hole. Run kubectl create -f pihole.yaml and we’re all set! If you elected to separate your definitions into their own files, run kubectl create -f against each file (or against the directory containing them). You can use the following commands to keep an eye on your deployment and make sure everything is proceeding smoothly:
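These rely only on the app: pihole label we gave our Deployment:

```shell
# Watch the pod get scheduled and come up.
kubectl get pods -w

# If something looks stuck, check scheduling events and container logs.
kubectl describe pod -l app=pihole
kubectl logs -l app=pihole
```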

When experimenting with kubernetes myself, I found this page very helpful.

Now, go to http://node1.ip.address:8000/admin, enter the password we defined earlier (or the randomly generated one pulled from the logs), and check out the pi-hole dashboard! So long as you’ve directed your traffic to node1 for DNS requests, you’re now blocking ad domains.
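If you leave WEBPASSWORD out, pi-hole generates a random admin password at startup and prints it to the container logs. The exact wording varies by image version, but something like this should surface it:

```shell
kubectl logs -l app=pihole | grep -i password
```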


A note about our Service

As I mentioned earlier, we’re using a basic Service definition with an explicitly defined external IP address. The downside to this is that every request reflected in the pi-hole dashboard appears to come from a cluster-internal IP address (kube-dns’s, in my case) rather than from the device that made it. We still have all the data on where our requests are going, but this keeps us from understanding who’s making them.

This will likely be the subject of my next article, as I explore how to load balance multiple pi-hole instances and properly forward external traffic to them.