In this tutorial I will explain how to configure and expose an external DNS server for a K3s cluster using k8s_gateway (now archived).
Before starting, I suggest you create a git repository, cluster-management, where we will keep the various YAML files we deploy, so that you can track the modifications you make and rebuild the cluster from scratch if you ever need to.
For this part you will need to have dig installed to perform DNS queries.
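If dig is missing, it usually ships in your distribution's DNS utilities package; which package depends on your system (the firewalld commands later suggest a Fedora/RHEL-like machine, but both variants are shown here as an assumption):

sudo dnf install bind-utils    # Fedora / RHEL-like
sudo apt install dnsutils      # Debian / Ubuntu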
We will call MASTER_IP the IPv4 address of your master node and MASTER_FQDN a registered domain name. We will fake having two nameservers (ns1.${MASTER_FQDN} and ns2.${MASTER_FQDN}) that both point to the same machine. Proceed by registering these custom nameservers, together with their glue records, at your domain registrar, both pointing to ${MASTER_IP}.
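Once the registrar has processed the change, you can inspect the delegation from any machine. The trace below should end with the TLD servers listing ns1 and ns2 (the very last query will fail until our DNS service is actually deployed):

dig +trace NS ${MASTER_FQDN}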
Then we can configure and deploy the following example for the external DNS service.
# 01-dns.yaml
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: excoredns
  namespace: kube-system
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: excoredns
  namespace: kube-system
data:
  Corefile: |-
    .:53 {
        errors
        log
        ready
        k8s_gateway ${MASTER_FQDN} {
            resources Ingress
            ttl 1800
            fallthrough
        }
        file /etc/coredns/db.${MASTER_FQDN}
        forward . /etc/resolv.conf
        cache 30
        loop
        reload
        loadbalance
    }
  # Below entry is written in RFC 1035 format and allows other kinds of records to be inserted
  db.${MASTER_FQDN}: |-
    $ORIGIN ${MASTER_FQDN}.
    @ IN SOA ns1.${MASTER_FQDN}. ns2.${MASTER_FQDN}. (
        2022042701 ; SERIAL
        1800       ; REFRESH
        600        ; RETRY
        3600000    ; EXPIRE
        60         ; MINIMUM
    )
    ; Glue Records
    ns1 IN A ${MASTER_IP}
    ns2 IN A ${MASTER_IP}
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: excoredns
rules:
  - apiGroups:
      - ""
    resources:
      - services
      - namespaces
    verbs:
      - list
      - watch
  - apiGroups:
      - extensions
      - networking.k8s.io
    resources:
      - ingresses
    verbs:
      - list
      - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: excoredns
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: excoredns
subjects:
  - kind: ServiceAccount
    name: excoredns
    namespace: kube-system
---
apiVersion: v1
kind: Service
metadata:
  name: external-dns
  namespace: kube-system
spec:
  selector:
    k8s-app: "excoredns"
  ports:
    - {port: 53, protocol: UDP, name: udp-53}
  type: LoadBalancer
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: excoredns
  namespace: kube-system
spec:
  replicas: 1
  selector:
    matchLabels:
      k8s-app: "excoredns"
  template:
    metadata:
      labels:
        k8s-app: "excoredns"
    spec:
      serviceAccountName: excoredns
      dnsPolicy: ClusterFirst
      containers:
        - name: "coredns"
          image: "quay.io/oriedge/k8s_gateway"
          imagePullPolicy: IfNotPresent
          args: [ "-conf", "/etc/coredns/Corefile" ]
          volumeMounts:
            - name: config-volume
              mountPath: /etc/coredns
          resources:
            limits:
              cpu: 100m
              memory: 128Mi
            requests:
              cpu: 100m
              memory: 128Mi
          ports:
            - {containerPort: 53, protocol: UDP, name: udp-53}
            - {containerPort: 53, protocol: TCP, name: tcp-53}
      volumes:
        - name: config-volume
          configMap:
            name: excoredns
            items:
              - key: Corefile
                path: Corefile
              - key: db.${MASTER_FQDN}
                path: db.${MASTER_FQDN}
kubectl apply -f 01-dns.yaml
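You can verify that the pod came up and watch its logs (the k8s-app label matches the selector in the Deployment above):

kubectl -n kube-system get pods -l k8s-app=excoredns
kubectl -n kube-system logs deployment/excoredns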
The plugin only supports A records, which it creates automatically from Ingress definitions, as shown in the sketch below.
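For instance, assuming a whoami Service already exists in the default namespace (both the Service name and the hostname are illustrative), a minimal Ingress like this is all the plugin needs:

# whoami-ingress.yaml (illustrative sketch)
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: whoami
  namespace: default
spec:
  rules:
    # k8s_gateway watches Ingresses and serves an A record for this host
    - host: whoami.${MASTER_FQDN}
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: whoami    # assumed pre-existing Service
                port:
                  number: 80

After applying it, dig @${MASTER_IP} whoami.${MASTER_FQDN} should answer with an A record, without touching the zone file.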
To insert other kinds of records you can edit the db.${MASTER_FQDN} ConfigMap entry, which is written in RFC 1035 format. Always remember to update the SERIAL value, otherwise caching resolvers will keep serving the old records!
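As a sketch, here is the same zone entry extended with a TXT and a CNAME record (the record values are placeholders), with the SERIAL bumped accordingly:

  db.${MASTER_FQDN}: |-
    $ORIGIN ${MASTER_FQDN}.
    @ IN SOA ns1.${MASTER_FQDN}. ns2.${MASTER_FQDN}. (
        2022042702 ; SERIAL, incremented after the edit
        1800       ; REFRESH
        600        ; RETRY
        3600000    ; EXPIRE
        60         ; MINIMUM
    )
    ; Glue Records
    ns1 IN A ${MASTER_IP}
    ns2 IN A ${MASTER_IP}
    ; Extra records (placeholder values)
    @    IN TXT "v=spf1 -all"
    blog IN CNAME ${MASTER_FQDN}.

Re-apply the manifest with kubectl apply -f 01-dns.yaml; once the updated ConfigMap propagates to the pod, CoreDNS will reload the zone (you can force it with kubectl -n kube-system rollout restart deployment excoredns).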
Remember to open port 53/udp on the firewall to let DNS queries through:
sudo firewall-cmd --permanent --add-port=53/udp
sudo firewall-cmd --reload
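If your server uses ufw instead of firewalld, the equivalent (assuming default policies) would be:

sudo ufw allow 53/udp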
You can now test that the service exists:
kubectl get svc -n kube-system
and look for the CLUSTER-IP of external-dns; then, from the master node, test resolution of the glue records:
dig @${cluster-ip} ns1.${MASTER_FQDN}
and test them also from another computer (one that resolves through the public DNS):
dig @${MASTER_IP} ns1.${MASTER_FQDN}
dig ns1.${MASTER_FQDN}
and they should both resolve correctly. The second query may take a while to succeed, since you have to wait for the records to propagate through the public DNS.
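As a final sanity check, and after any later zone edit, you can ask the server which version of the zone it is serving; the third field of the answer is the SERIAL and should match the one in your ConfigMap:

dig @${MASTER_IP} ${MASTER_FQDN} SOA +short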