Federation V1, the current Kubernetes federation API, reuses the Kubernetes API resources 'as is'. Many of its features are considered alpha, and there is no clear path to evolve the API to GA. However, a Federation V2 effort is in progress to implement a dedicated federation API apart from the Kubernetes API. The details can be found on the sig-multicluster community page.
This page shows how to configure and deploy CoreDNS to be used as the DNS provider for Cluster Federation.
Support for `LoadBalancer` services in member clusters of the federation is mandatory to enable CoreDNS for service discovery across federated clusters.
CoreDNS can be deployed in various configurations. Explained below is a reference configuration that can be tweaked to suit the needs of the platform and the cluster federation.
To deploy CoreDNS, we shall make use of Helm charts. CoreDNS will be deployed with etcd as the backend, so etcd should be installed first. etcd can also be deployed using Helm charts. Shown below are the instructions to deploy etcd.
```shell
helm install --namespace my-namespace --name etcd-operator stable/etcd-operator
helm upgrade --namespace my-namespace --set cluster.enabled=true etcd-operator stable/etcd-operator
```
Note: The default etcd deployment configurations can be overridden to suit the host cluster.
After deployment succeeds, etcd can be accessed at the http://etcd-cluster.my-namespace:2379 endpoint within the host cluster.
The CoreDNS default configuration should be customized to suit the federation. Shown below is a Values.yaml that overrides the default configuration parameters of the CoreDNS chart.
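A Values.yaml along the following lines matches the parameters explained below; the etcd endpoint and the `example.com.` zone are illustrative and should be adapted to your setup:

```yaml
isClusterService: false
serviceType: "LoadBalancer"
plugins:
  kubernetes:
    enabled: false
  etcd:
    enabled: true
    zones:
    - "example.com."
    endpoint: "http://etcd-cluster.my-namespace:2379"
```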
The above configuration file needs some explanation:
- `isClusterService` specifies whether CoreDNS should be deployed as a cluster service, which is the default. You need to set it to `false`, so that CoreDNS is deployed as a Kubernetes application service.
- `serviceType` specifies the type of Kubernetes service to be created for CoreDNS. You need to choose either "LoadBalancer" or "NodePort" to make the CoreDNS service accessible outside the Kubernetes cluster.
- Disable `plugins.kubernetes`, which is enabled by default, by setting `plugins.kubernetes.enabled` to `false`.
- Enable the etcd plugin and configure the federation zones for which CoreDNS should be authoritative by setting `plugins.etcd.enabled` to `true` and `plugins.etcd.zones` as shown above.
Now deploy CoreDNS by running:

```shell
helm install --namespace my-namespace --name coredns -f Values.yaml stable/coredns
```
Verify that both etcd and CoreDNS pods are running as expected.
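For example, the pods can be listed with kubectl (the `my-namespace` namespace follows the commands above):

```shell
kubectl get pods --namespace my-namespace
```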
The Federation control plane can be deployed using `kubefed init`. CoreDNS can be chosen as the DNS provider by specifying two additional parameters, `--dns-provider` and `--dns-provider-config`.
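For illustration, assuming a federation named `federation` and the `example.com.` domain used above, the invocation could look like the following (the host cluster context is a placeholder to substitute):

```shell
kubefed init federation \
    --host-cluster-context=<host-cluster-context> \
    --dns-provider="coredns" \
    --dns-zone-name="example.com." \
    --dns-provider-config="coredns-provider.conf"
```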
`coredns-provider.conf` has the below format:

```
[Global]
etcd-endpoints = http://etcd-cluster.my-namespace:2379
zones = example.com.
coredns-endpoints = <coredns-server-ip>:<port>
```
- `etcd-endpoints` is the endpoint to access etcd.
- `zones` is the federation domain for which CoreDNS is authoritative and is the same as the `--dns-zone-name` flag of `kubefed init`.
- `coredns-endpoints` is the endpoint to access the CoreDNS server. This is an optional parameter introduced from v1.7 onwards.
Note: `plugins.etcd.zones` in the CoreDNS configuration and the `--dns-zone-name` flag to `kubefed init` should match.
Note: The following section applies only to versions prior to v1.7 and is taken care of automatically from v1.7 onwards if the `coredns-endpoints` parameter is configured in `coredns-provider.conf` as described above.
Once the federation control plane is deployed and federated clusters are joined to the federation, you need to add the CoreDNS server to the pod's nameserver resolv.conf chain in all the federated clusters, as this self-hosted CoreDNS server is not discoverable publicly. This can be achieved by adding the below line to the `dnsmasq` container's args in the kube-dns deployment of all federated clusters. Replace `example.com` with the federation domain.
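The added argument takes the form below; the endpoint placeholder follows the `coredns-provider.conf` example above and stands for the externally reachable address of the CoreDNS service:

```
--server=/example.com./<coredns-server-ip>:<port>
```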
Now the federated cluster is ready for cross-cluster service discovery!