Cluster API was released in 2021. I had known about it for a long time, but only got around to working with it in 2026. The point of the project is to let you create and delete clusters easily, without the extra automation you would otherwise need to set up a node or cluster with the kubeadm tool. You just write a YAML file and apply it; based on the provider you define, Cluster API provisions the cluster automatically using kubeadm, the common bootstrapping tool.
Kubernetes audit logs are written by the API server. They record API requests (subject, verb, resource, and optionally request or response bodies) according to an audit policy (audit.k8s.io). You turn them on with kube-apiserver flags such as --audit-log-path and --audit-policy-file, plus rotation settings.
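For orientation, the smallest useful policy file records metadata (who, what, when) for every request. This is a sketch, not a recommendation; the four audit levels are None, Metadata, Request, and RequestResponse:

```yaml
apiVersion: audit.k8s.io/v1
kind: Policy
rules:
# Log who did what and when, but no request/response bodies.
- level: Metadata
```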
If you’re thinking of Rancher: Cluster API has no GUI, but it does a similar job by letting you manage clusters programmatically. You need a cluster to host Cluster API itself, and you can follow the installation steps in the documentation at https://cluster-api.sigs.k8s.io/user/quick-start.html. clusterctl will be our main tool for interacting with the cluster.
The provider I’m using with Cluster API is Incus (https://capn.linuxcontainers.org/tutorial/quick-start.html), a fork of LXD. I host it on my homelab and use kind to deploy the Cluster API components.
You can generate a starting cluster manifest with the command below. Swap in your cluster name, Kubernetes version, and control plane or worker counts.
clusterctl generate cluster <cluster_name> -i incus --kubernetes-version v1.3x.0 --control-plane-machine-count 1 --worker-machine-count 1 > cluster.yaml
The generated manifest does not expose every kube-apiserver audit flag the way a raw kubeadm config would. In Cluster API, the knob you want is the KubeadmControlPlaneTemplate: its spec.template.spec.kubeadmConfigSpec is what kubeadm uses when it bootstraps the control plane (files dropped on disk, clusterConfiguration, patches, and so on). A trimmed example of that shape looks like this:
spec:
  template:
    spec:
      kubeadmConfigSpec:
        files:
        - content: placeholder file to prevent kubelet path not found errors
          owner: root:root
          path: /etc/kubernetes/manifests/.placeholder
          permissions: "0400"
        - content: placeholder file to prevent kubeadm path not found errors
          owner: root:root
          path: /etc/kubernetes/patches/.placeholder
          permissions: "0400"
        - content: |
            ---
            kind: KubeProxyConfiguration
            apiVersion: kubeproxy.config.k8s.io/v1alpha1
            mode: iptables
            conntrack:
              maxPerCore: 0
          owner: root:root
          path: /run/kubeadm/hack-kube-proxy-config-lxc.yaml
          permissions: "0444"
        ...
To add extra configuration that the defaults don’t include, use clusterConfiguration and put the kubeadm bits you need there. Because I don’t want to change the default templates only to add audit logs, I clone the existing configuration and edit the files manually. Below are the objects we need for the new cluster configuration.
kubectl get clusterclass capn-default -o yaml > capn-audit-clusterclass.yaml
kubectl get kubeadmcontrolplanetemplates capn-default-control-plane -o yaml > capn-audit-control-plane.yaml
kubectl get lxcclustertemplate capn-default-lxc-cluster -o yaml > capn-audit-lxc-cluster.yaml
kubectl get lxcmachinetemplates capn-default-control-plane -o yaml > capn-audit-machine-template.yaml
The names above match a default ClusterClass bundle for the Incus/CAPN provider. The ClusterClass references infrastructure and control plane templates. KubeadmControlPlaneTemplate holds kubeadm and API server settings. The LXC templates describe how nodes are created.
Rename references so new objects do not collide with capn-default on the management cluster. After this we use capn-audit as the new ClusterClass name when provisioning clusters that should include audit configuration.
sed -i \
-e 's/name: capn-default$/name: capn-audit/' \
-e 's/capn-default-control-plane/capn-audit-control-plane/g' \
-e 's/capn-default-lxc-cluster/capn-audit-lxc-cluster/g' \
capn-audit-clusterclass.yaml
sed -i 's/name: capn-default-control-plane/name: capn-audit-control-plane/' capn-audit-control-plane.yaml
sed -i 's/capn-default-lxc-cluster/capn-audit-lxc-cluster/g' capn-audit-lxc-cluster.yaml
sed -i 's/capn-default-control-plane/capn-audit-control-plane/g' capn-audit-machine-template.yaml
Enable audit logs in the control plane template file below.
nano capn-audit-control-plane.yaml
Add the configuration below, along with the volume, since the policy file needs to be in place before the API server can use it. In kubeadm, the files section writes /etc/kubernetes/audit/policy.yaml to the node during bootstrap, similar to how you would add other configuration files or drop-ins.
clusterConfiguration:
  apiServer:
    extraArgs:
    - name: audit-log-maxage
      value: "10"
    - name: audit-log-maxbackup
      value: "5"
    - name: audit-log-maxsize
      value: "30"
    - name: audit-log-path
      value: /var/log/kubernetes/audit/audit.log
    - name: audit-policy-file
      value: /etc/kubernetes/audit/policy.yaml
    extraVolumes:
    - hostPath: /etc/kubernetes/audit/policy.yaml
      mountPath: /etc/kubernetes/audit/policy.yaml
      name: audit-policy
      pathType: File
      readOnly: true
    - hostPath: /var/log/kubernetes/audit
      mountPath: /var/log/kubernetes/audit
      name: audit-logs
      pathType: DirectoryOrCreate
The next configuration goes under files:.
- content: |
    apiVersion: audit.k8s.io/v1
    kind: Policy
    rules:
    - level: RequestResponse
  owner: root:root
  path: /etc/kubernetes/audit/policy.yaml
  permissions: "0600"
The sample rule uses RequestResponse for everything so you can confirm logging quickly. It is noisy on disk and may capture sensitive payloads, so tighten the policy before you rely on this in production (see the upstream audit documentation).
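As a sketch of what a tighter policy can look like (the rules here are illustrative, not a vetted baseline): skip the RequestReceived stage so each request is not logged twice, keep Secrets and ConfigMaps at metadata level so their payloads never land on disk, and reserve full bodies for write operations.

```yaml
apiVersion: audit.k8s.io/v1
kind: Policy
omitStages:
- RequestReceived
rules:
# Never record bodies for resources that carry sensitive data.
- level: Metadata
  resources:
  - group: ""
    resources: ["secrets", "configmaps"]
# Full request/response bodies only for writes.
- level: RequestResponse
  verbs: ["create", "update", "patch", "delete"]
# Everything else: metadata only.
- level: Metadata
```

Rules are evaluated top to bottom and the first match wins, so the specific Secrets rule must come before the broad write rule.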
Two of the exported files need manual edits: remove the annotations and ownerReferences blocks from the metadata of capn-audit-control-plane.yaml and capn-audit-lxc-cluster.yaml, since stale ownerReferences would tie the new objects to the original capn-default resources.
Once we are all set, you can apply the new configuration using the command below.
kubectl apply -f capn-audit-clusterclass.yaml
kubectl apply -f capn-audit-lxc-cluster.yaml
kubectl apply -f capn-audit-control-plane.yaml
kubectl apply -f capn-audit-machine-template.yaml
Now, to use the new configuration in a new cluster, update topology.classRef.name to the new class capn-audit. This tells the cluster to use the updated template that includes the audit policy and related settings.
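The relevant part of the Cluster manifest would look something like this (the cluster name is a placeholder, and the apiVersion depends on your Cluster API release):

```yaml
apiVersion: cluster.x-k8s.io/v1beta2
kind: Cluster
metadata:
  name: audit-demo
spec:
  topology:
    classRef:
      name: capn-audit   # was capn-default
    version: v1.3x.0     # your Kubernetes version
    ...
```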
Once the workload cluster is ready, use that cluster’s kubeconfig and verify that audit-related flags show up on the API server container:
kubectl get pod -n kube-system -l component=kube-apiserver \
-o jsonpath='{.items[0].spec.containers[0].command}' | tr ',' '\n' | grep -i audit
You should see lines similar to:
"--audit-log-maxage=10"
"--audit-log-maxbackup=5"
"--audit-log-maxsize=30"
"--audit-log-path=/var/log/kubernetes/audit/audit.log"
"--audit-policy-file=/etc/kubernetes/audit/policy.yaml"