Enable Policy Management with Fleet

The easiest way to enable policy management with Fleet.

In this tutorial we’ll cover the basics of how to use Fleet to manage policies on a group of clusters.


Fleet’s multi-cluster policy management is built on top of Kyverno. The overall architecture is shown below:


  1. Set up the Fleet manager by following the instructions in the installation guide.

  2. Run the following commands to create two secrets for accessing the attached clusters.

kubectl create secret generic kurator-member1 --from-file=kurator-member1.config=/root/.kube/kurator-member1.config
kubectl create secret generic kurator-member2 --from-file=kurator-member2.config=/root/.kube/kurator-member2.config

Create a fleet with pod security policy enabled

Run the following command to enable the baseline pod security check:

kubectl apply -f examples/fleet/policy/kyverno.yaml
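The referenced `kyverno.yaml` defines a Fleet with the Kyverno policy plugin enabled. A sketch along these lines (exact field names may differ between Kurator versions; check the file in the repository for the authoritative spec):

```yaml
apiVersion: fleet.kurator.dev/v1alpha1
kind: Fleet
metadata:
  name: quickstart
  namespace: default
spec:
  # The two attached clusters registered via the secrets created above.
  clusters:
    - name: kurator-member1
      kind: AttachedCluster
    - name: kurator-member2
      kind: AttachedCluster
  plugin:
    policy:
      kyverno:
        podSecurity:
          # Enforce the "baseline" Pod Security Standard in audit mode,
          # so violations are reported rather than blocked.
          standard: baseline
          severity: medium
          validationFailureAction: Audit
```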

After a while, we can see the fleet is ready:

kubectl wait fleet quickstart --for=jsonpath='{.status.phase}'=Ready

Verify pod security policy

Run the following command to create an invalid pod in the fleet:

cat <<EOF | kubectl apply -f -
apiVersion: apps.kurator.dev/v1alpha1
kind: Application
metadata:
  name: kyverno-policy-demo
  namespace: default
spec:
  source:
    gitRepository:
      interval: 3m0s
      ref:
        branch: main
      timeout: 1m0s
      url: https://github.com/kurator-dev/kurator
  syncPolicies:
    - destination:
        fleet: quickstart
      kustomization:
        interval: 5m0s
        path: ./examples/fleet/policy/badpod-demo
        prune: true
        timeout: 2m0s
EOF

After a while, you can check the policy report with the following command:

kubectl get policyreport --kubeconfig=/root/.kube/kurator-member1.config

You will see warning messages like the following:

NAME                                  PASS   FAIL   WARN   ERROR   SKIP   AGE
cpol-disallow-capabilities            1      0      0      0       0      17s
cpol-disallow-host-namespaces         0      1      0      0       0      17s
cpol-disallow-host-path               1      0      0      0       0      17s
cpol-disallow-host-ports              1      0      0      0       0      17s
cpol-disallow-host-process            1      0      0      0       0      17s
cpol-disallow-privileged-containers   1      0      0      0       0      17s
cpol-disallow-proc-mount              1      0      0      0       0      17s
cpol-disallow-selinux                 2      0      0      0       0      17s
cpol-restrict-apparmor-profiles       1      0      0      0       0      17s
cpol-restrict-seccomp                 1      0      0      0       0      17s
cpol-restrict-sysctls                 1      0      0      0       0      17s

Check the pod events:

kubectl describe pod badpod --kubeconfig=/root/.kube/kurator-member1.config | grep PolicyViolation
  Warning  PolicyViolation  90s    kyverno-scan       policy disallow-host-namespaces/host-namespaces fail: validation error: Sharing the host namespaces is disallowed. The fields spec.hostNetwork, spec.hostIPC, and spec.hostPID must be unset or set to `false`. rule host-namespaces failed at path /spec/hostIPC/

Apply more policies with fleet application

You can find more policies in the Kyverno policy library, and sync them to clusters with a Fleet Application.
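For example, an Application could sync additional Kyverno ClusterPolicy manifests from your own Git repository to every cluster in the fleet. The repository URL, path, and name below are placeholders, not part of the Kurator examples:

```yaml
apiVersion: apps.kurator.dev/v1alpha1
kind: Application
metadata:
  name: kyverno-more-policies   # hypothetical name
  namespace: default
spec:
  source:
    gitRepository:
      interval: 3m0s
      ref:
        branch: main
      timeout: 1m0s
      # Placeholder: replace with the repository holding your policy manifests.
      url: https://github.com/<your-org>/<your-policy-repo>
  syncPolicies:
    - destination:
        # Target the fleet created earlier; policies are applied to all member clusters.
        fleet: quickstart
      kustomization:
        interval: 5m0s
        # Placeholder: directory containing Kyverno ClusterPolicy manifests.
        path: ./policies
        prune: true
        timeout: 2m0s
```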


Delete the fleet you created:

kubectl delete application kyverno-policy-demo
kubectl delete fleet quickstart

Uninstall fleet manager:

helm uninstall kurator-fleet-manager -n kurator-system

IMPORTANT: To ensure a proper cleanup of your infrastructure, you must always delete the cluster objects. Deleting the entire cluster template with kubectl delete -f capi-quickstart.yaml may leave pending resources that have to be cleaned up manually.

kubectl delete cluster --all

Uninstall cluster operator:

helm uninstall kurator-cluster-operator -n kurator-system

Optionally, clean up the CRDs:

kubectl delete crd $(kubectl get crds | grep cluster.x-k8s.io | awk '{print $1}')
kubectl delete crd $(kubectl get crds | grep kurator.dev | awk '{print $1}')

Optionally, delete the namespace:

kubectl delete ns kurator-system

Optionally, uninstall cert-manager:

helm uninstall -n cert-manager cert-manager

Optionally, shut down the cluster:

kind delete cluster --name kurator