Category: Kubernetes (AKS)

AKS | OPA Gatekeeper Dashboard

Hi!

In a previous article, I showed you how to deploy an OPA Gatekeeper solution in your AKS cluster, and in a second article we saw how to monitor the number of OPA Gatekeeper violations.

In this article I will show you how to deploy a dashboard to monitor your OPA Gatekeeper violations. I recommend the Gatekeeper Policy Manager (GPM) solution created by Sighupio.

The solution is very easy to deploy: clone the project repository and run the following command from it:

kubectl apply -k .
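
For reference, the full sequence looks like this (the repository URL is an assumption based on the project name; GPM is published under the sighupio organization on GitHub):

git clone https://github.com/sighupio/gatekeeper-policy-manager.git
cd gatekeeper-policy-manager
kubectl apply -k .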

Once you’ve deployed the application, if you haven’t set up an ingress, you can access the web UI using a port-forward (then browse to http://localhost:8080):

kubectl -n gatekeeper-system port-forward  svc/gatekeeper-policy-manager 8080:80

For production use of this solution, I recommend configuring OIDC authentication.
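
As a starting point, GPM reads its OIDC settings from environment variables on its Deployment. A minimal sketch, assuming the variable names documented in the GPM README (verify them against the project documentation before use):

env:
  - name: GPM_AUTH_ENABLED
    value: "OIDC"
  - name: GPM_OIDC_ISSUER
    value: "https://login.microsoftonline.com/<tenant-id>/v2.0"
  - name: GPM_OIDC_REDIRECT_DOMAIN
    value: "https://gpm.example.com"
  - name: GPM_OIDC_CLIENT_ID
    value: "<client-id>"
  - name: GPM_OIDC_CLIENT_SECRET
    valueFrom:
      secretKeyRef:
        name: gpm-oidc          # hypothetical Secret holding the client secret
        key: client-secret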

Maxime.

AKS | OPA Gatekeeper Monitoring

Hi,

In this article, I will show you how to configure Prometheus and Grafana to monitor your OPA Gatekeeper policies. The prerequisite is an existing Prometheus and Grafana stack. If you don’t have OPA Gatekeeper deployed in your AKS cluster, please follow the steps of this article.

By default, when you deploy OPA Gatekeeper inside your Kubernetes cluster, some OPA Gatekeeper metrics are already exposed for you! The idea is to scrape these metrics with Prometheus and build a Grafana dashboard on top of them. In this example, we will create a new Grafana dashboard to expose the number of violations of our dry-run OPA policies.

We will use the Prometheus scrape annotations to collect the OPA metrics; note that these annotations only take effect if your Prometheus scrape configuration looks for them (the standard kubernetes-pods scrape job does). I recommend editing the configuration of the OPA Gatekeeper audit pod and adding the following annotations:

  • prometheus.io/scrape: "true"
  • prometheus.io/port: "8888"
➜  ~ kubectl get pods --namespace gatekeeper-system
 NAME                                             READY   STATUS    RESTARTS   AGE
 gatekeeper-audit-576f6d6f8d-p5nvk                1/1     Running   0          18h
 gatekeeper-controller-manager-85d8bf48c9-5j2f5   1/1     Running   0          6d
 gatekeeper-controller-manager-85d8bf48c9-v2d92   1/1     Running   1          6d1h
 gatekeeper-controller-manager-85d8bf48c9-z924v   1/1     Running   0          18h
 gatekeeper-policy-manager-5bf4586996-2cmw9       1/1     Running   0          18h

➜  ~ kubectl edit pods gatekeeper-audit-576f6d6f8d-p5nvk --namespace gatekeeper-system
apiVersion: v1
kind: Pod
metadata:
  annotations:
    container.seccomp.security.alpha.kubernetes.io/manager: runtime/default
    prometheus.io/scrape: "true"
    prometheus.io/port: "8888"
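
Note that changes made directly to a pod are lost when the pod is recreated. For a persistent setup, a minimal sketch is to set the same annotations on the Deployment's pod template instead (the deployment name matches the audit pod shown above):

kubectl -n gatekeeper-system patch deployment gatekeeper-audit \
  --type merge \
  -p '{"spec":{"template":{"metadata":{"annotations":{"prometheus.io/scrape":"true","prometheus.io/port":"8888"}}}}}'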

Once this is done, a new target appears on the Prometheus targets page:

Now we can create a new dashboard in Grafana and add a metric that displays only the Gatekeeper violations for our dry-run policies:

Metric: gatekeeper_violations{control_plane="audit-controller", enforcement_action="dryrun"}
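
If you prefer a single aggregated number rather than one series per constraint, a minimal PromQL variant using the same labels:

sum by (enforcement_action) (gatekeeper_violations{control_plane="audit-controller", enforcement_action="dryrun"})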

In conclusion, we saw how to configure Prometheus and Grafana to monitor the number of OPA Gatekeeper violations. Do not hesitate to read the official OPA Gatekeeper documentation: other metrics are available to help you monitor your OPA Gatekeeper solution (current number of known constraints, number of observed constraint templates, and more).
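
For reference, the two series behind those examples should be the following (names as documented by the project; verify against the metrics your Gatekeeper version actually exposes):

gatekeeper_constraints           # current number of known constraints
gatekeeper_constraint_templates  # number of observed constraint templates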

Maxime.

AKS | Kyverno

Hi,

In this article, I will introduce another policy engine for Kubernetes. In a previous article, I presented the OPA Gatekeeper solution; this time we will take a closer look at the Kyverno project. The project is supported by the CNCF in the "Sandbox" category.

The main advantages of using Kyverno over OPA Gatekeeper are:

  • No new language to learn: Kyverno policies are plain Kubernetes resources. As with OPA Gatekeeper, these policies can be deployed in "audit" or "enforce" mode.
  • Beyond validation, Kyverno also supports mutating and generating resources (see the mutation sketch after this list).
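
As an illustration of mutation, here is a hedged sketch (not from the original article; the policy name and default value are hypothetical) that adds a default my-app label to namespaces that don't set one, using Kyverno's patchStrategicMerge with the +() add-if-missing anchor:

apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: add-default-ns-label      # hypothetical name for this sketch
spec:
  rules:
    - name: add-default-ns-label
      match:
        resources:
          kinds:
            - Namespace
      mutate:
        patchStrategicMerge:
          metadata:
            labels:
              +(my-app): default  # +() adds the label only when it is missing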

In this example, I will show you how to deploy Kyverno and how to create and deploy a policy whose goal is to enforce a label when a namespace is created. I presented a similar example in the article: AKS | OPA

Deploy Kyverno:

max@Azure:~$ kubectl create -f https://raw.githubusercontent.com/kyverno/kyverno/main/definitions/release/install.yaml
 namespace/kyverno created
 customresourcedefinition.apiextensions.k8s.io/clusterpolicies.kyverno.io created
 customresourcedefinition.apiextensions.k8s.io/clusterpolicyreports.wgpolicyk8s.io created
 customresourcedefinition.apiextensions.k8s.io/clusterreportchangerequests.kyverno.io created
 customresourcedefinition.apiextensions.k8s.io/generaterequests.kyverno.io created
 customresourcedefinition.apiextensions.k8s.io/policies.kyverno.io created
 customresourcedefinition.apiextensions.k8s.io/policyreports.wgpolicyk8s.io created
 customresourcedefinition.apiextensions.k8s.io/reportchangerequests.kyverno.io created
 serviceaccount/kyverno-service-account created
 clusterrole.rbac.authorization.k8s.io/kyverno:admin-policies created
 clusterrole.rbac.authorization.k8s.io/kyverno:admin-policyreport created
 clusterrole.rbac.authorization.k8s.io/kyverno:admin-reportchangerequest created
 clusterrole.rbac.authorization.k8s.io/kyverno:customresources created
 clusterrole.rbac.authorization.k8s.io/kyverno:generatecontroller created
 clusterrole.rbac.authorization.k8s.io/kyverno:policycontroller created
 clusterrole.rbac.authorization.k8s.io/kyverno:userinfo created
 clusterrole.rbac.authorization.k8s.io/kyverno:webhook created
 clusterrolebinding.rbac.authorization.k8s.io/kyverno:customresources created
 clusterrolebinding.rbac.authorization.k8s.io/kyverno:generatecontroller created
 clusterrolebinding.rbac.authorization.k8s.io/kyverno:policycontroller created
 clusterrolebinding.rbac.authorization.k8s.io/kyverno:userinfo created
 clusterrolebinding.rbac.authorization.k8s.io/kyverno:webhook created
 configmap/init-config created
 service/kyverno-svc created
 deployment.apps/kyverno created

Create our policy:

apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
   name: require-ns-labels
spec:
   validationFailureAction: enforce
   rules:
    - name: require-ns-labels
      match:
        resources:
          kinds:
            - Namespace
      validate:
        message: "The label my-app is required."
        pattern:
          metadata:
            labels:
              my-app: "?*" # "?*" matches any non-empty value, so the label must be set
max@Azure:~/clouddrive$ kubectl apply -f ns-label.yaml
clusterpolicy.kyverno.io/require-ns-labels created

Test:

max@Azure:~/clouddrive$ kubectl create namespace maxime
Error from server: admission webhook "validate.kyverno.svc" denied the request:

resource Namespace//maxime was blocked due to the following policies

require-ns-labels:
  require-ns-labels: 'validation error: The label my-app is required. Rule require-ns-labels failed at path /metadata/labels/'

max@Azure:~/clouddrive$ kubectl apply -f namespace.yaml
 namespace/maxime created
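
The namespace.yaml manifest is not reproduced here; based on the my-app=maxapp label visible in the listing below, it would look something like this:

apiVersion: v1
kind: Namespace
metadata:
  name: maxime
  labels:
    my-app: maxapp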

 max@Azure:~/clouddrive$ kubectl get ns --show-labels
 NAME              STATUS   AGE     LABELS
 default           Active   31m     
 kube-node-lease   Active   31m     
 kube-public       Active   31m     
 kube-system       Active   31m     addonmanager.kubernetes.io/mode=Reconcile,control-plane=true,kubernetes.io/cluster-service=true
 kyverno           Active   20m     
 maxime            Active   2m36s   my-app=maxapp

List the policies:

max@Azure:~/clouddrive$ kubectl get cpol
 NAME                BACKGROUND   ACTION
 require-ns-labels   true         enforce

max@Azure:~/clouddrive$ kubectl describe cpol require-ns-labels
 Name:         require-ns-labels
 Namespace:
 Labels:       
 Annotations:  pod-policies.kyverno.io/autogen-controllers: DaemonSet,Deployment,Job,StatefulSet,CronJob
 API Version:  kyverno.io/v1
 Kind:         ClusterPolicy
 Metadata:
   Creation Timestamp:  2020-12-31T20:39:24Z
   Generation:          1
   Managed Fields:
     API Version:  kyverno.io/v1
     Fields Type:  FieldsV1
     fieldsV1:
       f:metadata:
         f:annotations:
           .:
           f:kubectl.kubernetes.io/last-applied-configuration:
       f:spec:
         .:
         f:validationFailureAction:
     Manager:      kubectl-client-side-apply
     Operation:    Update
     Time:         2020-12-31T20:39:24Z
     API Version:  kyverno.io/v1
     Fields Type:  FieldsV1
     fieldsV1:
       f:spec:
         f:rules:
       f:status:
         .:
         f:averageExecutionTime:
         f:resourcesBlockedCount:
         f:ruleStatus:
         f:rulesAppliedCount:
         f:rulesFailedCount:
     Manager:         kyverno
     Operation:       Update
     Time:            2020-12-31T20:46:35Z
   Resource Version:  4981
   Self Link:         /apis/kyverno.io/v1/clusterpolicies/require-ns-labels
   UID:               0c6e3e76-307b-4f6e-884c-73708e740bde
 Spec:
   Background:  true
   Rules:
     Match:
       Resources:
         Kinds:
           Namespace
     Name:  require-ns-labels
     Validate:
       Message:  The label my-app is required.
       Pattern:
         Metadata:
           Labels:
             My - App:         ?*
   Validation Failure Action:  enforce
 Status:
   Average Execution Time:   165.212µs
   Resources Blocked Count:  2
   Rule Status:
     Applied Count:            2
     Average Execution Time:   165.212µs
     Failed Count:             2
     Resources Blocked Count:  2
     Rule Name:                require-ns-labels
   Rules Applied Count:        2
   Rules Failed Count:         2
 Events:                       

Delete a policy:

max@Azure:~/clouddrive$ kubectl delete cpol require-ns-labels
clusterpolicy.kyverno.io "require-ns-labels" deleted
max@Azure:~/clouddrive$ kubectl get cpol
No resources found

Create a policy in audit mode:

apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
   name: audit-ns-labels
spec:
   validationFailureAction: audit
   rules:
    - name: audit-ns-labels
      match:
        resources:
          kinds:
            - Namespace
      validate:
        message: "The label my-app is required."
        pattern:
          metadata:
            labels:
              my-app: "?*"


max@Azure:~/clouddrive$ kubectl apply -f ns-label.yaml
clusterpolicy.kyverno.io/audit-ns-labels created

max@Azure:~/clouddrive$ kubectl create namespace maxime
namespace/maxime created

max@Azure:~/clouddrive$ kubectl get cpol
NAME              BACKGROUND   ACTION
audit-ns-labels   true         audit

max@Azure:~/clouddrive$ kubectl describe cpol audit-ns-labels
 Name:         audit-ns-labels
 Namespace:
 Labels:       
 Annotations:  pod-policies.kyverno.io/autogen-controllers: DaemonSet,Deployment,Job,StatefulSet,CronJob
 API Version:  kyverno.io/v1
 Kind:         ClusterPolicy
 Metadata:
   Creation Timestamp:  2020-12-31T21:02:50Z
   Generation:          1
   Managed Fields:
     API Version:  kyverno.io/v1
     Fields Type:  FieldsV1
     fieldsV1:
       f:metadata:
         f:annotations:
           .:
           f:kubectl.kubernetes.io/last-applied-configuration:
       f:spec:
         .:
         f:validationFailureAction:
     Manager:      kubectl-client-side-apply
     Operation:    Update
     Time:         2020-12-31T21:02:50Z
     API Version:  kyverno.io/v1
     Fields Type:  FieldsV1
     fieldsV1:
       f:spec:
         f:rules:
       f:status:
         .:
         f:averageExecutionTime:
         f:ruleStatus:
         f:rulesFailedCount:
     Manager:         kyverno
     Operation:       Update
     Time:            2020-12-31T21:04:35Z
   Resource Version:  7592
   Self Link:         /apis/kyverno.io/v1/clusterpolicies/audit-ns-labels
   UID:               13bc5039-f6ac-4214-aa1a-40ddd932b39c
 Spec:
   Background:  true
   Rules:
     Match:
       Resources:
         Kinds:
           Namespace
     Name:  audit-ns-labels
     Validate:
       Message:  The label my-app is required.
       Pattern:
         Metadata:
           Labels:
             My - App:         ?*
   Validation Failure Action:  audit
 Status:
   Average Execution Time:  26.002µs
   Rule Status:
     Average Execution Time:  26.002µs
     Failed Count:            2
     Rule Name:               audit-ns-labels
   Rules Failed Count:        2
 Events:                      

You can find a collection of sample policies at the following address: https://github.com/kyverno/kyverno/tree/main/samples

In conclusion, Kyverno is a very interesting and easy-to-use solution. Note that a reporting feature is currently under development; I will come back and present it in a future article.

Maxime.