Category: Kubernetes (AKS)

Azure AKS with Terraform

Hi!

Today we are going to look at how to automate the deployment of an AKS cluster with Terraform.

I invite you to read the article I wrote a few months ago: Automatiser votre infrastructure Azure avec Terraform (Automate your Azure infrastructure with Terraform).
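
The cluster definition below references a service principal (the client_id and client_secret values in the service_principal block). If you do not already have one, here is a minimal sketch using the Azure CLI; the display name is arbitrary and you should adjust the role assignment to your needs:

# Create a service principal for the AKS cluster (the name is arbitrary).
# In the JSON output, "appId" is the client_id and "password" is the client_secret.
az ad sp create-for-rbac --name "aks-terraform-demo" --skip-assignment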

resource "azurerm_resource_group" "demok8s" {
 name = "k8sterraform"
 location = "East US" (Attention! Seules certaines régions sont pour le moment disponibles: East US, West Europe, Central US, Canada Central and Canada East.)
}

resource "azurerm_kubernetes_cluster" "demok8s" {
 name = "k8sterraform"
 location = "${azurerm_resource_group.demok8s.location}"
 resource_group_name = "${azurerm_resource_group.demok8s.name}"
 kubernetes_version = "1.8.2"
 dns_prefix = "k8sterraform"

linux_profile {
 admin_username = "acctestuser1"

ssh_key {
 key_data = "ssh-rsa AAAABCDE"
 }
 }

agent_pool_profile {
 name = "default"
 count = 1
 vm_size = "Standard_A0"
 os_type = "Linux"
 }

service_principal {
 client_id = "Client_Id_A_Remplacer"
 client_secret = "SP_Secret_A_Remplacer"
 }

tags {
 Environment = "Demo"
 }
}
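
Before planning, you can optionally let Terraform normalize and sanity-check the configuration; a quick sketch:

# Rewrite the .tf files to the canonical format.
terraform fmt
# Check the configuration for errors (on newer Terraform versions, run this after terraform init).
terraform validate

We can now initialize the working directory with terraform init: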
maxime@Azure:~/aks-terrform$ terraform init

Initializing provider plugins...

The following providers do not have any version constraints in configuration,
so the latest version was installed.

To prevent automatic upgrades to new major versions that may contain breaking
changes, it is recommended to add version = "..." constraints to the
corresponding provider blocks in configuration, with the constraint strings
suggested below.

* provider.azurerm: version = "~> 1.1"

Terraform has been successfully initialized!

You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.

If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.
maxime@Azure:~/aks-terrform$ terraform plan
Refreshing Terraform state in-memory prior to plan...
The refreshed state will be used to calculate this plan, but will not be
persisted to local or remote state storage.


------------------------------------------------------------------------

An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
 + create

Terraform will perform the following actions:

+ azurerm_kubernetes_cluster.demok8s
 id: <computed>
 agent_pool_profile.#: "1"
 agent_pool_profile.0.count: "1"
 agent_pool_profile.0.dns_prefix: <computed>
 agent_pool_profile.0.fqdn: <computed>
 agent_pool_profile.0.name: "default"
 agent_pool_profile.0.os_type: "Linux"
 agent_pool_profile.0.vm_size: "Standard_A0"
 dns_prefix: "k8sterraform"
 kubernetes_version: "1.8.2"
 linux_profile.#: "1"
 linux_profile.0.admin_username: "acctestuser1"
 linux_profile.0.ssh_key.#: "1"
 linux_profile.0.ssh_key.0.key_data: "ssh-rsa AAAABCDEF"
 location: "eastus"
 name: "k8sterraform"
 resource_group_name: "k8sterraform"
 service_principal.#: "1"
 service_principal.2388863275.client_id: "0c1484fa-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
 service_principal.2388863275.client_secret: <sensitive>
 tags.%: "1"
 tags.Environment: "Demo"

+ azurerm_resource_group.demok8s
 id: <computed>
 location: "eastus"
 name: "k8sterraform"
 tags.%: <computed>


Plan: 2 to add, 0 to change, 0 to destroy.

------------------------------------------------------------------------

Note: You didn't specify an "-out" parameter to save this plan, so Terraform
can't guarantee that exactly these actions will be performed if
"terraform apply" is subsequently run.
maxime@Azure:~/aks-terrform$ terraform apply

An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
 + create

Terraform will perform the following actions:

+ azurerm_kubernetes_cluster.demok8s
 id: <computed>
 agent_pool_profile.#: "1"
 agent_pool_profile.0.count: "1"
 agent_pool_profile.0.dns_prefix: <computed>
 agent_pool_profile.0.fqdn: <computed>
 agent_pool_profile.0.name: "default"
 agent_pool_profile.0.os_type: "Linux"
 agent_pool_profile.0.vm_size: "Standard_A0"
 dns_prefix: "k8sterraform"
 kubernetes_version: "1.8.2"
 linux_profile.#: "1"
 linux_profile.0.admin_username: "acctestuser1"
 linux_profile.0.ssh_key.#: "1"
 linux_profile.0.ssh_key.0.key_data: "ssh-rsa AAAABCD"
 location: "eastus"
 name: "k8sterraform"
 resource_group_name: "k8sterraform"
 service_principal.#: "1"
 service_principal.2388863275.client_id: "0c1484fa-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
 service_principal.2388863275.client_secret: <sensitive>
 tags.%: "1"
 tags.Environment: "Demo"

+ azurerm_resource_group.demok8s
 id: <computed>
 location: "eastus"
 name: "k8sterraform"
 tags.%: <computed>


Plan: 2 to add, 0 to change, 0 to destroy.

Do you want to perform these actions?
 Terraform will perform the actions described above.
 Only 'yes' will be accepted to approve.

Enter a value: yes

azurerm_resource_group.demok8s: Creating...
 location: "" => "eastus"
 name: "" => "k8sterraform"
 tags.%: "" => "<computed>"
azurerm_resource_group.demok8s: Creation complete after 0s (ID: /subscriptions/7db5e03c-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/k8sterraform)
azurerm_kubernetes_cluster.demok8s: Creating...
 agent_pool_profile.#: "" => "1"
 agent_pool_profile.0.count: "" => "1"
 agent_pool_profile.0.dns_prefix: "" => "<computed>"
 agent_pool_profile.0.fqdn: "" => "<computed>"
 agent_pool_profile.0.name: "" => "default"
 agent_pool_profile.0.os_type: "" => "Linux"
 agent_pool_profile.0.vm_size: "" => "Standard_A0"
 dns_prefix: "" => "k8sterraform"
 kubernetes_version: "" => "1.8.2"
 linux_profile.#: "" => "1"
 linux_profile.0.admin_username: "" => "acctestuser1"
 linux_profile.0.ssh_key.#: "" => "1"
 linux_profile.0.ssh_key.0.key_data: "" => "ssh-rsa AAAABCDE"
 location: "" => "eastus"
 name: "" => "k8sterraform"
 resource_group_name: "" => "k8sterraform"
 service_principal.#: "" => "1"
 service_principal.2388863275.client_id: "" => "0c1484fa-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
 service_principal.2388863275.client_secret: "<sensitive>" => "<sensitive>"
 tags.%: "" => "1"
 tags.Environment: "" => "Demo"
azurerm_kubernetes_cluster.demok8s: Still creating... (10s elapsed)
azurerm_kubernetes_cluster.demok8s: Still creating... (20s elapsed)
azurerm_kubernetes_cluster.demok8s: Still creating... (30s elapsed)
...
azurerm_kubernetes_cluster.demok8s: Still creating... (9m0s elapsed)
azurerm_kubernetes_cluster.demok8s: Creation complete after 9m4s (ID: /subscriptions/7db5e03c-xxxx-xxxx-xxxx-...erService/managedClusters/k8sterraform)

Apply complete! Resources: 2 added, 0 changed, 0 destroyed.

Documentation: https://www.terraform.io/docs/providers/azurerm/r/kubernetes_cluster.html
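
Once the apply is complete, you can fetch the cluster's kubeconfig with the Azure CLI and check that the node is up; a quick sketch, assuming the Azure CLI and kubectl are available (both are in the Azure Cloud Shell):

# Merge the AKS credentials into ~/.kube/config.
az aks get-credentials --resource-group k8sterraform --name k8sterraform
# The single Standard_A0 agent node should show up as Ready.
kubectl get nodes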

 

OpenFaaS in AKS

Hi!

Today we are going to look at how to deploy OpenFaaS in AKS. Before getting to the heart of the matter, let me start by introducing OpenFaaS.

OpenFaaS is a framework for building serverless functions on top of Docker and Kubernetes.

In this article we assume that we already have an AKS cluster with Helm installed.

If that is not the case, I invite you to read my other articles on creating an AKS cluster with Terraform and deploying Helm in AKS.

Getting the OpenFaaS deployment sources:

maxime@Azure:~$ git clone https://github.com/openfaas/faas-netes
Cloning into 'faas-netes'...
remote: Counting objects: 3216, done.
remote: Total 3216 (delta 0), reused 0 (delta 0), pack-reused 3216
Receiving objects: 100% (3216/3216), 4.15 MiB | 0 bytes/s, done.
Resolving deltas: 100% (1607/1607), done.
Checking connectivity... done.
maxime@Azure:~$ cd faas-netes

Creating the two namespaces:

maxime@Azure:~$ kubectl create ns openfaas
namespace "openfaas" created

maxime@Azure:~$ kubectl create ns openfaas-fn
namespace "openfaas-fn" created

Deploying OpenFaaS with Helm. The --set flags below deploy functions into the openfaas-fn namespace, enable asynchronous invocations (NATS queue), disable RBAC, and expose the gateway and Prometheus through LoadBalancer services:

maxime@Azure:~/faas-netes$ helm upgrade --install --namespace openfaas --set functionNamespace=openfaas-fn --set async=true --set rbac=false --set serviceType=LoadBalancer openfaas chart/openfaas
Release "openfaas" does not exist. Installing it now.
NAME: openfaas
LAST DEPLOYED: Tue Feb 27 17:42:38 2018
NAMESPACE: openfaas
STATUS: DEPLOYED

RESOURCES:
==> v1/Service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
alertmanager ClusterIP 10.0.190.129 <none> 9093/TCP 2s
nats-external ClusterIP 10.0.151.160 <none> 4222/TCP 1s
faas-netesd-external ClusterIP 10.0.129.239 <none> 8080/TCP 1s
gateway-external LoadBalancer 10.0.253.8 <pending> 8080:30126/TCP 1s
prometheus-external LoadBalancer 10.0.225.72 <pending> 9090:30107/TCP 1s
alertmanager-external ClusterIP 10.0.159.185 <none> 9093/TCP 1s
faas-netesd ClusterIP 10.0.143.224 <none> 8080/TCP 1s
gateway ClusterIP 10.0.182.166 <none> 8080/TCP 1s
nats ClusterIP 10.0.140.189 <none> 4222/TCP 1s
prometheus ClusterIP 10.0.179.79 <none> 9090/TCP 1s

==> v1beta1/Deployment
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
alertmanager 1 1 1 0 1s
faas-netesd 1 1 1 0 1s
gateway 1 1 1 0 1s
nats 1 1 1 0 1s
prometheus 1 1 1 0 1s
queue-worker 1 0 0 0 1s

==> v1/Pod(related)
NAME READY STATUS RESTARTS AGE
alertmanager-3921060454-2tt43 0/1 ContainerCreating 0 1s
faas-netesd-3370362036-1pjlc 0/1 ContainerCreating 0 1s
gateway-1758360918-2tkjj 0/1 ContainerCreating 0 1s
nats-4109760169-ml6vr 0/1 ContainerCreating 0 1s
prometheus-1066188602-b47s6 0/1 ContainerCreating 0 1s
queue-worker-3116745235-p61kt 0/1 ContainerCreating 0 1s

==> v1/ConfigMap
NAME DATA AGE
prometheus-config 2 2s
alertmanager-config 1 2s

==> v1/ServiceAccount
NAME SECRETS AGE
faas-controller 1 2s
faas-controller 1 2s


NOTES:
To verify that openfaas has started, run:

kubectl --namespace=openfaas get deployments -l "release=openfaas, app=openfaas"
maxime@Azure:~/faas-netes$ kubectl --namespace=openfaas get deployments -l "release=openfaas, app=openfaas"
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
alertmanager 1 1 1 1 1m
faas-netesd 1 1 1 1 1m
gateway 1 1 1 1 1m
nats 1 1 1 1 1m
prometheus 1 1 1 1 1m
queue-worker 1 1 1 1 1m
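
The gateway-external and prometheus-external services are of type LoadBalancer, so Azure provisions public IPs for them; once EXTERNAL-IP is no longer <pending>, the gateway UI and API are reachable on port 8080. A quick check:

# Wait for Azure to assign a public IP to the gateway service.
kubectl -n openfaas get svc gateway-external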

Configuring authentication for OpenFaaS. We create a basic-auth secret, then deploy a small Caddy reverse proxy in front of the gateway to enforce it:

maxime@Azure:~/faas-netes$ kubectl -n openfaas create secret generic basic-auth \
> --from-literal=user=maxime \
> --from-literal=password=openfaaspwd
secret "basic-auth" created
maxime@Azure:~/faas-netes$ kubectl apply -f https://raw.githubusercontent.com/zigmax/openfaas-auth/master/openfaas-auth.yaml
configmap "caddy-config" created
deployment "caddy" created
service "caddy-lb" created
maxime@Azure:~/faas-netes$ kubectl -n openfaas describe service caddy-lb | grep Ingress | awk '{ print $NF }'
52.179.101.109
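
That IP is the public endpoint of the Caddy proxy. You can point your browser at it, or log in with the faas-cli; a minimal sketch, assuming faas-cli is installed and the proxy listens on the default HTTP port:

# Authenticate the CLI against the protected gateway.
faas-cli login --gateway http://52.179.101.109 --username maxime --password openfaaspwd
# List the deployed functions (empty for now).
faas-cli list --gateway http://52.179.101.109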

Deploying with Helm in AKS

Hi!

Today we are going to look at how to use Helm with an AKS cluster. Helm is a package manager for Kubernetes that handles release management, rollbacks, and so on.

With the helm init command we deploy Tiller, the server-side component of Helm, onto our AKS cluster.

maxime@Azure:~$ helm init
Creating /home/maxime/.helm
Creating /home/maxime/.helm/repository
Creating /home/maxime/.helm/repository/cache
Creating /home/maxime/.helm/repository/local
Creating /home/maxime/.helm/plugins
Creating /home/maxime/.helm/starters
Creating /home/maxime/.helm/cache/archive
Creating /home/maxime/.helm/repository/repositories.yaml
Adding stable repo with URL: https://kubernetes-charts.storage.googleapis.com
Adding local repo with URL: http://127.0.0.1:8879/charts
$HELM_HOME has been configured at /home/maxime/.helm.

Tiller (the Helm server-side component) has been installed into your Kubernetes Cluster.
Happy Helming!
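
Note that the cluster used in this demo does not enforce RBAC. On an RBAC-enabled cluster, Tiller typically needs a dedicated service account before helm init; a minimal sketch (cluster-admin is far too broad for production, but fine for a throwaway demo):

# Create a service account for Tiller in kube-system.
kubectl -n kube-system create serviceaccount tiller
# Bind it to cluster-admin (demo only -- scope this down in real environments).
kubectl create clusterrolebinding tiller --clusterrole=cluster-admin --serviceaccount=kube-system:tiller
# (Re)deploy Tiller with that service account.
helm init --service-account tiller --upgrade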

There is a multitude of packages (charts) made available by the community. You can find them on the Helm GitHub repository.

maxime@Azure:~$ helm search
NAME CHART VERSION APP VERSION DESCRIPTION
stable/acs-engine-autoscaler 2.1.2 2.1.1 Scales worker nodes within agent pools
stable/aerospike 0.1.6 v3.14.1.2 A Helm chart for Aerospike in Kubernetes
stable/anchore-engine 0.1.3 0.1.6 Anchore container analysis and policy evaluatio...
stable/artifactory 7.0.2 5.8.4 Universal Repository Manager supporting all maj...
stable/aws-cluster-autoscaler 0.3.2 Scales worker nodes within autoscaling groups.
stable/buildkite 0.2.1 3 Agent for Buildkite
stable/centrifugo 2.0.0 1.7.3 Centrifugo is a real-time messaging server.
stable/cert-manager 0.2.2 0.2.3 A Helm chart for cert-manager
stable/chaoskube 0.6.2 0.6.1 Chaoskube periodically kills random pods in you...
stable/chronograf 0.4.2 Open-source web application written in Go and R...
stable/cluster-autoscaler 0.4.2 1.1.0 Scales worker nodes within autoscaling groups.
stable/cockroachdb 0.6.2 1.1.4 CockroachDB is a scalable, survivable, strongly...
stable/concourse 0.11.5 3.8.0 Concourse is a simple and scalable CI system.
stable/consul 1.3.0 1.0.0 Highly available and distributed service discov...
stable/coredns 0.8.0 1.0.1 CoreDNS is a DNS server that chains plugins and...
stable/coscale 0.2.0 3.9.1 CoScale Agent
stable/dask-distributed 2.0.0 Distributed computation in Python
stable/datadog 0.10.9 DataDog Agent
stable/docker-registry 1.0.2 2.6.2 A Helm chart for Docker Registry
stable/dokuwiki 0.2.1 DokuWiki is a standards-compliant, simple to us...
stable/drupal 0.11.6 8.4.4 One of the most versatile open source content m...
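
You can also narrow the search to a specific chart and look at its defaults before installing it; a quick sketch with the stable/jenkins chart we are about to use:

# Search only for Jenkins-related charts.
helm search jenkins
# Show the chart's configurable default values.
helm inspect values stable/jenkins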

In our example, we are going to deploy a Jenkins server:

maxime@Azure:~$ helm install stable/jenkins
NAME: wondering-turtle
LAST DEPLOYED: Thu Feb 22 01:16:24 2018
NAMESPACE: default
STATUS: DEPLOYED

RESOURCES:
==> v1/Service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
wondering-turtle-jenkins-agent ClusterIP 10.0.220.87 <none> 50000/TCP 1s
wondering-turtle-jenkins LoadBalancer 10.0.23.135 <pending> 8080:32590/TCP 1s

==> v1beta1/Deployment
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
wondering-turtle-jenkins 1 1 1 0 1s

==> v1/Pod(related)
NAME READY STATUS RESTARTS AGE
wondering-turtle-jenkins-2144157487-815j7 0/1 Pending 0 1s

==> v1/Secret
NAME TYPE DATA AGE
wondering-turtle-jenkins Opaque 2 1s

==> v1/ConfigMap
NAME DATA AGE
wondering-turtle-jenkins 3 1s
wondering-turtle-jenkins-tests 1 1s

==> v1/PersistentVolumeClaim
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
wondering-turtle-jenkins Pending default 1s
maxime@Azure:~$ printf $(kubectl get secret --namespace default wondering-turtle-jenkins -o jsonpath="{.data.jenkins-admin-password}" | base64 --decode);echo
B5rH2iWqAL
maxime@Azure:~$ kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.0.0.1 <none> 443/TCP 30m
wondering-turtle-jenkins LoadBalancer 10.0.23.135 52.226.17.49 8080:32590/TCP 4m
wondering-turtle-jenkins-agent ClusterIP 10.0.220.87 <none> 50000/TCP 4m
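
The wondering-turtle-jenkins service now has a public IP, so Jenkins is reachable on port 8080; a quick check before logging in as admin (the chart's default admin user) with the password retrieved above:

# The endpoint should answer with an HTTP response once the Jenkins pod is ready.
curl -I http://52.226.17.49:8080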

Let's list the Helm releases on our AKS cluster:

maxime@Azure:~$ helm list
NAME REVISION UPDATED STATUS CHART NAMESPACE
wondering-turtle 1 Thu Feb 22 01:16:24 2018 DEPLOYED jenkins-0.13.2 default

We can now delete our release with the helm delete command:

maxime@Azure:~$ helm delete wondering-turtle
release "wondering-turtle" deleted

In an upcoming article, we will look at how to deploy OpenFaaS with Helm.