Kubernetes (KB)

Environment preparation - local virtual machines

  • Change the network configuration of server-s0, server-s1 and server-s2 from DHCP to static IP addresses (edit /etc/network/interfaces).
  • Verify the host names and IP addresses in /etc/hostname and /etc/hosts.
  • Remove the swap entry from /etc/fstab (the kubelet will not run with swap enabled); see also the swapoff command after the SSH check below.
  • Make sure you can connect from server-s0 to server-s1 and server-s2 using public key authentication.
# on server-s0 generate keys, then transfer public key to server-s1 and server-s2
$ ssh-keygen
$ ssh-copy-id ubuntu@server-s1
$ ssh-copy-id ubuntu@server-s2
# check the connection
$ ssh ubuntu@server-s1
$ ssh ubuntu@server-s2
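  • for the swap step above: the kubelet refuses to start while swap is active, so disable it for the running system too, not only in /etc/fstab
# turn swap off immediately (the /etc/fstab edit takes care of reboots)
sudo swapoff -a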


How to create a secure Kubernetes cluster?

Docker installation

sudo apt-get update && sudo apt-get install -y apt-transport-https ca-certificates curl software-properties-common
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
sudo apt-get update && apt-cache madison docker-ce
sudo apt-get install -y docker-ce=18.06.1~ce~3-0~ubuntu
sudo usermod -aG docker ubuntu
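  • optionally hold the package at the tested version, so a later apt-get upgrade does not move the cluster to an untested Docker release
sudo apt-mark hold docker-ce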
  • more info: docs.docker.com/engine/installation/linux/docker-ce/ubuntu/


Kubernetes installation

sudo apt-get update && sudo apt-get install -y apt-transport-https bash-completion
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
sudo add-apt-repository "deb http://apt.kubernetes.io/ kubernetes-xenial main"
sudo apt-get update && sudo apt-get install -y kubeadm=1.12.4-00 kubectl=1.12.4-00 kubelet=1.12.4-00
source <(kubectl completion bash) && echo "source <(kubectl completion bash)" >> ~/.bashrc
source <(kubeadm completion bash) && echo "source <(kubeadm completion bash)" >> ~/.bashrc
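  • as with Docker, holding the packages prevents apt-get upgrade from accidentally upgrading cluster components
sudo apt-mark hold kubeadm kubectl kubelet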
  • more info: kubernetes.io/docs/setup/independent/create-cluster-kubeadm/


Cluster initialisation

  • initialise the master on server-s0 and configure kubectl
sudo kubeadm init
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
  • check the status of the master node and pods running in the kube-system namespace
kubectl get node,pod --namespace=kube-system
kubectl describe pod --namespace=kube-system kube-dns-86f...
  • start cluster networking and check the status once again
kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"
  • optionally allow pods to be scheduled on the master node (useful if you need a single-node Kubernetes cluster)
kubectl taint nodes --all node-role.kubernetes.io/master-
  • join server-s1 and server-s2 to the cluster as worker nodes (aka minions)
ssh ubuntu@server-s1
sudo kubeadm join --token bf37a8... 11.0.2.10:6443 --discovery-token-ca-cert-hash sha256:9dfed...
exit
ssh ubuntu@server-s2
sudo kubeadm join --token bf37a8... 11.0.2.10:6443 --discovery-token-ca-cert-hash sha256:9dfed...
exit
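  • the bootstrap token printed by kubeadm init expires (by default after 24 hours); if you join a node later, generate a fresh join command on the master
sudo kubeadm token create --print-join-command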
  • check the status of all nodes
kubectl get nodes
  • more info: kubernetes.io/docs/setup/independent/create-cluster-kubeadm/


Cluster monitoring - Metrics Server

kubectl apply -f https://raw.githubusercontent.com/kubernetes-incubator/metrics-server/master/deploy/1.8+/auth-delegator.yaml
kubectl apply -f https://raw.githubusercontent.com/kubernetes-incubator/metrics-server/master/deploy/1.8+/auth-reader.yaml
kubectl apply -f https://raw.githubusercontent.com/kubernetes-incubator/metrics-server/master/deploy/1.8+/metrics-apiservice.yaml
kubectl apply -f https://raw.githubusercontent.com/kubernetes-incubator/metrics-server/master/deploy/1.8+/metrics-server-deployment.yaml
kubectl apply -f https://raw.githubusercontent.com/kubernetes-incubator/metrics-server/master/deploy/1.8+/metrics-server-service.yaml
kubectl apply -f https://raw.githubusercontent.com/kubernetes-incubator/metrics-server/master/deploy/1.8+/resource-reader.yaml
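  • once the metrics-server pod is running (the first metrics take a minute or two to appear), query resource usage with kubectl top
kubectl top node
kubectl top pod --namespace=kube-system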
  • more info: github.com/kubernetes-incubator/metrics-server


Kubernetes Dashboard

kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml
kubectl apply -f http://www.kamilbaran.pl/download/docker/kube-dashboard.yaml
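  • the dashboard is not exposed outside the cluster by default; one way to reach it is through kubectl proxy (the URL below assumes the upstream v1.x manifest, which installs the kubernetes-dashboard service in the kube-system namespace)
kubectl proxy
# then open in a browser:
# http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/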


Deploying sample applications in Kubernetes cluster

Deploy Training App

  • deploy an app by creating Deployment and Service objects, then check the status of both objects
kubectl apply -f http://www.kamilbaran.pl/download/kubernetes/training-app-base.yaml
kubectl get service,deployment,pod
  • run the following loop in a second terminal window to see the application responses
while :; do
  ((i++))
  curl --connect-timeout 2 "$(kubectl get service -o go-template='{{(.spec.clusterIP)}}' training):80/?count=$i"
  sleep 1
done
  • scale the deployment up to five pods by changing spec.replicas in training-app-base.yaml and re-applying the file or by running the following command
kubectl scale deployment --replicas=5 training


Rolling update

  • imagine that you have a newer version of the application and want to replace the current one
    • the current one is based on training-app-base.yaml
    • the new version uses the image kamilbaran/nobleprog_training:training_app_v2
    • make sure that you have multiple replicas of the current version
  • update the deployment by changing .spec.template.spec.containers[0].image in training-app-base.yaml and re-applying the file or by running the following command
kubectl set image deployment/training training=kamilbaran/nobleprog_training:training_app_v2
  • before and after every step you can display all objects that are affected
kubectl get deployment,replicaset,pod -l app=training
  • run the command below to check the current status
kubectl rollout status deployment training
  • to go back to the previous working configuration and check the status once again
kubectl rollout undo deployment training
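  • kubectl keeps a revision history per deployment, which is worth checking before an undo; --to-revision jumps to a specific entry (the revision number below is just an example)
kubectl rollout history deployment training
kubectl rollout undo deployment training --to-revision=1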
  • Exercise: upgrade the app to image kamilbaran/nobleprog_training:training_app_v3


Blue-Green deployment

  • using B/G deployment in Kubernetes is easy and helpful when you need to upgrade many components of your app at once
  • to achieve this, you will need two deployments (training and training-new) and one service (training)


  • the general idea is as follows:
  1. create the current version that needs to be updated (use training-app-base.yaml as a starting point)
  2. before creating the second deployment (training-new) use labels and selectors to make sure that the service will use pods only from the first deployment (training)
  3. create the second deployment (training-new) and wait until all pods are available
  4. change the selectors in the service to use pods only from the second deployment (training-new)


  • blue/green deployment can be implemented in Kubernetes cluster as follows:
    • start the current/base version
kubectl apply -f http://www.kamilbaran.pl/download/kubernetes/training-app-base.yaml
    • before and after every step you can display all affected objects (also keep an eye on the responses returned by the loop)
kubectl get service,deployment,pod -l app=training -L app -L release -o wide
    • before creating the second deployment (training-new) use labels and selectors to make sure that the service will use pods only from the first deployment (training)
    • step 1: add a new label release=stable to the deployment
    • step 2: change service selector to include the release label
kubectl apply -f http://www.kamilbaran.pl/download/kubernetes/training-app-bg-step1.yaml
kubectl apply -f http://www.kamilbaran.pl/download/kubernetes/training-app-bg-step2.yaml
    • step 3: create the second deployment training-new (it uses an image with the new version and two labels: app=training, release=new)
kubectl apply -f http://www.kamilbaran.pl/download/kubernetes/training-app-bg-step3.yaml
    • now the service is still using only pods from the first deployment (training)
    • step 4: change the value of label release from stable to new to switch to the pods with a new version
kubectl apply -f http://www.kamilbaran.pl/download/kubernetes/training-app-bg-step4.yaml
    • at this point the service is using the new version, but there are still two deployments and their extra pods waste resources
    • you should clean up the cluster; one possible way:
    • step 5: update the training deployment to new version
    • step 6: switch the service to use the release=stable selector
    • step 7: delete or downscale training-new to zero and leave it for the next image update
kubectl apply -f http://www.kamilbaran.pl/download/kubernetes/training-app-bg-step5.yaml
kubectl apply -f http://www.kamilbaran.pl/download/kubernetes/training-app-bg-step6.yaml
kubectl apply -f http://www.kamilbaran.pl/download/kubernetes/training-app-bg-step7.yaml
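    • for reference, the selector switch in steps 2 and 4 boils down to a Service manifest like the sketch below (a minimal sketch assuming port 80 and the labels named above; the actual training-app-bg-*.yaml files may differ)
kubectl apply -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: training
spec:
  selector:
    app: training
    release: stable    # step 4 switches this value to "new", step 6 back to "stable"
  ports:
  - port: 80
EOF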


Canary deployment

  • using Canary deployment in Kubernetes is useful if you want to try the new version on a selected subset of requests
  • for example, let's say that you want to send only 10% of all requests to the new version and, if everything is OK, increase the ratio from 9:1 to 7:3
  • to achieve this, you will need two deployments (training and training-new) and one service (training)


  • the general idea is as follows:
  1. create the current version that needs to be updated (use training-app-base.yaml as a starting point)
  2. before creating the second deployment (training-new) use labels and selectors to make sure that the service will use pods only from the first deployment (training)
  3. create the second deployment (training-new) and wait until all pods are available
  4. change the selectors in the service to use pods from both deployments at the same time
  5. modify the ratio by changing the number of pods in the first and second deployment (you will send 20% of requests to the new version if you start eight pods in the first deployment and only two pods in the second deployment)
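  • the end state of the idea above can be sketched as follows (a hedged sketch: the image tags are taken from the rolling-update section, port 80 is assumed, and the step-by-step files are left as the exercise); the service selects on app only, so the replica counts set the traffic ratio
kubectl apply -f - <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: training
spec:
  replicas: 8                       # 8 of 10 pods -> ~80% of requests
  selector:
    matchLabels: {app: training, release: stable}
  template:
    metadata:
      labels: {app: training, release: stable}
    spec:
      containers:
      - name: training
        image: kamilbaran/nobleprog_training:training_app_v2
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: training-new
spec:
  replicas: 2                       # 2 of 10 pods -> ~20% of requests
  selector:
    matchLabels: {app: training, release: new}
  template:
    metadata:
      labels: {app: training, release: new}
    spec:
      containers:
      - name: training
        image: kamilbaran/nobleprog_training:training_app_v3
---
apiVersion: v1
kind: Service
metadata:
  name: training
spec:
  selector:
    app: training                   # no release label: pods from both deployments get traffic
  ports:
  - port: 80
EOF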


  • Exercise: write Canary deployment step-by-step config files for the Training app (like in the B/G deployment presented above)


Deploy WordPress

kubectl create secret generic mysql-pass --from-literal=password=NobleProg
kubectl apply -f http://www.kamilbaran.pl/download/docker/kube-wp-volumes.yaml
kubectl apply -f http://www.kamilbaran.pl/download/docker/kube-wp-mysql.yaml
kubectl apply -f http://www.kamilbaran.pl/download/docker/kube-wp-wordpress.yaml
  • display all objects related to WordPress
kubectl get secret,service,deployment,pod,pvc,pv
  • display logs of mysql container running in mysql-bcc89f687-hn2tl pod
kubectl logs mysql-bcc89f687-hn2tl mysql
  • get more details about the current state of wordpress-57955d6d4f-9qxvz pod (including pod related events)
kubectl describe pod wordpress-57955d6d4f-9qxvz
  • edit the persistent volume to increase its size
kubectl edit pv pv-1
  • delete pod with mysql to check if the app is able to persist the data
kubectl delete pod mysql-bcc89f687-hn2tl
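  • the Deployment controller replaces the deleted pod automatically; watch the replacement start, then reload the blog to confirm the data survived
kubectl get pod --watch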
  • Exercise: start the MongoDB Counter App using images: kamilbaran/nobleprog_training:httpd and kamilbaran/nobleprog_training:mongo


Deploy MongoDB Counter App

Kubernetes objects

StatefulSet

  • deploy sample StatefulSet and related Service
kubectl apply -f http://www.kamilbaran.pl/download/docker/kube-statefulset.yaml
kubectl get service,statefulset,pod -o wide -l example=StatefulSet
  • start a pod based on the alpine image (or attach to existing one) and install curl
kubectl run -i -t alpine --image=alpine --namespace=default
apk add --update curl
kubectl attach -i -t alpine-695c86655b-xfnzg
  • use curl to access simple-app pods
curl simple-app-0.simple-app
curl simple-app-0.simple-app.default.svc.cluster.local
curl simple-app.default.svc.cluster.local
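  • those per-pod names resolve because the Service in kube-statefulset.yaml is headless (it presumably sets clusterIP: None, which a StatefulSet needs for stable DNS); you can verify this on the Service object
kubectl get service simple-app -o jsonpath='{.spec.clusterIP}'
# prints "None" for a headless service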
  • remove everything related to StatefulSet example
kubectl delete all -l example=StatefulSet
  • Exercise: start the MongoDB Counter App using StatefulSet with three replicas


NetworkPolicy

  • deploy sample Training App and make sure there are no existing network policies
kubectl apply -f http://www.kamilbaran.pl/download/kubernetes/training-app-base.yaml
kubectl get svc,deploy,pod,networkpolicy
  • use wget to access deployed application from another container
kubectl run alpine -it --rm --image=alpine -- /bin/sh
wget --spider --timeout 1 training
  • apply network policy that will allow connections to Training App only from pods labeled trusted=yes
kubectl apply -f http://www.kamilbaran.pl/download/kubernetes/training-network-policy.yaml
  • try to access the application again, then set the label trusted=yes on the alpine pod and try one more time
wget --spider --timeout 1 training
kubectl label pod --overwrite alpine-... trusted="yes"
wget --spider --timeout 1 training
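  • for reference, a policy with that effect typically looks like the sketch below (hedged: the policy name is hypothetical and the actual training-network-policy.yaml may differ; note that the network plugin must support NetworkPolicy, which Weave Net does)
kubectl apply -f - <<EOF
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: training-allow-trusted     # hypothetical name
spec:
  podSelector:
    matchLabels:
      app: training                # applies to the Training App pods
  ingress:
  - from:
    - podSelector:
        matchLabels:
          trusted: "yes"           # only pods carrying this label may connect
EOF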
  • more info: kubernetes.io/docs/concepts/services-networking/network-policies/


Ingress

  • deploy Nginx as Ingress Controller
kubectl apply -f https://raw.githubusercontent.com/nginxinc/kubernetes-ingress/v1.3.0/install/common/ns-and-sa.yaml
kubectl apply -f https://raw.githubusercontent.com/nginxinc/kubernetes-ingress/v1.3.0/install/common/default-server-secret.yaml
kubectl apply -f https://raw.githubusercontent.com/nginxinc/kubernetes-ingress/v1.3.0/install/common/nginx-config.yaml
kubectl apply -f https://raw.githubusercontent.com/nginxinc/kubernetes-ingress/v1.3.0/install/rbac/rbac.yaml
kubectl apply -f https://raw.githubusercontent.com/nginxinc/kubernetes-ingress/v1.3.0/install/daemon-set/nginx-ingress.yaml
kubectl apply -f https://raw.githubusercontent.com/nginxinc/kubernetes-ingress/v1.3.0/install/service/nodeport.yaml
# alternatively: kubectl apply -f https://raw.githubusercontent.com/nginxinc/kubernetes-ingress/v1.3.0/install/service/loadbalancer.yaml
kubectl get all --namespace=nginx-ingress
  • deploy sample Training App and Ingress Resource and try to access the app
kubectl apply -f http://www.kamilbaran.pl/download/kubernetes/training-ingress-backend.yaml
kubectl apply -f http://www.kamilbaran.pl/download/kubernetes/training-ingress.yaml
kubectl get ingress,svc,deploy,pod
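  • the Ingress Resource maps an HTTP host and path to the training service; a minimal sketch (the name and host below are placeholders, and the real training-ingress.yaml may differ)
kubectl apply -f - <<EOF
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: training-sketch            # hypothetical name
spec:
  rules:
  - host: training.example.com     # placeholder host
    http:
      paths:
      - path: /
        backend:
          serviceName: training
          servicePort: 80
EOF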
  • more info: kubernetes.io/docs/concepts/services-networking/ingress/


Kubernetes in the cloud

Microsoft Azure Kubernetes Service (AKS)

  • enable Azure service providers
az provider register -n Microsoft.Network
az provider register -n Microsoft.Storage
az provider register -n Microsoft.Compute
az provider register -n Microsoft.ContainerService
  • create a resource group
az group create --name training --location westeurope
  • create AKS cluster with 1 or more nodes
az aks create --resource-group training --name training-aks --kubernetes-version 1.11.8 --node-count 1 --node-vm-size Standard_B2s --generate-ssh-keys
  • after a few minutes, connect to the cluster and display the nodes
az aks get-credentials --resource-group training --name training-aks
kubectl get nodes
  • now you are ready to deploy the application
  • the example below is WordPress, which uses
    • "default" storage class to provision two persistent volumes automatically (Azure also offers "premium" storage class)
    • service of type LoadBalancer for the front end component
kubectl apply -f http://www.kamilbaran.pl/download/docker/azure-wordpress-complete.yaml
  • use the command below to check if everything is up and running (this will also take a few minutes)
watch -n1 kubectl get secret,service,deployment,pod,pvc,pv
  • get the external IP address of the web server and try it in the web browser
kubectl get service -o go-template='{{range .status.loadBalancer.ingress}}{{.ip}}{{end}}' wp-wordpress
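  • once the EXTERNAL-IP is assigned, the site should answer over plain HTTP
IP=$(kubectl get service -o go-template='{{range .status.loadBalancer.ingress}}{{.ip}}{{end}}' wp-wordpress)
curl -I "http://$IP"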
  • delete the entire cluster when it's no longer needed
az group delete --name training --yes --no-wait