Kubernetes (KB)


Environment preparation - local virtual machines

  • Change the network configuration of server-s1, server-s2 and server-s3 from DHCP to a static IP (edit /etc/network/interfaces); see the sketch after this list.
  • Verify the host names and IP addresses in /etc/hostname and /etc/hosts.
  • Remove the swap partition from /etc/fstab and disable swap for the current session (kubelet refuses to start with swap enabled).
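  • a minimal sketch of the static IP change and the swap removal; the interface name and addresses below are placeholders and must be adapted to your lab network
# /etc/network/interfaces (example entry; adjust the address per server)
auto enp0s3
iface enp0s3 inet static
    address 11.0.2.11
    netmask 255.255.255.0
    gateway 11.0.2.1
# disable swap immediately and comment it out in /etc/fstab so it stays off after reboot
sudo swapoff -a
sudo sed -i '/ swap / s/^/#/' /etc/fstab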
# on your client machine generate keys, then transfer the public key to server-s1, server-s2 and server-s3
$ ssh-keygen
$ ssh-copy-id student@server-s1
$ ssh-copy-id student@server-s2
$ ssh-copy-id student@server-s3
# check the connection
$ ssh student@server-s1


How to create a secure Kubernetes cluster?

Docker installation

sudo apt-get update && sudo apt-get install -y apt-transport-https ca-certificates curl software-properties-common
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
sudo apt-get update && apt-cache madison docker-ce
sudo apt-get install -y docker-ce=5:20.10.8~3-0~ubuntu-focal
sudo usermod -aG docker student
  • more info: docs.docker.com/engine/installation/linux/docker-ce/ubuntu/ and kubernetes.io/docs/setup/production-environment/container-runtimes/
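  • the container runtime guide linked above recommends configuring Docker to use the systemd cgroup driver so it matches the kubelet; a minimal sketch (assumes /etc/docker/daemon.json does not exist yet)
# /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": { "max-size": "100m" },
  "storage-driver": "overlay2"
}
sudo systemctl daemon-reload && sudo systemctl restart docker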


Kubernetes installation

sudo apt-get update && sudo apt-get install -y apt-transport-https bash-completion curl
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
sudo add-apt-repository "deb http://apt.kubernetes.io/ kubernetes-xenial main"
sudo apt-get update && sudo apt-get install -y kubeadm=1.21.5-00 kubectl=1.21.5-00 kubelet=1.21.5-00
source <(kubectl completion bash) && echo "source <(kubectl completion bash)" >> ~/.bashrc
source <(kubeadm completion bash) && echo "source <(kubeadm completion bash)" >> ~/.bashrc
  • more info: kubernetes.io/docs/setup/independent/create-cluster-kubeadm/
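  • because specific package versions are pinned above, hold them so a routine apt upgrade does not replace them
sudo apt-mark hold kubeadm kubectl kubelet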


Cluster initialisation

  • initialise the master on server-s3 and configure kubectl
sudo kubeadm init
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
  • check the status of the master node and pods running in the kube-system namespace
kubectl get node,pod --namespace=kube-system
kubectl describe pod --namespace=kube-system kube-dns-86f...
  • start cluster networking and check the status once again
kubectl apply -f https://github.com/weaveworks/weave/releases/download/v2.8.1/weave-daemonset-k8s.yaml
  • optionally allow pods to be scheduled on the master node (useful if you need a single-node Kubernetes cluster)
kubectl taint nodes --all node-role.kubernetes.io/master-
  • join server-s1 and server-s2 to the cluster as worker nodes (aka minions)
ssh student@server-s1
sudo kubeadm join --token bf37a8... 11.0.2.10:6443 --discovery-token-ca-cert-hash sha256:9dfed...
exit
ssh student@server-s2
sudo kubeadm join --token bf37a8... 11.0.2.10:6443 --discovery-token-ca-cert-hash sha256:9dfed...
exit
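  • if you no longer have the original join command or the token has expired, generate a new one on the master
sudo kubeadm token create --print-join-command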
  • check the status of all nodes
kubectl get nodes
  • more info: kubernetes.io/docs/setup/independent/create-cluster-kubeadm/


Deploying sample applications in a Kubernetes cluster

Deploy Training App

  • deploy an app by creating Deployment and Service resources, then check the status of both objects
apiVersion: apps/v1
kind: Deployment
metadata:
  name: training
  labels:
    app: training
spec:
  selector:
    matchLabels:
      app: training
  replicas: 3
  template:
    metadata:
      labels:
        app: training
    spec:
      containers:
      - name: training
        image: kamilbaran/training:app-v1
---
apiVersion: v1
kind: Service
metadata:
  name: training
  labels:
    app: training
spec:
  type: ClusterIP
  selector:
    app: training
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
kubectl apply -f https://www.kamilbaran.pl/training/kubernetes/training-app-base.yaml
kubectl get service,deployment,pod
  • run the following loop in a second terminal window to see the application responses
while :; 
do
  ((i++))
  curl --connect-timeout 2 $(kubectl get service -o go-template='{{(.spec.clusterIP)}}' training):80/?count=$i;
  sleep 1;
done
  • scale the deployment up to five pods by changing spec.replicas in training-app-base.yaml and re-applying the file or by running the following command
kubectl scale deployment training --replicas=5
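  • for reference, the equivalent change in training-app-base.yaml is just the replicas field of the Deployment (fragment only)
spec:
  replicas: 5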


Rolling update

  • imagine that you have a newer version of the application and you want to replace the current one
    • the current version is based on training-app-base.yaml
    • the new version uses the image kamilbaran/training:app-v2
    • make sure that you have multiple replicas of the current version running
  • update the deployment by changing .spec.template.spec.containers[0].image in training-app-base.yaml and re-applying the file or by running the following command
kubectl set image deployment/training training=kamilbaran/training:app-v2
  • before and after every step you can display all objects that are affected
kubectl get deployment,replicaset,pod -l app=training
  • run the command below to check the current rollout status
kubectl rollout status deployment training
  • to go back to the previous working configuration, run the command below, then check the status once again
kubectl rollout undo deployment training
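  • you can also list the recorded revisions and roll back to a specific one
kubectl rollout history deployment training
kubectl rollout undo deployment training --to-revision=1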
  • Exercise: upgrade the app to image kamilbaran/training:app-v3


Blue-Green deployment

  • blue/green (B/G) deployment in Kubernetes is straightforward and is helpful when you need to switch many components of your app at once
  • to achieve this, you will need two deployments (training and training-new) and one service (training)


  • the general idea is as follows:
  1. create the current version that needs to be updated (use training-app-base.yaml as a starting point)
  2. before creating the second deployment (training-new) use labels and selectors to make sure that the service will use pods only from the first deployment (training)
  3. create the second deployment (training-new) and wait for all pods to become available
  4. change the selectors in the service to use pods only from the second deployment (training-new)
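  • a minimal sketch of the selector switch described in steps 2 and 4; the step files referenced below are not reproduced here, so the exact label values (release: stable / release: new) are assumptions
# service selector before the switch (step 2)
spec:
  selector:
    app: training
    release: stable
# service selector after the switch (step 4)
spec:
  selector:
    app: training
    release: new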


  • blue/green deployment can be implemented in a Kubernetes cluster as follows:
    • start the current/base version
kubectl apply -f https://www.kamilbaran.pl/training/kubernetes/training-app-base.yaml
    • before and after every step you can display all objects that are affected (also keep an eye on the results returned by the loop)
kubectl get service,deployment,pod -l app=training -L app -L release -o wide
    • before creating the second deployment (training-new) use labels and selectors to make sure that the service will use pods only from the first deployment (training)
    • step 1: add new label release=stable to deployment
    • step 2: change service selector to include the release label
kubectl apply -f https://www.kamilbaran.pl/training/kubernetes/training-app-bg-step1.yaml
kubectl apply -f https://www.kamilbaran.pl/training/kubernetes/training-app-bg-step2.yaml
    • step 3: create the second deployment training-new (it uses an image with the new version and two labels: app=training, release=new)
kubectl apply -f http://www.kamilbaran.pl/download/kubernetes/training-app-bg-step3.yaml
    • for now, the service still uses only the pods from the training deployment
    • step 4: change the value of label release from stable to new to switch to the pods with a new version
kubectl apply -f https://www.kamilbaran.pl/training/kubernetes/training-app-bg-step4.yaml
    • at this point the service uses the new version, but there are still two deployments whose extra pods waste resources
    • you should clean up the cluster; one possible way is:
    • step 5: update the training deployment to new version
    • step 6: switch the service to use the release=stable selector
    • step 7: delete or downscale training-new to zero and leave it for the next image update
kubectl apply -f https://www.kamilbaran.pl/training/kubernetes/training-app-bg-step5.yaml
kubectl apply -f https://www.kamilbaran.pl/training/kubernetes/training-app-bg-step6.yaml
kubectl apply -f https://www.kamilbaran.pl/training/kubernetes/training-app-bg-step7.yaml
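  • for step 7, downscaling training-new instead of deleting it can be done with a single command
kubectl scale deployment training-new --replicas=0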


Canary deployment

  • using Canary deployment in Kubernetes is useful if you want to try the new version on a selected subset of requests
  • for example, let's say that you want to send only 10% of all requests to the new version of your app and, if everything is OK, increase the ratio from 9:1 to 7:3
  • to achieve this, you will need two deployments (training and training-new) and one service (training)


  • the general idea is as follows:
  1. create the current version that needs to be updated (use training-app-base.yaml as a starting point)
  2. before creating the second deployment (training-new) use labels and selectors to make sure that the service will use pods only from the first deployment (training)
  3. create the second deployment (training-new) and wait for all pods being available
  4. change the selectors in the service to use pods from both deployments at the same time
  5. modify the ratio by changing the number of pods in the first and second deployment (you will send 20% of requests to the new version if you start eight pods in the first deployment and only two pods in the second deployment)
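  • a minimal sketch of the replica split: the service selects only app=training (no release label), so requests are spread roughly in proportion to the pod counts; the deployment names follow the steps above
# service: select pods from both deployments
spec:
  selector:
    app: training
# 8 replicas of the current version + 2 replicas of the new version ~= 20% canary traffic
kubectl scale deployment training --replicas=8
kubectl scale deployment training-new --replicas=2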


  • Exercise: write Canary deployment step-by-step config files for the Training app (like in the B/G deployment presented above)


Deploy WordPress

kubectl create secret generic mysql-pass --from-literal=password=NobleProg
kubectl apply -f https://www.kamilbaran.pl/training/kubernetes/kube-wp-volumes.yaml
kubectl apply -f https://www.kamilbaran.pl/training/kubernetes/kube-wp-mysql.yaml
kubectl apply -f https://www.kamilbaran.pl/training/kubernetes/kube-wp-wordpress.yaml
  • display all objects related to WordPress
kubectl get secret,service,deployment,pod,pvc,pv
  • display the logs of the mysql container running in the mysql-bcc89f687-hn2tl pod
kubectl logs mysql-bcc89f687-hn2tl -c mysql
  • get more details about the current state of the wordpress-57955d6d4f-9qxvz pod (including pod-related events)
kubectl describe pod wordpress-57955d6d4f-9qxvz
  • edit the persistent volume to increase its size
kubectl edit pv pv-1
  • delete the MySQL pod to check whether the app persists its data
kubectl delete pod mysql-bcc89f687-hn2tl
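  • the Deployment controller recreates the pod automatically; watch the replacement come up, then verify in WordPress that the data is still there
kubectl get pods -w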

Kubernetes objects

StatefulSet

  • deploy a sample StatefulSet and the related Service
kubectl apply -f https://www.kamilbaran.pl/training/kubernetes/kube-statefulset.yaml
kubectl get service,statefulset,pod -o wide -l example=StatefulSet
  • start a pod based on the alpine image (or attach to an existing one) and install curl
kubectl run -i -t alpine --image=alpine --namespace=default
apk add --update curl
kubectl attach -i -t alpine-695c86655b-xfnzg
  • use curl to access simple-app pods
curl simple-app-0.simple-app
curl simple-app-0.simple-app.default.svc.cluster.local
curl simple-app.default.svc.cluster.local
  • remove everything related to the StatefulSet example
kubectl delete all -l example=StatefulSet
  • Exercise: start the MongoDB Counter App using StatefulSet with three replicas
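  • a possible starting skeleton for the exercise (a sketch only: image, port and names are assumptions, and persistent storage is intentionally left out)
apiVersion: v1
kind: Service
metadata:
  name: mongo
  labels:
    example: StatefulSet
spec:
  clusterIP: None   # headless service, gives each pod a stable DNS name
  selector:
    app: mongo
  ports:
  - port: 27017
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mongo
  labels:
    example: StatefulSet
spec:
  serviceName: mongo
  replicas: 3
  selector:
    matchLabels:
      app: mongo
  template:
    metadata:
      labels:
        app: mongo
    spec:
      containers:
      - name: mongo
        image: mongo:4.4
        ports:
        - containerPort: 27017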


NetworkPolicy

  • deploy the sample Training App and make sure there are no existing network policies
kubectl apply -f https://www.kamilbaran.pl/training/kubernetes/training-app-base.yaml
kubectl get svc,deploy,pod,networkpolicy
  • use wget to access the deployed application from another container
kubectl run alpine -it --rm --image=alpine -- /bin/sh
wget --spider --timeout 1 training
  • apply network policy that will allow connections to Training App only from pods labeled trusted=yes
kubectl apply -f https://www.kamilbaran.pl/training/kubernetes/training-network-policy.yaml
  • try to access the application again, then add the label trusted=yes to the alpine pod and try one more time
wget --spider --timeout 1 training
kubectl label pod --overwrite alpine-... trusted="yes"
wget --spider --timeout 1 training
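  • for reference, the applied policy likely looks roughly like the sketch below (an assumption based on the behaviour above; the real training-network-policy.yaml may differ)
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: training-allow-trusted
spec:
  podSelector:
    matchLabels:
      app: training
  ingress:
  - from:
    - podSelector:
        matchLabels:
          trusted: "yes"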
  • more info: kubernetes.io/docs/concepts/services-networking/network-policies/


Ingress

  • deploy NGINX as the Ingress Controller
kubectl apply -f https://raw.githubusercontent.com/nginxinc/kubernetes-ingress/v1.3.0/install/common/ns-and-sa.yaml
kubectl apply -f https://raw.githubusercontent.com/nginxinc/kubernetes-ingress/v1.3.0/install/common/default-server-secret.yaml
kubectl apply -f https://raw.githubusercontent.com/nginxinc/kubernetes-ingress/v1.3.0/install/common/nginx-config.yaml
kubectl apply -f https://raw.githubusercontent.com/nginxinc/kubernetes-ingress/v1.3.0/install/rbac/rbac.yaml
kubectl apply -f https://raw.githubusercontent.com/nginxinc/kubernetes-ingress/v1.3.0/install/daemon-set/nginx-ingress.yaml
kubectl apply -f https://raw.githubusercontent.com/nginxinc/kubernetes-ingress/v1.3.0/install/service/nodeport.yaml   # or loadbalancer.yaml from the same directory
kubectl get all --namespace=nginx-ingress
  • deploy the sample Training App and an Ingress resource, then try to access the app
kubectl apply -f https://www.kamilbaran.pl/training/kubernetes/training-ingress-backend.yaml
kubectl apply -f https://www.kamilbaran.pl/training/kubernetes/training-ingress.yaml
kubectl get ingress,svc,deploy,pod
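  • for reference, the Ingress resource probably looks roughly like the sketch below (an assumption: the host name is an example and the real training-ingress.yaml may use a different API version or rules)
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: training
spec:
  rules:
  - host: training.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: training
            port:
              number: 80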
  • more info: kubernetes.io/docs/concepts/services-networking/ingress/


Helm

wget https://get.helm.sh/helm-v2.16.3-linux-386.tar.gz
tar -xvpf helm-v2.16.3-linux-386.tar.gz
sudo mv linux-386/helm /usr/local/bin/helm
kubectl apply -f rbac-config.yaml
helm init --service-account tiller
source <(helm completion bash) && echo "source <(helm completion bash)" >> ~/.bashrc
# rbac-config.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: tiller
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: tiller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: tiller
    namespace: kube-system
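  • once Tiller is running, you can verify the installation and try deploying a chart; a minimal sketch using the (Helm 2) stable repository, with training-mysql as an example release name
helm version
helm repo update
helm search mysql
helm install --name training-mysql stable/mysql
helm ls
helm delete --purge training-mysql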