Advanced Kubernetes with DOKS
A complete guide
Cluster Provisioning
Nodes
1. Master node - manages the worker nodes
2. Worker nodes - host the applications

Master node components
- API server - entry point for all requests to the cluster
- Scheduler - decides where to spin up a pod, i.e. which node is the best fit
- etcd - key:value store mapping what is running where

Worker node components
- Kubelet - agent that takes requests from the API server
- Kube-Proxy - handles communication across application containers
- cAdvisor - collects metrics at the container, pod, and node level, used during scaling

Node - runs the Kubernetes components (Kubelet, Kube-Proxy, cAdvisor, DNS) and the container runtime (e.g. Docker)
Node Pool - group of nodes of the same configuration; contains nodes
Cluster - contains all node pools
- Provisioning a DOKS cluster
- Configuring a local machine or a remote management server to manage DigitalOcean Kubernetes cluster
cd ~/.kube/
cp {path}/prj-kubeconfig.yaml .
doctl kubernetes cluster kubeconfig save prj # set the current context to this cluster
kubectl get nodes
3. Node pools and Sizing Kubernetes worker nodes
4. Running a sample application
kubectl run nginx-pod-name --image=nginx # creates a single pod; the --generator and --replicas flags are removed in recent kubectl versions
kubectl get pods
kubectl get all
Scalable Application
- Deploy web applications to Kubernetes cluster for easier scaling — ReplicaSet
kubectl describe pod nginx-pod-name # check whether the pod's container is running
kubectl delete pod nginx-pod-name
# A ReplicaSet wraps all pods running the same application and keeps them reachable across different pod IP addresses
# Create a YAML file for the deployment
vim nginx-deployment.yaml
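The image this section originally referred to is not included. A minimal nginx-deployment.yaml sketch that matches the commands and the 2-pod example below (names and image tag are assumptions) could look like:

```yaml
# nginx-deployment.yaml - minimal Deployment sketch, 2 replicas
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx            # must match the pod template labels below
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.25   # assumed tag; any nginx image works
        ports:
        - containerPort: 80
```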
kubectl create -f nginx-deployment.yaml
kubectl get deploy
kubectl get all # say 2 pods are running
kubectl delete pod {pod-name}
# new one gets created as soon as one is deleted
kubectl get all # 2 pods will be running

2. Rolling out new versions and rollback of application servers
# change nginx-deployment.yaml
kubectl apply -f nginx-deployment.yaml
kubectl get all # old + new: 4 pods (2 running)
# New-version pods are rolled out, but traffic is not routed to them for a timer of 60 secs. After 60 secs, the older pods get deleted and traffic is routed to the new pods
kubectl describe deployment.apps/nginx-deployment
# Rollback to an older version - Blue-Green Deployment
kubectl rollout history deployment.apps/nginx-deployment
kubectl rollout undo deployment.apps/nginx-deployment --to-revision=1
3. Metric server for Scaling decisions and Cluster autoscaling using HPA
kubectl scale deployment nginx-deployment --replicas=5 # scale up
kubectl scale deployment nginx-deployment --replicas=1 # scale down

# Metric server collects all the numbers and sends them to the HPA
# HPA needs resource requests to be specified in the nginx-hpa.yaml file
kubectl autoscale deployment nginx-hpa --min=1 --max=5 --cpu-percent=1
kubectl get hpa
# In a new tab, generate more traffic by running an infinite loop
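The HPA compares observed CPU usage against the pod's declared CPU request, so the deployment must set resources. A sketch of the relevant container section of a hypothetical nginx-hpa.yaml (all values are illustrative assumptions):

```yaml
# Fragment of the Deployment's pod template in nginx-hpa.yaml
    spec:
      containers:
      - name: nginx
        image: nginx:1.25
        resources:
          requests:
            cpu: 100m   # --cpu-percent is evaluated relative to this request
          limits:
            cpu: 200m   # hard ceiling for the container
```

Without the `requests.cpu` field the HPA has no baseline and reports the target metric as unknown.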
# go inside the pod
kubectl exec -it nginx-hpa-{generated-name} -- bash
root@nginx-hpa-{generated-name}# while true; do true; done # switch back to the original tab
kubectl top pods
kubectl get pods # pods increase
kubectl get hpa
Persistent Storage
- Provisioning persistent storage in a Kubernetes cluster; when a container restarts, it loses any files written inside it
2. Exploring Storage Classes, Persistent Volume Claims, Persistent Volumes
3. Overview of Container Storage Interface (CSI)
4. Running StatefulSets
5. Achieving high availability of stateful workloads
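As an illustration of the storage pieces above, a PersistentVolumeClaim against DigitalOcean block storage could look like this (do-block-storage is the default CSI storage class on DOKS; the claim name and size are placeholders):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-pvc
spec:
  accessModes:
  - ReadWriteOnce                    # mountable read-write by a single node
  storageClassName: do-block-storage # DOKS default CSI-backed storage class
  resources:
    requests:
      storage: 5Gi
```

The CSI driver dynamically provisions a DigitalOcean volume to satisfy the claim, and the bound PersistentVolume survives pod restarts.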
Advanced Networking
- Load balancer, Ingress controllers and ingress resources
- Understanding when to use Nginx and when to use the cloud load balancer
- Networking plugins, CNI
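A minimal Ingress resource routing a hostname to a Service, for an nginx ingress controller, might look like the sketch below (hostname and service name are placeholders):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx-ingress
  annotations:
    kubernetes.io/ingress.class: nginx  # handled by an nginx ingress controller
spec:
  rules:
  - host: app.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: nginx-service   # hypothetical Service in front of the pods
            port:
              number: 80
```

The ingress controller itself is typically exposed through a single cloud load balancer, so many hostnames and paths can share one public IP.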
Advanced Scheduling
- Node Affinity/Anti-Affinity
- Taints and Tolerations
- Pod Affinity/Anti-Affinity
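To illustrate the scheduling concepts above, a pod spec combining a toleration with required node affinity could look like this sketch (the taint key/value and pool name are placeholders; the doks.digitalocean.com/node-pool label is applied by DOKS to its nodes):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: scheduled-pod
spec:
  tolerations:
  - key: dedicated        # tolerates nodes tainted with: dedicated=gpu:NoSchedule
    operator: Equal
    value: gpu
    effect: NoSchedule
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: doks.digitalocean.com/node-pool
            operator: In
            values:
            - gpu-pool    # hypothetical node pool name
  containers:
  - name: app
    image: nginx:1.25
```

Taints repel pods that lack a matching toleration, while affinity attracts the pod to specific nodes; combining both pins a workload to a dedicated pool.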