K8s Basic Components

  • pod - our own app, e.g. a Docker container wrapped in a K8s layer. There will typically be multiple application pods, and each pod gets its own private internal IP address.
  • service - a static IP address attached to a pod (or set of pods) so that other components can communicate with it, for instance a database / session store / storage.
  • ingress - the external service; it has a public IP and routes outside traffic into the cluster.
  • ConfigMap - external configuration of the app.
  • Secret - base64-encoded configuration of the app (credentials, certificates, etc.). See the sketch after this list for how pods consume both.
  • Volumes - external local or remote (cloud) storage mounted into pods to keep persistent data, for instance for a database. K8s doesn't manage data persistence itself.

A service / ingress has a permanent IP address, and a service acts as a load balancer too.
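As a sketch of how a pod consumes a ConfigMap and a Secret (all names here are made up for illustration):

apiVersion: v1
kind: Pod
metadata:
  name: my-app
spec:
  containers:
  - name: my-app
    image: my-app:1.0          # hypothetical image
    env:
    - name: DB_URL             # plain configuration from a ConfigMap
      valueFrom:
        configMapKeyRef:
          name: my-configmap
          key: db_url
    - name: DB_PASSWORD        # sensitive configuration from a Secret
      valueFrom:
        secretKeyRef:
          name: my-secret
          key: db_password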

K8s Deployment

A Deployment is for stateless apps.
Managing / orchestrating pods is done through deployments.
In most cases we will be working with deployments, not directly with pods.
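Scaling, for example, is a one-line change at the deployment level rather than per pod (the deployment name here is taken from the examples later in these notes):

kubectl scale deployment nginx-depl --replicas=3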

K8s StatefulSet

Databases should be created using a StatefulSet because they have state.
This is not easy to get right.

The best option is to store the database outside the K8s cluster.
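A minimal StatefulSet sketch (names and image are assumptions; a real database would also need credentials and a volumeClaimTemplates section for its storage):

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mysql
spec:
  serviceName: mysql-headless   # a headless Service that gives the pods stable network identities
  replicas: 1
  selector:
    matchLabels:
      app: mysql
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
      - name: mysql
        image: mysql:8.0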

K8s Architecture

Master Node

The master node works differently from the worker nodes. The master makes sure that communication between the pods and everything else works.
There are 4 processes running on every master node that control the cluster state and the worker nodes:

  • API Server - the cluster gateway.
    • we interact with K8s through this API server, using kubectl, the K8s dashboard, etc.
    • it is also the gatekeeper for authentication whenever we schedule new pods, deploy new applications, create new services, etc.
  • Scheduler - schedules tasks (pods) onto nodes.
    • for instance, when the API server gets a request to schedule a new pod, it validates the request and forwards it to the scheduler.
    • the scheduler has an intelligent way of deciding on which node to put that new pod.
  • Controller Manager
    • detects state changes, like crashing pod(s), and recovers the cluster state as soon as possible by sending requests to the scheduler.
  • etcd
    • a key-value store of the cluster state.
    • cluster changes get updated in this key-value store.
    • the scheduler and controller manager work on the basis of etcd data.
    • note: etcd doesn't store application data, only cluster state.
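On minikube, for instance, these master processes can be seen running as pods in the kube-system namespace:

kubectl get pods -n kube-system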

Worker Node

Each Worker can have multiple pods on it.

Every worker node must have three processes:

  • container runtime - every node needs a container runtime; in most cases it will be Docker.
  • kubelet - interacts with both the node and the container runtime. kubelet starts the pods with their containers on the node.
  • kube-proxy (k-proxy) - forwards requests from services to pods. kube-proxy must be installed on every node.

Communication between the node(s) happens through services, which are basically load balancers.
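To see each node together with its container runtime and internal IP:

kubectl get nodes -o wide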

Cluster Set Up

Master nodes need fewer resources than worker nodes; the worker nodes, which run the applications, need more resources.
In production there are usually at least 2 master nodes and 3 worker nodes.
To add a new master / worker server:

  • get a new bare server.
  • install all the master / worker processes on it.
  • add it to the K8s cluster.

The number of master / worker nodes can be increased indefinitely according to need; a sketch of the join step is below.
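On kubeadm-based clusters, for instance, that last step looks roughly like this (token and hash are placeholders, and kubeadm, kubelet and a container runtime must already be installed on the new server):

kubeadm join {master-ip}:6443 --token {token} --discovery-token-ca-cert-hash sha256:{hash}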

MiniKube & Kubectl

minikube

minikube is a one-node K8s cluster: the master processes and worker processes both run on one machine, with the Docker container runtime pre-installed. It runs through VirtualBox, Hyper-V, or any other hypervisor, and we can use it for testing purposes.

kubectl

kubectl is a command line tool for interacting with a K8s cluster, such as minikube.

  • the most powerful client for communicating with a K8s cluster; it talks to the API Server master process.
  • worker processes are initiated through the API server using kubectl, because the API Server among the master processes is the only entry point into the cluster.

Note: kubectl is not only for a minikube cluster; it is used to communicate with cloud (production) clusters as well.
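Which cluster kubectl talks to is determined by the current context in its kubeconfig file; switching between minikube and another cluster, for example:

kubectl config get-contexts            # list the configured clusters
kubectl config use-context minikube    # point kubectl at the minikube cluster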

Install minikube


Run Minikube

If Docker is installed, it is better to use the Docker driver, like below.
minikube start --driver=docker

To list the nodes, run kubectl get nodes. We can also check the minikube status by running
minikube status

kubectl basic commands

kubectl get pod
kubectl get services
kubectl create deployment nginx-depl --image=nginx
kubectl get deployment
kubectl get replicaset
kubectl edit deployment nginx-depl
kubectl delete deployment nginx-depl

debugging

kubectl logs {pod-name}
kubectl exec -it {pod-name} -- /bin/bash

Create Pods

  • pod is the smallest unit inside the K8s cluster.
  • deployment is an abstraction layer over pods; basically, we create pods using a deployment.

For instance, if we need to deploy an image to the K8s cluster, we use the following syntax.

kubectl create deployment {deployment name} --image={image}

deployment name - name of the deployment
image - docker container image

PS C:\WINDOWS\system32> kubectl create deployment nginx-depl --image nginx
deployment.apps/nginx-depl created
kubectl get deployment
NAME         READY   UP-TO-DATE   AVAILABLE   AGE
nginx-depl   0/1     1            0           10s
kubectl get pod
NAME                          READY   STATUS              RESTARTS   AGE
nginx-depl-5c8bf76b5b-spz79   0/1     ContainerCreating   0          29s

Behind the scenes there is another layer between the deployment and the pod, automatically managed by the K8s deployment, called a ReplicaSet.

kubectl get replicaset

The Deployment manages the ReplicaSet.
The ReplicaSet manages all the replicas of the pod.
The Pod is an abstraction of the container.
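The layering shows up in the names: the pod from the session above, nginx-depl-5c8bf76b5b-spz79, carries the ReplicaSet's hash, so listing the replica sets would show something like:

kubectl get replicaset
NAME                    DESIRED   CURRENT   READY   AGE
nginx-depl-5c8bf76b5b   1         1         1       1m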

kubectl edit deployment nginx-depl

will open an auto-generated configuration with default values. We can edit it, and K8s will redeploy the pod for us according to the changes.

kubectl describe pod {podname}

will show the status history of the pod.

kubectl exec -it {podname} -- /bin/bash
can be used to get a shell inside the pod, like we log in to a docker container.

All the CRUD operations on pods are done through the deployment.

K8s YAML Configuration File

Manually managing / executing all the deployments and other tasks is not easy.
We can manage this through a K8s configuration file. To execute a configuration file we run the command

kubectl apply -f nginx-deployment.yaml

Example Deployment YAML file

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.21
        ports:
        - containerPort: 8080


Once applied, we can edit the file and run the same command to update the pods according to the changes. _K8s_ will figure out what changed and apply it.

There are mainly three parts in the configuration YAML file. 

  - _**metadata**_ of the component: name, labels, etc.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx

  - _**spec**_ (specification) attributes will be specific to the kind of the component.

  - _**status**_ - automatically generated by K8s. K8s compares the desired state (spec) with the actual state (status) and manages the component accordingly. etcd in the master node stores the status.

It is best practice to store the YAML files with the code, versioned in Git.

The way the connection is established is by using labels and selectors: as you can see, metadata has got labels and spec has got selectors.

> Similar to the Deployment kind there is a Service kind as well, and the service spec has a selector that matches the labels in the deployment. This is how K8s connects pods and services. The service must also know which port of the pod it should register.

_Example Service YAML file_

apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  selector:
    app: nginx
  ports:
    - protocol: TCP
      port: 80
      targetPort: 8080

Ports in Service and Pod

The Service has its own port (port, where the service itself is reachable), and it forwards traffic to the pod's port (targetPort, which should match the containerPort of the deployment).
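One quick way to verify the wiring (using the names from the examples above) is to check that the service's endpoints list the pod IPs with the targetPort:

kubectl describe service nginx-service   # shows TargetPort and Endpoints
kubectl get endpoints nginx-service      # the pod IP:port pairs the service forwards to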

  • to get more details of the pod use

kubectl get pod -o wide

  • to get the deployment in YAML format

kubectl get deployment nginx-deployment -o yaml > nginx-deployment-result.yaml

K8s Namespaces

Namespaces are used to organise resources in a K8s cluster.

There are four namespaces out of the box (minikube adds a fifth):

  • kubernetes-dashboard comes only with minikube.
  • kube-system holds system processes, master and kubectl processes.
  • kube-public contains publicly accessible data. It has a ConfigMap with cluster information, which we can see using kubectl cluster-info.
  • kube-node-lease holds information about the heartbeats of nodes; it determines the availability of each node.
  • default is the namespace for the resources we create.

We can create a new namespace using

kubectl create namespace {namespace name}

Another way is to use a configuration YAML file, like the sketch below.
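A minimal namespace manifest (the name is an example):

apiVersion: v1
kind: Namespace
metadata:
  name: my-namespace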

Namespaces are used like packages in programming: we can organize different types of resources by name, for instance database, app, etc.

Each namespace should have its own ConfigMap and Secret.
A Volume can't be bound to a namespace; volumes live cluster-wide.

We can set the namespace while applying a YAML file

kubectl apply -f mysql-configmap.yaml --namespace={namespace name}

Another way, and the best practice, is to set it inside the configuration file itself, like

apiVersion: v1
kind: ConfigMap
metadata:
  name: mysql-configmap
  namespace: {namespace}
data:
  db_url: mysql-service.database

kubectx tool

The kubectx tool ships with the kubens command, which lists all the namespaces.
kubens new-namespace changes the active namespace from default to new-namespace.

K8s Ingress

Official Documentation

Ingress is an API object that manages external access to the services in a cluster, typically over HTTP.
Ingress may provide load balancing, SSL termination and name-based virtual hosting. It exposes HTTP and HTTPS routes from outside the cluster to services within the cluster; traffic routing is controlled by rules defined on the Ingress resource.

Terminology

  • Edge router: A router that enforces the firewall policy for your cluster. This could be a gateway managed by a cloud provider or a physical piece of hardware.
  • Cluster network: A set of links, logical or physical, that facilitate communication within a cluster according to the Kubernetes networking model.
  • Service: A Kubernetes Service that identifies a set of Pods using label selectors. Unless mentioned otherwise, Services are assumed to have virtual IPs only routable within the cluster network.

Minimal Ingress Configuration

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: minimal-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - http:
      paths:
      - path: /testpath
        pathType: Prefix
        backend:
          service:
            name: test
            port:
              number: 80

Ingress Rules

  • An optional host. In this example, no host is specified, so the rule applies to all inbound HTTP traffic through the IP address specified. If a host is provided (for example, foo.bar.com), the rules apply to that host.
  • A list of paths (for example, /testpath), each of which has an associated backend defined with a service.name and a service.port.name or service.port.number. Both the host and path must match the content of an incoming request before the load balancer directs traffic to the referenced Service.
  • A backend is a combination of Service and port names as described in the Service doc or a custom resource backend by way of a CRD. HTTP (and HTTPS) requests to the Ingress that matches the host and path of the rule are sent to the listed backend.

An Ingress with no rules sends all traffic to a single default backend. The _defaultBackend_ is conventionally a configuration option of the Ingress controller and is not specified in your Ingress resources. If none of the hosts or paths match the HTTP request in the Ingress objects, the traffic is routed to your default backend.

We can set wildcard rules:

host: "*.foo.com"
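In context, a wildcard host sits in the rules list just like a literal one (service name and port reuse the minimal example above):

spec:
  rules:
  - host: "*.foo.com"
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: test
            port:
              number: 80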

Ingress class

Ingresses can be implemented by different controllers, often with different configuration. Each Ingress should specify a class, a reference to an IngressClass resource that contains additional configuration including the name of the controller that should implement the class.

apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  name: external-lb
spec:
  controller: example.com/ingress-controller
  parameters:
    apiGroup: k8s.example.com
    kind: IngressParameters
    name: external-lb
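An Ingress then selects this class via spec.ingressClassName:

# fragment of an Ingress spec referencing the IngressClass above
spec:
  ingressClassName: external-lb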

Configuring HTTPS

We can do that through the Ingress YAML file.

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: minimal-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  tls:
  - hosts:
    - myapp.com
    secretName: myapp-secret-tls
  rules:

A secret should be configured to achieve this.

apiVersion: v1
kind: Secret
metadata:
  name: myapp-secret-tls
  namespace: default
data:
  tls.crt: {base64-encoded cert}
  tls.key: {base64-encoded key}
type: kubernetes.io/tls
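Rather than base64-encoding the files by hand, the same secret can be generated from the certificate files (paths are placeholders):

kubectl create secret tls myapp-secret-tls --cert={path/to/tls.crt} --key={path/to/tls.key}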

Helm

  • Helm is the package manager for K8s. It is used to package YAML files and distribute them.

  • A bundle of those YAML files is called a Helm Chart.

Database applications, Elasticsearch, and monitoring applications like Prometheus have Helm Charts available.

There are public and private registries for Helm Charts.

  • Template Engine.
    Helm is also a template engine for YAML files. We can make values dynamic so that the YAML files are easily managed from one common blueprint: the values are replaced by placeholders.

apiVersion: v1
kind: Pod
metadata:
  name: {{ .Values.name }}
spec:
  containers:
  - name: {{ .Values.container.name }}
    image: {{ .Values.container.image }}
    ports:
    - containerPort: {{ .Values.container.port }}

A values.yaml file will hold the values, and an object named Values is created based on that YAML file.
Another option to set values is through the command line with the --set flag.

We can leverage the template engine in the deployment pipeline and replace the values according to the environment.

Helm Chart Structure

mychart/
    Chart.yaml
    values.yaml
    charts/
    templates/
    ...

mychart is the name of the chart.

Chart.yaml contains the meta info about the chart: name, version, dependencies, etc.

values.yaml holds the values for the template files: default values that can be overridden.

The charts/ folder holds the other charts this chart depends on.

The templates/ folder holds the template files.

helm install <chartname>

values.yaml sample

imageName: myapp
port: 8080
version: 1.0.0
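A template in templates/ would then reference these defaults, for instance (a sketch; the surrounding fields are assumed):

# templates/deployment.yaml (fragment)
containers:
- name: {{ .Values.imageName }}
  image: {{ .Values.imageName }}:{{ .Values.version }}
  ports:
  - containerPort: {{ .Values.port }}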

Ways to override the default values:

  • helm install --values=custom-values.yaml <chartname>

  • helm install --set version=2.0.0 <chartname>