Kubernetes Basics

K8s Basic Components

  • Pod - our own app (for instance a Docker container) wrapped with a K8s layer. There will usually be multiple application pods. Each pod gets its own private internal IP address.
  • Service - a static IP address attached to pod(s) so that other components can reach them reliably, for instance a database, session store or storage service.
  • Ingress - the entry point for external traffic; requests from outside go through the ingress instead of exposing a service with a public IP directly.
  • ConfigMap - external configuration of the app.
  • Secret - base64-encoded configuration of the app, for credentials and other sensitive values.
  • Volumes - local or remote (cloud) storage mounted into pods to keep persistent data, for instance for a database. K8s itself doesn't manage data persistence.

A service/ingress has a permanent IP and also acts as a load balancer.

K8s Deployment

A Deployment is for stateless apps.
Managing / orchestrating the pods is done through deployments.
In most cases we will be working with deployments, not directly with the pods.

K8s StatefulSet

Databases should be created using a StatefulSet because they have state.
Setting this up is not easy.

The best option is to store the database outside the K8s cluster.

K8s Architecture

Master Node

The master node works differently from the worker nodes. The master makes sure the pods can communicate and that everything in the cluster works.
Four processes run on every master node and control the cluster state and the worker nodes:

  • API Server - the cluster gateway.
    • we interact with K8s through this API server, using kubectl, the K8s dashboard or other API clients.
    • it is the gatekeeper for authentication - whenever we schedule new pods, deploy new applications, create new services etc., the request goes through the API server.
  • Scheduler - schedules tasks (pods) onto nodes.
    • for instance, when the API server gets a request to schedule a new pod, it validates it and forwards it to the scheduler.
    • the scheduler decides intelligently on which node the new pod should be placed.
  • Controller Manager
    • detects state changes, for instance crashing pod(s), and recovers the cluster state as soon as possible by sending requests to the scheduler.
  • etcd
    • a key-value store of the cluster state.
    • cluster changes get updated in this key-value store.
    • the scheduler and controller manager work on the basis of etcd data.
    • note: etcd won't store application config settings.

Worker Node

Each Worker can have multiple pods on it.

Every node must have three processes

  • container runtime - every node needs a container runtime; in most cases this is Docker.
  • kubelet - interacts with both the node and the container runtime; kubelet starts the pods with their containers on the node.
  • k-proxy (kube-proxy) - forwards requests from services to pod(s); kube-proxy must be installed on every node.

Communication between the node(s) goes through services, which are basically load balancers.

Cluster Set Up

Master nodes need fewer resources than worker nodes; the worker nodes, which run the applications, need more resources.
In production there are usually at least 2 master nodes and 3 worker nodes.
To add a new master / worker node:

  • get a new bare server.
  • install all the master / worker processes.
  • add it to the K8s cluster.

Master / worker nodes can be added indefinitely according to need.

MiniKube & Kubectl

minikube

minikube is a one-node K8s cluster where the master processes and the worker processes both run on one machine, with the Docker container runtime pre-installed. It runs through VirtualBox, Hyper-V or another hypervisor, and we can use it for testing purposes.

kubectl

kubectl is a command-line tool for interacting with a K8s cluster, for instance the minikube cluster.

  • It is the most powerful client for communicating with the K8s cluster, and it talks to the API Server master process.
  • The worker processes are reached through the API server using kubectl, because the API Server in the master processes is the only entry point into the cluster.

Note: kubectl is not only for the minikube cluster; it is used to communicate with cloud clusters (production) as well.

Install minikube

Installation

Run Minikube

If Docker is installed it is better to use the docker driver, like below.
minikube start --driver=docker

To list the nodes run kubectl get nodes, and we can check the minikube status with
minikube status

kubectl basic commands
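
Commands typically used day to day (names in curly braces are placeholders):

kubectl get nodes
kubectl get pods
kubectl get services
kubectl get deployment
kubectl create deployment {name} --image={image}
kubectl delete deployment {name}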

debugging
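
Commands typically used for debugging (the pod name is a placeholder):

kubectl logs {podname}
kubectl describe pod {podname}
kubectl exec -it {podname} -- /bin/bash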

Create Pods

  • A pod is the smallest unit inside the K8s cluster.
  • A deployment is an abstraction layer over pods; basically we create pods using a deployment.

For instance, if we need to deploy an image to the K8s cluster, we use the following syntax.

kubectl create deployment {deployment name} --image={image}

deployment name - name of the deployment
image - docker container image
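
For example, to create an nginx deployment (nginx-depl is the name the later commands refer to):

kubectl create deployment nginx-depl --image=nginx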

Behind the scenes there is another layer between the deployment and the pod, called a ReplicaSet, which is automatically managed by the K8s deployment.

kubectl get replicaset

Deployment manages replica set
Replicaset manages all the replicas of the pod
Pod is an abstraction of the container.

kubectl edit deployment nginx-depl

opens an auto-generated configuration with default values; we can edit it and it will re-deploy the pod for us.

kubectl describe pod {podname}

shows the state changes and event history of the pod.

kubectl exec -it {podname} -- /bin/bash
can be used to log in to the pod, like we log in to a Docker container.

All the CRUD operations on pods are done through the deployment.

K8s YAML Configuration File

Manually managing / executing all the deployments and other tasks is not easy.
We can manage this through a K8s configuration file. To execute a configuration file we run the command

kubectl apply -f nginx-deployment.yaml

Example Deployment and Service YAML file

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  # replicas, image and containerPort below are illustrative defaults
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx
          ports:
            - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  selector:
    app: nginx
  ports:
    - protocol: TCP
      port: 80
      targetPort: 8080

Ports in Service and Pod

The Service has its own port (port), and it forwards traffic to the pods on targetPort, which should match the containerPort in the Deployment's pod template.

  • to get more details of the pod use

kubectl get pod -o wide

  • to get the deployment in YAML format

kubectl get deployment nginx-deployment -o yaml > nginx-deployment-result.yaml

K8s Name Spaces

Namespaces are used to organise resources in a K8s cluster.

There are four namespaces by default (kubernetes-dashboard is an extra one shipped with minikube).

  • kubernetes-dashboard comes only with minikube.
  • kube-system contains the system processes; the master and kubectl processes live in this namespace.
  • kube-public contains publicly accessible data. It has a ConfigMap containing cluster information, which we can see using kubectl cluster-info.
  • kube-node-lease holds information about the heartbeats of nodes; it determines the availability of each node.
  • default is the namespace that contains the resources we create.

We can create a new namespace using

kubectl create namespace {namespace name}

Another way is to use a configuration YAML file, as shown below.
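
A minimal namespace manifest (the name is just an example):

apiVersion: v1
kind: Namespace
metadata:
  name: my-namespace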

Using namespaces is like grouping in programming: we can organise different types of resources by name, for instance database, app etc.

Each namespace should have its own ConfigMap and Secret.
A Volume can't be bound to a namespace.

We can add to namespace while executing YAML file

kubectl apply -f mysql-configmap.yaml --namespace={namespace name}

Another way, and the best practice, is to set the namespace inside the configuration file itself, as shown below.
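
A sketch of that, assuming a mysql-configmap.yaml similar to the one referenced above (the data key and value are illustrative):

apiVersion: v1
kind: ConfigMap
metadata:
  name: mysql-configmap
  namespace: my-namespace
data:
  db_url: mysql-service.database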

kubectx tool

The kubectx tool provides the kubens command, which lists all the namespaces.
kubens new-namespace will change the active namespace from default to new-namespace.

K8s Ingress

Official Documentation

Ingress is an API object that manages external access to the services in a cluster, typically HTTP.
Ingress may provide load balancing, SSL termination and name-based virtual hosting. It exposes HTTP and HTTPS routes from outside the cluster to services within the cluster. Traffic routing is controlled by rules defined on the Ingress resource.

Terminology

  • Edge router: A router that enforces the firewall policy for your cluster. This could be a gateway managed by a cloud provider or a physical piece of hardware.
  • Cluster network: A set of links, logical or physical, that facilitate communication within a cluster according to the Kubernetes networking model.
  • Service: A Kubernetes Service that identifies a set of Pods using label selectors. Unless mentioned otherwise, Services are assumed to have virtual IPs only routable within the cluster network.

Minimal Ingress Configuration
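
A minimal configuration along the lines of the official documentation (names such as minimal-ingress, nginx-example and test are illustrative):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: minimal-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  ingressClassName: nginx-example
  rules:
    - http:
        paths:
          - path: /testpath
            pathType: Prefix
            backend:
              service:
                name: test
                port:
                  number: 80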

Ingress Rules

  • An optional host. In this example, no host is specified, so the rule applies to all inbound HTTP traffic through the IP address specified. If a host is provided (for example, foo.bar.com), the rules apply to that host.
  • A list of paths (for example, /testpath), each of which has an associated backend defined with a service.name and a service.port.name or service.port.number. Both the host and path must match the content of an incoming request before the load balancer directs traffic to the referenced Service.
  • A backend is a combination of Service and port names as described in the Service doc or a custom resource backend by way of a CRD. HTTP (and HTTPS) requests to the Ingress that matches the host and path of the rule are sent to the listed backend.

An Ingress with no rules sends all traffic to a single default backend. The defaultBackend is conventionally a configuration option of the Ingress controller and is not specified in your Ingress resources. If none of the hosts or paths match the HTTP request in the Ingress objects, the traffic is routed to your default backend.

We can set wildcard rules:

host: "*.foo.com"

Ingress class

Ingresses can be implemented by different controllers, often with different configuration. Each Ingress should specify a class, a reference to an IngressClass resource that contains additional configuration including the name of the controller that should implement the class.

Configuring HTTPS

This can be done through the Ingress YAML file.

A Secret containing the TLS certificate and key should be configured to achieve this.
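
A sketch of the tls section that would go in the Ingress spec, with an illustrative host and secret name:

spec:
  tls:
    - hosts:
        - myapp.example.com
      secretName: myapp-tls-secret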

Helm

  • Helm is the package manager for K8s. It is used to package YAML files and distribute them.

  • A bundle of those YAML files is called a Helm chart.

Database applications, Elasticsearch and monitoring applications like Prometheus have Helm charts available.

There are public and private registries for Helm charts.

  • Template engine.
    Helm is also a template engine for YAML files. We can make the values dynamic so that the YAML files are managed through one common blueprint; concrete values are replaced by placeholders.

A values.yaml file holds the values, and an object named Values is created based on that YAML file.
There is also another option to set values: through the command line with the --set flag.
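
For example, a fragment of a template file could reference the Values object like this (the file name and keys are illustrative):

# templates/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Values.name }}
spec:
  replicas: {{ .Values.replicaCount }}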

We can leverage the templating engine in the code deploy pipeline and replace the values according to the environments.

Helm Chart Structure

mychart is the name of the chart.
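
The standard layout of a chart directory:

mychart/
  Chart.yaml
  values.yaml
  charts/
  templates/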

Chart.yaml contains the meta information about the chart: name, version, dependencies etc.

values.yaml contains the values for the template files - default values which can be overridden.

The charts folder holds the chart dependencies.

The templates folder holds the template files.

helm install <chartname>

values.yaml sample
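
For instance (the keys match the template fragment above; version is included so the --set example below makes sense):

name: my-app
replicaCount: 2
version: 1.0.0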

Ways to override the default values:

  • helm install --values=custom-values.yaml <chartname>

  • helm install --set version=2.0.0 <chartname>

Manjaro i3 Monitor Set up

manjaro i3 monitor set up script

  • to determine the output names, run xrandr
  • write a shell script and execute it: sh monitor-setup.sh
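
A sketch of what monitor-setup.sh could contain (eDP1 is the laptop panel mentioned below; the external output name HDMI1 and the resolutions are assumptions):

#!/bin/sh
# laptop panel as primary, external monitor placed to its right
xrandr --output eDP1 --primary --mode 1920x1080 --pos 0x0 \
       --output HDMI1 --mode 1920x1080 --pos 1920x0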

In this example the laptop screen is eDP1, which is set as the primary display using the --primary flag.

Inside the i3 config we can bind workspaces to specific outputs, as in the snippet below.
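
A sketch, assuming workspace variables such as $ws1 and $ws2 are already defined in the i3 config and HDMI1 is the external output:

workspace $ws1 output eDP1
workspace $ws2 output HDMI1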

Also in i3 we can configure an app to open on a specific workspace (and therefore display):

assign [class="jetbrains-phpstorm"] $ws1

Deploy BitBucket Code to AWS EC2 Instances using AWS CodeDeploy

Deploying your code from Bitbucket to EC2 instance(s) on AWS using AWS CodeDeploy is easy to set up.

  1. Create IAM Role with the relevant policies
  2. create new EC2 Instance by selecting the newly created IAM Role
  3. create new s3 bucket to push revision history
  4. Create CodeDeploy Application
  5. Create CodeDeploy Group
  6. Add Environment Variables on the Bitbucket Repository
  7. Create bitbucket pipeline and code deploy script
  8. Create hook scripts to install dependencies and manage artifacts

 

  1. Create IAM Role with the relevant policies

    Go to IAM -> Policies -> Create policy.
    Create a policy named 'CodeDeploy-EC2-Permissions' with the JSON sketched at the end of this step. You can prefix your company name, for instance
    WebdaweCodeDeploy-EC2-Permissions, if you want.

    Name the role CodeDeployRole, or prefix your company name like WebdaweCodeDeployRole.
    Select AWSCodeDeployRole, AmazonS3FullAccess and the policy you just created (CodeDeploy-EC2-Permissions).

    And the trust relationship should be the JSON sketched at the end of this step.

    You can set the region/zone according to the one which you want to use.
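
    A sketch of what the CodeDeploy-EC2-Permissions policy typically grants - S3 read access so the CodeDeploy agent on the instances can download revisions (treat the actions and the open Resource as an assumption and tighten them to your bucket):

    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Effect": "Allow",
          "Action": ["s3:Get*", "s3:List*"],
          "Resource": "*"
        }
      ]
    }

    And a sketch of the trust relationship, assuming the same role is assumed both by the EC2 instances and by the CodeDeploy service (us-east-1 is a placeholder region):

    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Effect": "Allow",
          "Principal": {
            "Service": [
              "ec2.amazonaws.com",
              "codedeploy.us-east-1.amazonaws.com"
            ]
          },
          "Action": "sts:AssumeRole"
        }
      ]
    }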

  2. Create new EC2 Instance by selecting the newly created IAM Role

    You have to create the instance(s) to which the code is to be deployed, with the relevant security groups and VPC (which you may have already set up).
    You need to make sure the IAM role attached to the instance is the one which you have just created.

  3. Create new S3 Bucket to push revision history

    You can create a new bucket, or reuse a bucket which you created already. It will work because we have given full access permission to S3 in the IAM role.

  4. Create CodeDeploy Application

    You have to create a new CodeDeploy Application to deploy to EC2.

    you can name it 'CodeDeployApplication' or prefix it with your company name

  5. Create CodeDeploy Group

    Create a new deployment group inside the CodeDeployApplication and name it CodeDeployGroup. For a real project it is better to give it the same name as the code branch (for example master / staging / testing).

    Select the service role which we have created, CodeDeployRole.
    The deployment type should be In place.

    Under Environment configuration, choose Amazon EC2 instances and add the tag(s) of the instance(s) to which the code is to be deployed.
    The deployment settings should be CodeDeployDefault.OneAtATime.

  6. Add Environment Variables on the Bitbucket Repository

    You can set the following variables either as account variables or as repository variables, according to the way your repository is set up. If there is only one AWS account you can set them at the account level.

    You have to add a repository variable APPLICATION_NAME containing the CodeDeploy application name; in our case it will be CodeDeployApplication.
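
    The exact variable names depend on the deploy script used in the next step; a typical set (an assumption, adjust to your own script) looks like:

      AWS_ACCESS_KEY_ID
      AWS_SECRET_ACCESS_KEY
      AWS_DEFAULT_REGION
      S3_BUCKET
      APPLICATION_NAME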

  7. Create bitbucket pipeline and code deploy script

    Create the codedeploy_deploy.py script, which is an edited version of
    this python script.
    What I have done here is to provide an option to pass the deployment group to which the code should be deployed.

     

    Now we have to create bitbucket-pipelines.yml accordingly, as sketched below.

    Here what we are doing is:

    adding branch-wise logic for testing and deployment, so that we can set variables, files etc. according to the branch and environment.
    For instance, if you are deploying to staging, the environment variables will be different.
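
    A rough sketch of what bitbucket-pipelines.yml could look like (the image, branch names and script steps are assumptions; packaging and upload details depend on the codedeploy_deploy.py script you actually use):

    image: python:3.8

    pipelines:
      branches:
        master:
          - step:
              name: Deploy to production
              deployment: production
              script:
                - pip install boto3
                - python codedeploy_deploy.py
        staging:
          - step:
              name: Deploy to staging
              deployment: staging
              script:
                - pip install boto3
                - python codedeploy_deploy.py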

    you can see the revision history inside the CodeDeploy Application in AWS Console.

  8. Create hook scripts to install dependencies and manage artifacts

    The official AWS documentation explains the lifecycle events as below:
    ApplicationStop – This deployment lifecycle event occurs even before the application revision is downloaded. You can specify scripts for this event to gracefully stop the application or remove currently installed packages in preparation of a deployment. The AppSpec file and scripts used for this deployment lifecycle event are from the previous successfully deployed application revision.

    Note

    An AppSpec file does not exist on an instance before you deploy to it. For this reason, the ApplicationStop hook does not run the first time you deploy to the instance. You can use the ApplicationStop hook the second time you deploy to an instance.

    To determine the location of the last successfully deployed application revision, the AWS CodeDeploy agent looks up the location listed in the deployment-group-id_last_successful_install file. This file is located in:

    /opt/codedeploy-agent/deployment-root/deployment-instructions folder on Amazon Linux, Ubuntu Server, and RHEL Amazon EC2 instances.

    C:\ProgramData\Amazon\CodeDeploy\deployment-instructions folder on Windows Server Amazon EC2 instances.

    To troubleshoot a deployment that fails during the ApplicationStop deployment lifecycle event, see Troubleshooting failed ApplicationStop, BeforeBlockTraffic, and AfterBlockTraffic deployment lifecycle events.

    • DownloadBundle – During this deployment lifecycle event, the AWS CodeDeploy agent copies the application revision files to a temporary location. This event is reserved for the AWS CodeDeploy agent and cannot be used to run scripts. To troubleshoot a deployment that fails during the DownloadBundle deployment lifecycle event, see Troubleshooting a failed DownloadBundle deployment lifecycle event with "UnknownError: not opened for reading".
    • BeforeInstall – You can use this deployment lifecycle event for preinstall tasks, such as decrypting files and creating a backup of the current version.
    • Install – During this deployment lifecycle event, the AWS CodeDeploy agent copies the revision files from the temporary location to the final destination folder. This event is reserved for the AWS CodeDeploy agent and cannot be used to run scripts.
    • AfterInstall – You can use this deployment lifecycle event for tasks such as configuring your application or changing file permissions.
    • ApplicationStart – You typically use this deployment lifecycle event to restart services that were stopped during ApplicationStop.
    • ValidateService – This is the last deployment lifecycle event. It is used to verify the deployment was completed successfully.
    • BeforeBlockTraffic – You can use this deployment lifecycle event to run tasks on instances before they are deregistered from a load balancer. To troubleshoot a deployment that fails during the BeforeBlockTraffic deployment lifecycle event, see Troubleshooting failed ApplicationStop, BeforeBlockTraffic, and AfterBlockTraffic deployment lifecycle events.
    • BlockTraffic – During this deployment lifecycle event, internet traffic is blocked from accessing instances that are currently serving traffic. This event is reserved for the AWS CodeDeploy agent and cannot be used to run scripts.
    • AfterBlockTraffic – You can use this deployment lifecycle event to run tasks on instances after they are deregistered from a load balancer. To troubleshoot a deployment that fails during the AfterBlockTraffic deployment lifecycle event, see Troubleshooting failed ApplicationStop, BeforeBlockTraffic, and AfterBlockTraffic deployment lifecycle events.
    • BeforeAllowTraffic – You can use this deployment lifecycle event to run tasks on instances before they are registered with a load balancer.
    • AllowTraffic – During this deployment lifecycle event, internet traffic is allowed to access instances after a deployment. This event is reserved for the AWS CodeDeploy agent and cannot be used to run scripts.
    • AfterAllowTraffic – You can use this deployment lifecycle event to run tasks on instances after they are registered with a load balancer.

    Read more on AWS.
    So how do we hook into these lifecycle events?
    You have to create a file called appspec.yml in the repository, which will be the inventory for these scripts; a sketch follows.
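
    A sketch of what appspec.yml could look like (the hook script names under scripts/ are illustrative; the destination matches the example described below):

    version: 0.0
    os: linux
    files:
      - source: /
        destination: /var/www/html/your-site-name
    hooks:
      BeforeInstall:
        - location: scripts/before_install.sh
          timeout: 300
          runas: root
      AfterInstall:
        - location: scripts/after_install.sh
          timeout: 300
          runas: root
      ApplicationStart:
        - location: scripts/application_start.sh
          timeout: 300
          runas: root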


    So you have to create the relevant scripts in the scripts folder according to this example.
    The files section will copy the artifact to the destination folder, which is /var/www/html/your-site-name according to the example appspec.