Anil Paul

Web Developer – Designer

Spinnaker Basics

Implementing a CI/CD pipeline using Cloud Build (Google's serverless CI/CD platform) and Spinnaker, an open-source continuous delivery platform. This setup allows us to rapidly iterate on and test the code base. The high-level steps for the pipeline are as follows.

  • Developer changes code
  • Push to Repository with Tag

Cloud Build detects the new Tag -> Build Docker Image -> Run unit Tests -> Push the Docker Image to Artifact Registry.

Spinnaker detects the new image -> Deploy to Canary -> Functional Tests in the Canary Deployment -> Manual Approval -> Deploy to Production

Cloud Build Triggering

Create a Cloud Build trigger that watches for commits with a prefixed git tag.

We can configure it to watch a specific branch or pull request of the repository; the trigger then builds the Docker image and pushes it to Artifact Registry.
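As a rough illustration, a cloudbuild.yaml for this flow could look like the sketch below; the Artifact Registry path (us-central1-docker.pkg.dev/$PROJECT_ID/my-repo/my-app) and the test command are assumptions, not taken from the actual pipeline.

steps:
  # Build the Docker image, tagged with the git tag that triggered the build
  - name: 'gcr.io/cloud-builders/docker'
    args: ['build', '-t', 'us-central1-docker.pkg.dev/$PROJECT_ID/my-repo/my-app:$TAG_NAME', '.']

  # Run the unit tests inside the freshly built image (test command is a placeholder)
  - name: 'us-central1-docker.pkg.dev/$PROJECT_ID/my-repo/my-app:$TAG_NAME'
    entrypoint: 'sh'
    args: ['-c', 'make test']

# Push the built image to Artifact Registry so Spinnaker can detect it
images:
  - 'us-central1-docker.pkg.dev/$PROJECT_ID/my-repo/my-app:$TAG_NAME'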

Spinnaker Triggering

Once Spinnaker detects the new image in Artifact Registry, it deploys to a scaled-down staging environment for integration testing to minimize failures. Once those tests pass, the deployment to production happens.

Azure Laravel CI Pipeline

CI YAML for Laravel

First, specify the branch that triggers the build:

trigger:
  - master

Pool and PHP version setup:

pool:
  vmImage: ubuntu-latest

variables:
  phpVersion: 7.4
Set the PHP version, then run composer install:

  - script: |
      sudo update-alternatives --set php /usr/bin/php$(phpVersion)
      sudo update-alternatives --set phar /usr/bin/phar$(phpVersion)
      sudo update-alternatives --set phpdbg /usr/bin/phpdbg$(phpVersion)
      sudo update-alternatives --set php-cgi /usr/bin/php-cgi$(phpVersion)
      sudo update-alternatives --set phar.phar /usr/bin/phar.phar$(phpVersion)
      php -version
    displayName: 'Use PHP version $(phpVersion)'

  - script: composer install --no-interaction --prefer-dist
    displayName: 'composer install'
Run tests:

  - script: php artisan test
    displayName: 'php artisan test'
Copy the .env file securely stored in Azure DevOps:

Please refer to the Azure documentation here.

  - task: DownloadSecureFile@1
    displayName: 'Download .env file from secure file library'
    inputs:
      secureFile: .env
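The download step only places the file in a temporary agent directory; a follow-up step is typically needed to copy it into the project root. A sketch, assuming the task above is given the reference name envFile (name: envFile) so its output variable can be used:

  # copy the downloaded secure file into the working directory as .env
  - script: cp $(envFile.secureFilePath) $(System.DefaultWorkingDirectory)/.env
    displayName: 'Copy .env into project root'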
Run npm install:

  - script: npm install
    displayName: 'npm install'
Archive and publish artifacts:
  - task: ArchiveFiles@1
    displayName: 'Archive files'
    inputs:
      rootFolder: '$(System.DefaultWorkingDirectory)'
      includeRootFolder: false
      archiveType: zip

  - task: PublishBuildArtifacts@1
    displayName: 'Publish Artifact: drop'

All together

# PHP
# Test and package your PHP project.
# Add steps that run tests, save build artifacts, deploy, and more:
# https://docs.microsoft.com/azure/devops/pipelines/languages/php

trigger:
  - master

pool:
  vmImage: ubuntu-latest

variables:
  phpVersion: 7.4

steps:
  - script: |
      sudo update-alternatives --set php /usr/bin/php$(phpVersion)
      sudo update-alternatives --set phar /usr/bin/phar$(phpVersion)
      sudo update-alternatives --set phpdbg /usr/bin/phpdbg$(phpVersion)
      sudo update-alternatives --set php-cgi /usr/bin/php-cgi$(phpVersion)
      sudo update-alternatives --set phar.phar /usr/bin/phar.phar$(phpVersion)
      php -version
    displayName: 'Use PHP version $(phpVersion)'

  - script: composer install --no-interaction --prefer-dist
    displayName: 'composer install'

  - script: php artisan test
    displayName: 'php artisan test'

  - task: DownloadSecureFile@1
    displayName: 'Download .env file from secure file library'
    inputs:
      secureFile: .env

  - script: sudo npm install -g npm@latest && npm install
    displayName: 'npm install'

  - task: ArchiveFiles@1
    displayName: 'Archive files'
    inputs:
      rootFolder: '$(System.DefaultWorkingDirectory)'
      includeRootFolder: false
      archiveType: zip

  - task: PublishBuildArtifacts@1
    displayName: 'Publish Artifact: drop'

Laravel Websockets Docker Development

Docker Image

webdawe/php-fpm:7.4

Laravel Websockets

LARAVEL_WEBSOCKETS_PORT=6004

Nginx will route WebSocket requests on port 80 to 127.0.0.1:6004.

Docker compose

version: '3'
services:
  my-app:
    image: webdawe/php-fpm:7.4
    hostname: my-app
    container_name: my-app
    dns: 8.8.8.8
    environment:
      CONTAINER_ROLE: app
      APP_ENV: local

    volumes:
      - ./:/var/www/html

    networks:
      - my-network
    ports:
      - 80:80
    tty: true

networks:
  my-network:
    driver: bridge

Kubernetes Basics

k8s Basic Components

  • pod - our application's Docker container wrapped in a K8s layer; for instance, there will be multiple application pods. Each pod gets its own private internal IP address.
  • service - a static IP address attached to each pod so that it can be reached and can communicate with other services, for instance a database, session store or storage.
  • ingress - an external-facing service with a public IP that routes outside traffic into the cluster.
  • ConfigMap - external configuration of the app.
  • Secret - base64-encoded configuration of the app (credentials, certificates, etc.).
  • Volumes - external, local or remote cloud storage mounted to keep persistent data, for instance a database. K8s does not manage data persistence itself.

A service/ingress has a permanent IP and also acts as a load balancer.
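To make ConfigMap and Secret concrete, here is a minimal sketch; the names and values are illustrative, not from a real app:

apiVersion: v1
kind: ConfigMap
metadata:
  name: my-app-config
data:
  db_host: mysql-service        # plain-text configuration value
---
apiVersion: v1
kind: Secret
metadata:
  name: my-app-secret
type: Opaque
data:
  db_password: cGFzc3dvcmQ=     # base64-encoded value ("password")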

K8s Deployment

A deployment is for stateless apps.
Managing/orchestrating the pods is done through deployments.
In most cases we work with deployments, not directly with the pods.

K8s StatefulSet

Databases should be created using a StatefulSet because they have state.
This is not easy to manage.

Best option is to store database outside K8s cluster.
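For reference, a minimal StatefulSet sketch could look like the following; the image, storage size and names are assumptions for the example:

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mysql
spec:
  serviceName: mysql            # headless service that gives the pods stable network identities
  replicas: 1
  selector:
    matchLabels:
      app: mysql
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
        - name: mysql
          image: mysql:8.0
          env:
            - name: MYSQL_ROOT_PASSWORD
              value: example    # hard-coded only for the sketch; use a Secret in practice
          ports:
            - containerPort: 3306
          volumeMounts:
            - name: data
              mountPath: /var/lib/mysql
  volumeClaimTemplates:         # each replica gets its own PersistentVolumeClaim
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 1Gi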

K8s Architecture

Master Node

The master node works differently from the worker nodes. The master makes sure the communication between the pods and everything else works.
Four processes run on every master node; they control the cluster state and the worker nodes:

  • API Server - the cluster gateway.
    • we interact with K8s through this API Server using kubectl, the K8s dashboard, etc.
    • it is the gatekeeper for authentication - whenever we schedule new pods, deploy new applications, create new services, etc., the request goes through the API Server.
  • Scheduler
    • for instance, when the API Server gets a request to schedule a new pod, it validates the request and forwards it to the Scheduler.
    • the Scheduler decides intelligently on which node the new pod should be placed.
  • Controller Manager
    • detects state changes, for instance crashing pod(s), and recovers the cluster state as soon as possible by sending requests to the Scheduler.
  • etcd
    • a key-value store of the cluster state.
    • cluster changes get updated in this key-value store.
    • the Scheduler and Controller Manager work on the basis of etcd data.
    • note: etcd does not store application config settings.

Worker Node

Each worker node can have multiple pods on it.

Every node must have three processes:

  • Container runtime - every node needs a container runtime; in most cases it is Docker.
  • kubelet - interacts with the node and the container runtime; kubelet starts the pod with its container inside the node.
  • kube-proxy (k-proxy) - forwards requests from services to pod(s); it must be installed on every node.

Communication between the node(s) goes through services, which are basically load balancers.

Cluster Set Up

Master nodes need fewer resources than worker nodes; the worker nodes, which run the applications, need more resources.
In production there are usually 2 master nodes and 3 worker nodes.
To add a new master/worker node:

  • get a new bare server.
  • install all the master/worker processes.
  • add it to the K8s cluster.

Master and worker nodes can be increased indefinitely according to need.

MiniKube & Kubectl

minikube

minikube is a one-node K8s cluster where the master processes and worker processes both run on one machine, with the Docker container runtime pre-installed. It runs through VirtualBox, Hyper-V or any other hypervisor, and is used for testing purposes.

kubectl

kubectl is a command line tool for interacting with a K8s cluster, such as minikube.

  • it is the most powerful client for communicating with the K8s cluster through the API Server master process.
  • worker processes can be controlled through the API Server using kubectl, because the API Server in the master processes is the only entry point.

Note: kubectl is not only for the minikube cluster; it is used to communicate with cloud clusters (production) as well.

Install minikube

Installation

Run Minikube

If Docker is installed, it is better to use the Docker driver, as below.
minikube start --driver docker

To list the nodes, run kubectl get nodes. We can also check the minikube status with
minikube status

kubectl basic commands

kubectl get pod
kubectl get services
kubectl create deployment nginx-depl --image=nginx
kubectl get deployment
kubectl get replicaset
kubectl edit deployment nginx-depl
kubectl delete deployment nginx-depl

debugging

kubectl logs {pod-name}
kubectl exec -it {pod-name} -- bin/bash

Create Pods

  • pod is the smallest unit inside the K8s cluster.
  • deployment is the abstraction layer over pods; basically, we create pods using a deployment.

For instance, if we need to deploy an image to the K8s cluster, we use the following syntax.

kubectl create deployment {deployment name} --image={image}

deployment name - name of the deployment
image - docker container image

PS C:\WINDOWS\system32> kubectl create deployment nginx-depl --image nginx
deployment.apps/nginx-depl created
kubectl get deployment
NAME         READY   UP-TO-DATE   AVAILABLE   AGE
nginx-depl   0/1     1            0           10s
kubectl get pod
NAME                          READY   STATUS              RESTARTS   AGE
nginx-depl-5c8bf76b5b-spz79   0/1     ContainerCreating   0          29s

Behind the scenes there is another layer between the deployment and the pod, automatically managed by the K8s deployment, called a replicaset.

kubectl get replicaset

Deployment manages replica set
Replicaset manages all the replicas of the pod
Pod is an abstraction of the container.

kubectl edit deployment nginx-depl

will give an auto-generated configuration with default values. We can edit it and K8s will redeploy the pod according to the changes.

kubectl describe pod {podname}

will give the status history of the pod

kubectl exec -it {podname} -- bin/bash
can be used to log in to the pod, just like we log in to a docker container.

All the CRUD operations on pods are done through the deployment.

K8s YAML Configuration File

Manually managing/executing all the deployments and other tasks is not easy.
We can manage this through a K8s configuration file. To execute a configuration file we run the command

kubectl apply -f nginx-deployment.yaml

Example Deployment YAML file

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.21
        ports:
        - containerPort: 8080


Once applied, we can edit the file and run the same command to update the pod according to the changes. _K8s_ will figure out what changed and apply it.

There are mainly three parts in the configuration YAML file. 

 - _**metadata**_ of the components : name, kind

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx

  - _**spec**_ (specification) - attributes here are specific to the kind of the component.

  - _**status**_ - automatically generated by K8s. K8s maintains the status and manages the cluster according to it; etcd on the master node stores the status.

Best practice is to store the YAML file with the code, versioned in git.

The connection between components is established using labels and selectors: as you can see, metadata has labels and spec has selectors.

> Similar to the Deployment kind there is a Service kind as well, and the Service spec has a selector that matches the label in the Deployment. This is how K8s connects pods and services. The Service must also know which port it should register for the pod.

_Example Service YAML file_

apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  selector:
    app: nginx
  ports:
    - protocol: TCP
      port: 80
      targetPort: 8080

Ports in Service and Pod

The Service has its own port (port: 80 above), and targetPort must match the containerPort of the Deployment's pods (8080 in the example).

  • to get more details of the pod use

kubectl get pod -o wide

  • to get the deployment in YAML format

kubectl get deployment nginx-deployment -o yaml > nginx-deployment-result.yaml

K8s Name Spaces

Namespaces are used to organise resources in a K8s cluster.

There are four namespaces by default (plus kubernetes-dashboard when running minikube).

  • kubernetes-dashboard comes only with minikube.
  • kube-system - system processes; master and kubectl processes live in this namespace.
  • kube-public contains publicly accessible data. It has a ConfigMap with cluster information, which we can see using kubectl cluster-info.
  • kube-node-lease holds information about the heartbeats of nodes; it determines the availability of each node.
  • default - the namespace for the resources we create.

We can create new namespace using

kubectl create namespace {namespace name}

Another way is to use a configuration YAML file.
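For instance, a minimal Namespace definition could look like this (the name my-namespace is just a placeholder):

apiVersion: v1
kind: Namespace
metadata:
  name: my-namespace

Applying it with kubectl apply -f creates the namespace just like the imperative command above.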

Using namespaces is like organising code: we can group different kinds of resources by name, for instance database, app, etc.

Each namespace should have its own ConfigMap and Secret.
A Volume cannot be bound to a namespace; volumes are cluster-wide.

We can assign the namespace while executing the YAML file

kubectl apply -f mysql-configmap.yaml --namespace={namespace name}

Another way, and the best practice, is to set it inside the configuration file itself, like

apiVersion: v1
kind: ConfigMap
metadata:
  name: mysql-configmap
  namespace: {namespace}
data:
  db_url: mysql-service.database

kubectx tool

kubectx ships with the kubens command, which lists all the namespaces.
kubens new-namespace will change the active namespace from default to new-namespace.

K8s Ingress

Official Documentation

Ingress is an API object that manages external access to the services in a cluster, typically HTTP.
Ingress may provide load balancing, SSL termination and name-based virtual hosting. It exposes HTTP and HTTPS routes from outside the cluster to services within the cluster, and traffic routing is controlled by rules defined on the Ingress resource.

Terminology

  • Edge router: A router that enforces the firewall policy for your cluster. This could be a gateway managed by a cloud provider or a physical piece of hardware.
  • Cluster network: A set of links, logical or physical, that facilitate communication within a cluster according to the Kubernetes networking model.
  • Service: A Kubernetes Service that identifies a set of Pods using label selectors. Unless mentioned otherwise, Services are assumed to have virtual IPs only routable within the cluster network.

Minimal Ingress Configuration

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: minimal-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - http:
      paths:
      - path: /testpath
        pathType: Prefix
        backend:
          service:
            name: test
            port:
              number: 80

Ingress Rules

  • An optional host. In this example, no host is specified, so the rule applies to all inbound HTTP traffic through the IP address specified. If a host is provided (for example, foo.bar.com), the rules apply to that host.
  • A list of paths (for example, /testpath), each of which has an associated backend defined with a service.name and a service.port.name or service.port.number. Both the host and path must match the content of an incoming request before the load balancer directs traffic to the referenced Service.
  • A backend is a combination of Service and port names as described in the Service doc or a custom resource backend by way of a CRD. HTTP (and HTTPS) requests to the Ingress that matches the host and path of the rule are sent to the listed backend.

An Ingress with no rules sends all traffic to a single default backend. The _defaultBackend_ is conventionally a configuration option of the Ingress controller and is not specified in your Ingress resources. If none of the hosts or paths match the HTTP request in the Ingress objects, the traffic is routed to your default backend.

We can set Wild Card Rules

host: "*.foo.com"
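For example, a wildcard rule inside the Ingress spec could look like this (the service name and port are placeholders):

spec:
  rules:
  - host: "*.foo.com"   # matches bar.foo.com, baz.foo.com, etc., but not foo.com itself
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: test
            port:
              number: 80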

Ingress class

Ingresses can be implemented by different controllers, often with different configuration. Each Ingress should specify a class, a reference to an IngressClass resource that contains additional configuration including the name of the controller that should implement the class.

apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  name: external-lb
spec:
  controller: example.com/ingress-controller
  parameters:
    apiGroup: k8s.example.com
    kind: IngressParameters
    name: external-lb

Configuring HTTPS

This can be done through the Ingress YAML file.

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: minimal-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  tls:
  - hosts:
    - myapp.com
    secretName: myapp-secret-tls
  rules:

A secret should be configured to achieve this.

apiVersion: v1
kind: Secret
metadata:
  name: myapp-secret-tls
  namespace: default
data:
  tls.crt: <base64-encoded cert>
  tls.key: <base64-encoded key>
type: kubernetes.io/tls

Helm

  • Helm is the package manager for K8s. It is used to package YAML files and distribute them.

  • A bundle of those YAML files is called a Helm chart.

Database applications, Elasticsearch, and monitoring applications like Prometheus have Helm charts available.

There are public and private registries for Helm charts.

  • Template engine.
    Helm is also a template engine for YAML files. We can make values dynamic so that the YAML files are easily managed from a common blueprint; the values are replaced by placeholders.
apiVersion: v1
kind: Pod
metadata:
    name: {{ .Values.name }}
spec:
    containers:
    - name: {{ .Values.container.name }}
      image: {{ .Values.container.image }}
      ports:
      - containerPort: {{ .Values.container.port }}

A values.yaml file will hold the values, and an object named Values is created from that YAML file.
There is also another option to set values: through the command line with the --set flag.
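For instance, a values.yaml matching the template above could look like this (the values themselves are illustrative):

name: my-pod
container:
  name: my-app
  image: nginx:1.21
  port: 80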

We can leverage the templating engine in the deployment pipeline and replace the values according to the environment.

Helm Chart Structure

mychart/
    Chart.yaml
    values.yaml
    charts/
    templates/
    ...

mychart is the name of the chart.

Chart.yaml contains the meta info about the chart: name, version, dependencies, etc.

values.yaml holds the values for the template files; these are default values which can be overridden.

The charts folder holds the other charts this chart depends on.

The templates folder holds the template files.

helm install <chartname>

values.yaml sample

imageName: myapp
port: 8080
version: 1.0.0

Ways to override the default values:

  • helm install --values=custom-values.yaml <chartname>

  • helm install --set version=2.0.0 <chartname>
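A custom-values.yaml only needs the keys being overridden; Helm merges it with the chart's default values.yaml. A sketch against the sample above:

# custom-values.yaml - only the overridden key is needed;
# imageName and port keep their defaults from values.yaml
version: 2.0.0

Running helm install --values=custom-values.yaml <chartname> then results in imageName: myapp, port: 8080 and version: 2.0.0.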

Manjaro i3 Monitor Set up

manjaro i3 monitor set up script

  • run xrandr to determine the output names.
  • write a shell script and execute it: sh monitor-setup.sh
#!/bin/sh
/usr/bin/xrandr --output eDP1 --mode 1920x1080 --primary
/usr/bin/xrandr --output DP1 --mode 1920x1080 --right-of eDP1
/usr/bin/xrandr --output HDMI2 --mode 1920x1080 --right-of DP1

In this example the laptop screen is eDP1, which is set as the primary display using the --primary flag.

Inside the i3 config we can assign workspaces to outputs:

#workspace out put to monitors 
workspace $ws1 output DP1
workspace $ws4 output DP1
workspace $ws5 output DP1
workspace $ws2 output HDMI2
workspace $ws3 output HDMI2

Also, in i3 we can configure an app to load on a specific workspace/display:

assign [class="jetbrains-phpstorm"] $ws1

Deploy BitBucket Code to AWS EC2 Instances using AWS CodeDeploy

Deploying your code from Bitbucket to EC2 instance(s) on AWS using AWS CodeDeploy is easy to set up.

  1. Create IAM Role with the relevant policies
  2. create new EC2 Instance by selecting the newly created IAM Role
  3. create new s3 bucket to push revision history
  4. Create CodeDeploy Application
  5. Create CodeDeploy Group
  6. Add Environment Variables on the Bitbucket Repository
  7. Create bitbucket pipeline and code deploy script
  8. Create hook scripts to install dependencies and manage artifacts

 

  1. Create IAM Role with the relevant policies

    Go to IAM -> Policies -> Create policy.
    Create a policy named 'CodeDeploy-EC2-Permissions' with the following JSON. You can prefix your company name, for instance
    WebdaweCodeDeploy-EC2-Permissions, if you want.

    {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Action": [
                    "s3:Get*",
                    "s3:List*"
                ],
                "Effect": "Allow",
                "Resource": "*"
            }
        ]
    }

    Name the role CodeDeployRole, or prefix your company name, like WebdaweCodeDeployRole.
    Attach AWSCodeDeployRole, AmazonS3FullAccess and the policy you just created (CodeDeploy-EC2-Permissions).

    The trust relationship should be the following JSON:

    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Effect": "Allow",
          "Principal": {
            "Service": [
              "codedeploy.amazonaws.com",
              "ec2.amazonaws.com",
              "codedeploy.ap-southeast-2.amazonaws.com"
            ]
          },
          "Action": "sts:AssumeRole"
        }
      ]
    }

    Adjust the region (ap-southeast-2 above) to the one you want to use.

  2. create new EC2 Instance by selecting the newly created IAM Role

    You have to create the instance(s) to which the code is to be deployed, with the relevant security groups and VPC (which you may have already set up).
    Make sure the IAM role attached is the one you just created.

  3. create new S3 Bucket to push revision history

    You can create a new bucket or reuse one you created already. It will work because the IAM role grants full access to S3.

  4. Create CodeDeploy Application

    You have to create a new CodeDeploy Application to deploy to EC2.

    you can name it 'CodeDeployApplication' or prefix it with your company name

  5. Create CodeDeploy Group

    Create a new deployment group inside CodeDeployApplication and name it CodeDeployGroup. For a real project it is better to use the same name as the code branch (for example master / staging / testing).

    Select the service role we created, CodeDeployRole.
    The deployment type should be In-place.

    For Environment configuration, choose Amazon EC2 instances and add the tag(s) of the instance(s) to which the code is to be deployed.
    Deployment settings should be CodeDeployDefault.OneAtATime.

  6. Add Environment Variables on the Bitbucket Repository

    You can set the following variables either as account variables or repository variables, depending on how your repository is set up. If there is only one AWS account, you can set them at the account level.

    You have to add a repository variable APPLICATION_NAME containing the CodeDeploy application name; in our case it will be CodeDeployApplication. The deploy script below also expects S3_BUCKET, DEPLOYMENT_CONFIG and the usual AWS credential variables read by boto3 (AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, AWS_DEFAULT_REGION).

  7. Create bitbucket pipeline and code deploy script

    Create the codedeploy_deploy.py script, which is an edited version of
    this Python script.
    What I have done here is provide an option to pass the deployment group as a command-line argument.
    """
    A BitBucket Builds template for deploying an application revision to AWS CodeDeploy
    deployment branch should be passed as argument example call - python codedeploy_deploy.py master
    """
    from __future__ import print_function
    import os
    import sys
    from time import strftime, sleep
    import boto3
    from botocore.exceptions import ClientError
    
    DEPLOYMENT_BRANCH = sys.argv[1]
    VERSION_LABEL = strftime("%Y%m%d%H%M%S")
    BUCKET_KEY = os.getenv('APPLICATION_NAME') + '/' + VERSION_LABEL + \
        '-app-bitbucket_builds.zip'
    
    def upload_to_s3(artifact):
        """
        Uploads an artifact to Amazon S3
        """
        
        print("Deployment Group:" + str(DEPLOYMENT_BRANCH))
        
        try:
            client = boto3.client('s3')
        except ClientError as err:
            print("Failed to create boto3 client.\n" + str(err))
            return False
        try:
            client.put_object(
                Body=open(artifact, 'rb'),
                Bucket=os.getenv('S3_BUCKET'),
                Key=BUCKET_KEY
            )
        except ClientError as err:
            print("Failed to upload artifact to S3.\n" + str(err))
            return False
        except IOError as err:
            print("Failed to access artifact.zip in this directory.\n" + str(err))
            return False
        return True
    
    def deploy_new_revision():
        """
        Deploy a new application revision to AWS CodeDeploy Deployment Group
        """
        try:
            client = boto3.client('codedeploy')
        except ClientError as err:
            print("Failed to create boto3 client.\n" + str(err))
            return False
    
        try:
            response = client.create_deployment(
                applicationName=str(os.getenv('APPLICATION_NAME')),
                deploymentGroupName=str(DEPLOYMENT_BRANCH),
                revision={
                    'revisionType': 'S3',
                    's3Location': {
                        'bucket': os.getenv('S3_BUCKET'),
                        'key': BUCKET_KEY,
                        'bundleType': 'zip'
                    }
                },
                deploymentConfigName=str(os.getenv('DEPLOYMENT_CONFIG')),
                description='New deployment from BitBucket',
                ignoreApplicationStopFailures=True
            )
        except ClientError as err:
            print("Failed to deploy application revision.\n" + str(err))
            return False
    
        """
        Wait for deployment to complete
        """
        while 1:
            try:
                deploymentResponse = client.get_deployment(
                    deploymentId=str(response['deploymentId'])
                )
                deploymentStatus=deploymentResponse['deploymentInfo']['status']
                if deploymentStatus == 'Succeeded':
                    print ("Deployment Succeeded")
                    return True
                elif (deploymentStatus == 'Failed') or (deploymentStatus == 'Stopped') :
                    print (deploymentStatus)
                    print ("Deployment Failed")
                    return False
                elif (deploymentStatus == 'InProgress') or (deploymentStatus == 'Queued') or (deploymentStatus == 'Created'):
                    sleep(15)  # wait before polling the deployment status again
                    continue
            except ClientError as err:
                print("Failed to deploy application revision.\n" + str(err))
                return False
        return True
    
    def main():
        if not upload_to_s3('/tmp/artifact.zip'):
            sys.exit(1)
        if not deploy_new_revision():
            sys.exit(1)
    
    if __name__ == "__main__":
        main()
    

     

    Now we have to create bitbucket-pipelines.yml accordingly.

    image: python:3.5.1
    
    pipelines:
      branches:
        master:
          - step:
              name: Test App
              script:
                - echo "Testing"
          - step:
              name: Deploy To Production
              trigger: manual
              script:
                - apt-get update
                - apt-get install -y zip
                - echo "install prerequisites"
                - pip install boto3==1.3.0
                - echo "Zip Artifacts.."
                - zip -r /tmp/artifact.zip *
                - python codedeploy_deploy.py CodeDeployGroup

    Here what we are doing is:

    adding branch-wise logic for testing and deployment, so that we can set variables, files, etc. according to the branch and environment (see the sketch after this step).
    For instance, if you are deploying to staging, the environment variables will be different.

    you can see the revision history inside the CodeDeploy Application in AWS Console.
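    As a sketch (the branch and deployment group names are assumptions), a staging entry could be added under the existing pipelines -> branches section, reusing the same script with a different deployment group:

        staging:
          - step:
              name: Deploy To Staging
              script:
                - apt-get update
                - apt-get install -y zip
                - pip install boto3==1.3.0
                - zip -r /tmp/artifact.zip *
                - python codedeploy_deploy.py StagingCodeDeployGroup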

  8. Create hook scripts to install dependencies and manage artifacts

    The official AWS documentation explains the deployment lifecycle events as follows:
    ApplicationStop – This deployment lifecycle event occurs even before the application revision is downloaded. You can specify scripts for this event to gracefully stop the application or remove currently installed packages in preparation of a deployment. The AppSpec file and scripts used for this deployment lifecycle event are from the previous successfully deployed application revision.

    Note

    An AppSpec file does not exist on an instance before you deploy to it. For this reason, the ApplicationStop hook does not run the first time you deploy to the instance. You can use the ApplicationStop hook the second time you deploy to an instance.

    To determine the location of the last successfully deployed application revision, the AWS CodeDeploy agent looks up the location listed in the deployment-group-id_last_successful_install file. This file is located in:

    /opt/codedeploy-agent/deployment-root/deployment-instructions folder on Amazon Linux, Ubuntu Server, and RHEL Amazon EC2 instances.

    C:\ProgramData\Amazon\CodeDeploy\deployment-instructions folder on Windows Server Amazon EC2 instances.

    To troubleshoot a deployment that fails during the ApplicationStop deployment lifecycle event, see Troubleshooting failed ApplicationStop, BeforeBlockTraffic, and AfterBlockTraffic deployment lifecycle events.

    DownloadBundle – During this deployment lifecycle event, the AWS CodeDeploy agent copies the application revision files to a temporary location. This event is reserved for the AWS CodeDeploy agent and cannot be used to run scripts. To troubleshoot a deployment that fails during the DownloadBundle deployment lifecycle event, see Troubleshooting a failed DownloadBundle deployment lifecycle event with "UnknownError: not opened for reading".
    BeforeInstall – You can use this deployment lifecycle event for preinstall tasks, such as decrypting files and creating a backup of the current version.
    Install – During this deployment lifecycle event, the AWS CodeDeploy agent copies the revision files from the temporary location to the final destination folder. This event is reserved for the AWS CodeDeploy agent and cannot be used to run scripts.
    AfterInstall – You can use this deployment lifecycle event for tasks such as configuring your application or changing file permissions.
    ApplicationStart – You typically use this deployment lifecycle event to restart services that were stopped during ApplicationStop.
    ValidateService – This is the last deployment lifecycle event. It is used to verify the deployment was completed successfully.
    BeforeBlockTraffic – You can use this deployment lifecycle event to run tasks on instances before they are deregistered from a load balancer. To troubleshoot a deployment that fails during the BeforeBlockTraffic deployment lifecycle event, see Troubleshooting failed ApplicationStop, BeforeBlockTraffic, and AfterBlockTraffic deployment lifecycle events.
    BlockTraffic – During this deployment lifecycle event, internet traffic is blocked from accessing instances that are currently serving traffic. This event is reserved for the AWS CodeDeploy agent and cannot be used to run scripts.
    AfterBlockTraffic – You can use this deployment lifecycle event to run tasks on instances after they are deregistered from a load balancer. To troubleshoot a deployment that fails during the AfterBlockTraffic deployment lifecycle event, see Troubleshooting failed ApplicationStop, BeforeBlockTraffic, and AfterBlockTraffic deployment lifecycle events.
    BeforeAllowTraffic – You can use this deployment lifecycle event to run tasks on instances before they are registered with a load balancer.
    AllowTraffic – During this deployment lifecycle event, internet traffic is allowed to access instances after a deployment. This event is reserved for the AWS CodeDeploy agent and cannot be used to run scripts.
    AfterAllowTraffic – You can use this deployment lifecycle event to run tasks on instances after they are registered with a load balancer.

    Read more on AWS.
    So how do we hook into these lifecycle events?
    You have to create a file called appspec.yml in the repository, which acts as the inventory for these scripts.

    version: 0.0
    os: linux
    files:
      - source: /
        destination: /var/www/html/your-site-name
    permissions:
      - object: /var/www/html/your-site-name
        owner: ec2-user
        mode: 777
    hooks:
      BeforeInstall:
        - location: scripts/installDependencies.sh
          runas: ec2-user
        - location: scripts/preDeploy.sh
          runas: ec2-user
      AfterInstall:
        - location: scripts/postDeploy.sh
          runas: ec2-user

    You have to create the relevant scripts in the scripts folder according to this example.
    The files section copies the artifact to the destination folder, which is /var/www/html/your-site-name in the example appspec.