
DO288 Red Hat OpenShift Development II Containerizing Applications


1. Deploying and Managing Applications on OpenShift

Deploying Applications to OpenShift Cluster

  1. Since OCP 4.5, the oc new-app command creates a Deployment instead of a DeploymentConfig
  2. So we will use $ oc new-app --as-deployment-config in the commands below
  • Build an application from a GitHub repo $ oc new-app --as-deployment-config [github link]

  • If the repo contains a Dockerfile, use it instead of S2I $ oc new-app [github link] --as-deployment-config --strategy docker

  • Deploy a container image from a registry $ oc new-app --as-deployment-config --docker-image=[registry link]

  • Edge case: what if the repo contains a Dockerfile as well as an index.php? OCP cannot decide which strategy to use, so you have to help it (see the example below)
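For example (the repo URL is a placeholder), you can name the strategy explicitly so oc new-app does not have to guess:

$ oc new-app --as-deployment-config https://github.com/youruser/yourapp --strategy source # force S2I even though the repo has a Dockerfile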

  • oc new-app flags

    • --code = github link
    • --docker-image = registry link for the container image
    • --strategy = options: source, docker, pipeline
    • --image-stream or -i = use the image in the ImageStream
    • --as-deployment-config = use DC instead of Deployment
  • image-stream flag in detail: for example, instead of the public nginx image, use the RHEL nginx image referenced by an image stream, or deploy the image stream directly

  • strategy flag in detail

    • source means: use the GitHub repo and an S2I builder image
    • docker means: use the Dockerfile in the GitHub repo to build the image, following its instructions
    • pipeline means: use the Jenkinsfile to build the image and the deployment
  • shortcuts that result in the same outcome $ oc new-app --as-deployment-config [FLAGS], where FLAGS could be:

    • php~github.repo.link # this tells that use S2I php to build the repo
    • -i php github.repo.link
    • -i php:8.0 github.repo.link # this uses image version 8.0
  • oc new-app creates a BuildConfig, an ImageStream, a Deployment (a DeploymentConfig with the flag), and a Service; you still need to expose the Service to create a Route

  • How to import docker images into an image stream $ oc import-image [imagename] --confirm --from [registry link] --insecure

  • oc new-app for a subfolder of a GitHub repo: use --context-dir to select the folder

  • how to rebuild an app that doesn't have an automatic build trigger? $ oc start-build [buildconfig name]
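A minimal end-to-end sketch tying the bullets above together (the repo URL and names are placeholders):

$ oc new-app --as-deployment-config php~https://github.com/youruser/yourapp --name myapp
$ oc status               # check what new-app created (bc, is, dc, svc)
$ oc expose service myapp # create the route
$ oc get route myapp      # find the exposed hostname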

Managing Applications with CLI

  • open a remote shell in a container of the pod alpha $ oc rsh alpha

  • open a remote shell in the pod alpha and run the ls command $ oc rsh alpha ls

  • copy a file into a running container (the container must contain tar) $ oc cp

  • edit a resource (the oc get ... -o yaml + oc apply combo) $ oc edit

CH 1 SUMMARY:

$ oc new-app --as-deployment-config <git>#<branchname> --context-dir <folder>
# if you do not have a Dockerfile, a suitable S2I builder is used unless you name one from your image streams; if you have a Dockerfile it is used by default, no need to define a strategy in that case
$ oc new-app --as-deployment-config --docker-image=registry.access.redhat.com/imagename
$ oc new-app --as-deployment-config --image-stream=<equivalent-to-S2I-image> --strategy=<source/docker/pipeline> --code=<github.repo> OR --docker-image=<image-repo>
$ oc new-app --as-deployment-config -l key=value --name value ENV_VAR_KEY=ENV_VAR_VALUE # env vars need no flag
$ oc import-image imagename --confirm --from registry.link.here --insecure # this will import the image into the OCP registry
$ oc logs bc/name # get the logs from the latest build
$ oc logs build/my-app-2 # get the logs from the second build of the same build config
$ oc logs myapp-2-build # get the logs from the pod used for the second build of the app
$ oc status # to see new-app results
$ oc start-build bc_name # starts rebuilding the application; if you don't have triggers set up, you can manually re-run builds
$ oc get templates -n openshift # careful, you need the openshift namespace, that's where the default templates are
# when troubleshooting, first curl the endpoint to see if it is working
# then check if pods can see the endpoints:
$ oc get ep # get endpoints
$ oc describe pod <podname> | grep IP # the IP should match the service endpoint
# if you are connecting to a DB, ensure that the credentials are correct
# then check if one pod can see the other pod; normally you would go into a pod and ping, but if ping is not available, use bash's /dev/tcp instead:
$ oc rsh [podname] bash -c 'echo > /dev/tcp/$OTHER_SERVICE_NAME/$PORT && echo OK || echo FAIL'
# when a command writes to a /dev/tcp/$host/$port pseudo-device file, Bash opens a TCP connection to the associated socket
# if you need to copy a configuration file, SQL dump, etc. into the pod, you can just copy it
$ oc cp ~/path/to/file.txt pod-name:/path/to/file.txt # note: if the source is a file the destination is a file, if it is a folder the destination is a folder

2. Designing Containerized Applications for OpenShift

Aim: select a containerization method and package it to run on OCP

Building Container images with Dockerfile

Advanced Dockerfile Instructions

  • Concatenate individual and sequential RUN commands such as
RUN yum --disablerepo=* --enablerepo="rhel-7-server-rpms"
RUN yum update
RUN yum install -y httpd
RUN yum clean all -y

into

RUN yum --disablerepo=* --enablerepo="rhel-7-server-rpms" && \
    yum update && \
    yum install -y httpd && \
    yum clean all -y
  • LABEL instructions with the io.openshift prefix mark OpenShift-specific metadata, distinguishing it from plain Kubernetes labels

  • common labels: io.openshift.tags, io.k8s.description, io.openshift.expose-services

  • io.openshift.expose-services takes the form PORT[/PROTOCOL]:NAME, e.g. 8080/tcp:my-deployment; this label lists the service ports that match the EXPOSE instructions in the Dockerfile

  • Dockerfile best practices for ENV and LABEL: one instruction per keyword, with key=value pairs joined by \ line continuations

  • ENV MYSQL_DATABASE_USER="my_db_usr" \ MYSQL_DATABASE="mydb"

  • LABEL version="2.0" \ description="This is my db image" \ creationDate="
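Formatted as they would actually appear in a Dockerfile (the creation date value is just a placeholder):

ENV MYSQL_DATABASE_USER="my_db_usr" \
    MYSQL_DATABASE="mydb"

LABEL version="2.0" \
      description="This is my db image" \
      creationDate="2021-01-01"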

  • USER: an important concept. Red Hat recommends running containers as a non-root user. But OCP does not honour the USER instruction in the Dockerfile; it runs the container with a random UID that is not 0 (root)

  • Building images with the ONBUILD instruction: use ONBUILD in the parent container image. When a child image refers to the parent via FROM, building the child image triggers the parent's ONBUILD commands in the child's build.

  • Why not just put those commands in the child? For consistency: the architect can define a set of rules in the parent image, and devs simply refer to the parent image.

  • Podman and Buildah do not use ONBUILD, as it is not part of the OCI spec.
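A sketch of the child side (the parent image name is an assumption): the ONBUILD instructions stored in the parent run as soon as the child build starts.

# Child Dockerfile: the parent's ONBUILD instructions (e.g. ONBUILD COPY src/ ${DOCROOT}/) fire during this build
FROM quay.io/youruser/httpd-parent
# add child-specific instructions below if needed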

Adapting Dockerfiles For OpenShift:

  • When you write a Dockerfile that builds an image to run on OCP, you need to address:

    • Directories/files that are read from/written to by processes in the container should be owned by the root group
    • Executable files should have executable permissions
    • Processes must not listen on privileged ports (below 1024)
  • so add the following to the Dockerfile to deal with the first two

RUN chgrp -R 0 [directory/here] && chmod -R g=u [directory/here]
  • By default, OpenShift Container Platform runs containers using arbitrary user IDs, so the container user does not have root user privileges

  • Note that the root user is different from the root group. The root group does not have privileged rights; only the root user does.

  • So for an image to support running as an arbitrary user, the files and directories it uses must be accessible to the root group (the arbitrary user always belongs to the root group).

  • How to change the port to run above 1024 on Dockerfile

EXPOSE 8080
  • What if I am relying on a parent Dockerfile with ONBUILD?
# On the child Dockerfile, e.g. httpd listening on port 80
RUN sed -i "s/Listen 80/Listen 8080/g" /etc/httpd/conf/httpd.conf
  • How to change the user id?
# Don't use a name, use a numeric UID
USER 1001
  • Running Containers as root Using Security Context Constraints (SCC)
  • [user@host ~]$ oc create serviceaccount myserviceaccount
  • [user@host ~]$ oc patch dc/demo-app --patch '{"spec":{"template":{"spec":{"serviceAccountName": "myserviceaccount"}}}}'

  • [user@host ~]$ oc adm policy add-scc-to-user anyuid -z myserviceaccount

Guided Exercise: Building Container Images with Advanced Dockerfile instructions

Given the following Dockerfile:

FROM registry.access.redhat.com/ubi8/ubi:8.0

MAINTAINER Red Hat Training <training@redhat.com>

# DocumentRoot for Apache
ENV DOCROOT=/var/www/html

RUN yum install -y --nodocs --disableplugin=subscription-manager httpd && \
    yum clean all --disableplugin=subscription-manager -y && \
    echo "Hello from the httpd-parent container!" > ${DOCROOT}/index.html

# Allows child images to inject their own content into DocumentRoot
ONBUILD COPY src/ ${DOCROOT}/

EXPOSE 80

# This stuff is needed to ensure a clean start
RUN rm -rf /run/httpd && mkdir /run/httpd

# Run as the root user
USER root

# Launch httpd
CMD /usr/sbin/httpd -DFOREGROUND
  1. Can you create an Apache HTTP server container image using a Dockerfile and deploy it to OCP?
  2. Can you create a child container image by extending the parent Apache HTTP Server image?
  3. Can you update Dockerfile for the child container image so that it runs on OCP with a random user id?

Probably in the exam they would give me a parent container with a pre-built parent Dockerfile. The parent will probably contain USER root, won't be able to access system files because the ownership hasn't been changed, and will run on ports < 1024. They may also ask me to improve build efficiency by combining consecutive RUN commands into one.

Commands I should know for EX288:

  1. On the (child) Dockerfile, EXPOSE 8080 or another port > 1024
  2. On the (child) Dockerfile, LABEL io.openshift.expose-services="8080:http" to match the Dockerfile and automatically understand which port to expose when creating a service (oc expose svc/[svc-name])
  3. By default, the Apache httpd server listens on port 80 and its configuration file is /etc/httpd/conf/httpd.conf, so how do you edit that in the child?
  4. The answer is to edit it on the fly, so you need a scripted edit that converts the httpd configuration from Listen 80 to Listen 8080
  5. The way to do that on Linux is via sed: RUN sed -i "s/Listen 80/Listen 8080/g" /etc/httpd/conf/httpd.conf
  6. if httpd cannot access the log files, it means we do not have read/write rights, so learn how to change permissions
  7. RUN chgrp -R 0 /var/log/httpd /var/run/httpd && chmod -R g=u /var/log/httpd /var/run/httpd
  8. and finally USER 1001, or any numeric UID of 1000 or above
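Putting points 1-8 together, a sketch of a child Dockerfile adapted for OpenShift (the parent image name is an assumption; the paths follow the httpd example above):

FROM quay.io/youruser/httpd-parent

# run on an unprivileged port and label it so the service knows which port to expose
EXPOSE 8080
LABEL io.openshift.expose-services="8080:http"

# Apache listens on 80 by default; switch it to 8080
RUN sed -i "s/Listen 80/Listen 8080/g" /etc/httpd/conf/httpd.conf

# give the root group ownership and the same permissions as the owner
RUN chgrp -R 0 /var/log/httpd /var/run/httpd && \
    chmod -R g=u /var/log/httpd /var/run/httpd

# run as a non-root numeric UID
USER 1001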

How to Inject Configuration Data into Application?

Typically developers can inject data into a container via a configuration file, an environment variable, or on the fly via the command line. This section covers them all. Secrets have various types: service-account-token, basic-auth, ssh-auth, tls, opaque. Opaque does not perform any validation and accepts any key/value pair. If you mount a secret as a volume, it is backed by temporary file storage (tmpfs) and never stored on a node. Secrets are namespace scoped.

How to Create Secret / ConfigMaps

  • How to create a configmap from literals $ oc create configmap config_map_name --from-literal key1=value1 --from-literal key2=value2

  • How to create secret ? $ oc create secret generic secret_name --from-literal username=user1 --from-literal password=password1

  • How to create configmap from a file $ oc create configmap config_map_name --from-file /home/file.txt

  • The file name becomes the key and the content of file.txt becomes the value

  • ConfigMap can be shortened as cm

  • How to create a secret from file ? $ oc create secret generic my_secret --from-file /home/secret.txt

  • How to write the secret YAML without base64-encoding the passwords/usernames: use 'stringData' instead of 'data' as the key. See below for the full example

apiVersion: v1
kind: Secret
metadata:
  name: mysecret
type: Opaque
stringData: # instead of data
  username: user1
  password: pass1
  • How to edit configmap on the fly?
  • oc edit cm/myconfigmap
  • oc patch cm/myconfigmap --patch '{"data": {"key1": "value1"}}'

Injecting Secrets and Configuration to Pods

  • to inject as env var is oc set env

$ oc set env dc/mydcname --from configmap/myconfig

  • to inject secret as env

$ oc set env dc/mydcname --from secret/mysecret

  • to inject cm as volume

$ oc set volume dc/mydcname --add -t configmap -m /var/path --name myvol --configmap-name myconf

  • a lot to unpack: --add adds the volume, -t is the type, -m is the mount path, --name is the volume name, and --configmap-name is obvious

  • to inject secret as volume

$ oc set volume dc/mydcname --add -t secret -m /var/path --name myvol --secret-name mysecret

  • how to prevent the dc from rolling out for every configmap change (in case several changes are made back to back)

$ oc set triggers dc/mydcname --from-config --remove

  • how to make the dc roll out every time a change is made

$ oc set triggers dc/mydcname --from-config

  • how to rollout the latest deployment, after configuration changes?

$ oc rollout latest mydcname

Application Configuration Options

  • Perhaps in the exam they will ask me to create a secret on the fly and inject it, and also create a configmap from a configuration file and inject it into the pod, while disabling the triggers, then rolling out the latest and re-setting the trigger (see the sketch below)
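A possible command sequence for that scenario (names, keys, and paths are placeholders):

$ oc set triggers dc/myapp --from-config --remove # stop automatic rollouts
$ oc create secret generic myapp-secret --from-literal password=S3cret
$ oc set env dc/myapp --from secret/myapp-secret # inject the secret as env vars
$ oc create configmap myapp-config --from-file /home/student/app.conf
$ oc set volume dc/myapp --add -t configmap -m /opt/app/config --name config-vol --configmap-name myapp-config
$ oc rollout latest myapp # pick up all changes in one rollout
$ oc set triggers dc/myapp --from-config # re-enable the config change trigger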

Commands I should know for EX288:

  1. create a secret for an app on the fly
  2. $ oc create secret generic --help has the cheat sheet you may need
  3. $ oc create secret generic mysecret --from-literal key1=value1
  4. $ oc create secret generic mysecret --from-file=/path/to/file # file names in the folder become the keys
  5. same as configmaps (cm): instead of 'secret generic', use 'configmap'; the rest of the logic/flags are the same
  6. set a volume on the fly and mount a secret
  7. $ oc set volume dc/myapp --add -t secret -m /path/to/folder --name myapp-volume --secret-name appsecret

3. Publishing enterprise Container Images

Goal: Interact with an enterprise registry. Manage container images in registries (internal/external). Access the OCP registry. Create ImageStreams for images in external registries.

Managing Images in an Enterprise / Public Registry

Aim:

  • Copy image from one registry to another with Skopeo
  • Deploy the application image from a public or enterprise registry
    • To deploy the image you have to create docker pull secret

Commands I should know for EX288:

  1. $ podman login quay.io
  2. $ skopeo inspect --creds username:password docker://registry.redhat.io/...
  3. $ read -p "PASSWORD" -s hiddenpassword && skopeo inspect --creds username:$hiddenpassword ...
  4. $ skopeo copy --dest-tls-verify=false containers-storage:myimage docker://registry.example.com/...
  5. $ skopeo copy --dest-tls-verify=false oci:/home/myimage/folder docker://registry.example.com/...
  6. $ skopeo copy --src-creds=username:password --dest-creds=username:password docker://domain1/from docker://domain2/to
  7. $ skopeo delete docker://registry.example.com/myimage:tag
  8. $ oc create secret docker-registry myregistrycreds --docker-server myserverurl --docker-username myuser --docker-password mypassword
  9. $ oc create secret generic myregistrycreds --from-file .dockerconfigjson=${XDG_RUNTIME_DIR}/containers/auth.json --type kubernetes.io/dockerconfigjson
  10. $ oc secrets link default myregistrycreds --for pull
  11. $ oc secrets link builder myregistrycreds
  12. $ oc new-app --as-deployment-config --name sleep --docker-image quay.io/username/image:1.0

Let's begin:

  • Log in to quay.io, where we will store the image, with podman

    • $ podman login -u username quay.io
  • copy the OCI image to quay.io with skopeo

    • $ skopeo copy oci:/home/folder/path docker://quay.io/user/image:tag
  • inspect the image if you want or search for it or run it locally to ensure that it is running

    • $ skopeo inspect docker://quay.io/user/image:tag
    • $ podman search imagename
    • $ podman run imagename
  • how to pass a password to the shell/terminal without it showing up in the history?

    • $ read -p "PASSWORD: " -s password
    • $ skopeo inspect --creds developer1:$password docker://registry.redhat.io/rhscl/postgresql...
  • Deploy the image to OpenShift

    • login to OpenShift and create a new project
  • If you try to deploy the image from an external registry it will fail

    • $ oc new-app --docker-image quay.io/etc #THIS WILL FAIL BECAUSE OCP NEEDS CREDENTIALS
  • Create a secret

    • $ oc create secret generic secretname --from-file .dockerconfigjson=/path/to/auth.json --type kubernetes.io/dockerconfigjson
    • if you are using a .dockercfg file instead of a .docker/config.json file, then the type is kubernetes.io/dockercfg
  • Link the secret

    • $ oc secrets link default secretname --for pull
  • Now create the application

    • $ oc new-app --as-deployment-config --name name --docker-image quay.io/image/name:tag

Allowing Access to OpenShift Registry

Aim:

  • Access the OCP Internal Registry with Linux container tools (docker/podman)

  • You need admin rights to expose the internal container registry

  • OpenShift Image Registry Operator manages the registry and settings are in the cluster config in openshift-image-registry namespace

  • To expose the registry

    • $ oc patch config cluster -n openshift-image-registry --type merge -p '{"spec":{"defaultRoute":true}}'
  • Get the route

    • $ oc get routes -n openshift-image-registry
  • login to docker / podman with ocp

    • $ HOST=$(oc get route default-route -n openshift-image-registry --template='{{ .spec.host }}')
    • $ podman/docker login -u $(oc whoami) -p $(oc whoami -t) $HOST
  • inspect with skopeo after podman/docker login

    • $ skopeo inspect --creds=$(oc whoami):$(oc whoami -t) docker://namespace-route-openshift.domain/namespace/app
  • copy image to ocp registry

    • $ skopeo copy --dest-creds=$(oc whoami):$(oc whoami -t) SOURCE-IMAGE DESTINATION-IMAGE
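A concrete sketch of that last command, pushing a quay.io image into the internal registry using the $HOST variable from above (image and project names are placeholders):

$ skopeo copy --dest-creds=$(oc whoami):$(oc whoami -t) \
    docker://quay.io/youruser/myapp:1.0 \
    docker://$HOST/myproject/myapp:1.0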

Creating Image Streams

  • list image streams

    • $ oc get is [-n namespace if you want to see imgs from other ns, -o name if you just want the names ]
  • get the is tag

    • $ oc get istag

Import Image from Other Repository into OCP Internal Registry

  • import from another registry

    • $ oc import-image myis[:tag] --confirm --from registry/image:tag
  • to update the image stream with the latest image from the registry $ oc import-image myis[:tag] --confirm

  • to create image streams from other repos you need pull secrets

  1. so login with podman to the repo
  2. create generic secret
  3. import image
  • $ podman login -u username -p password registry.link
  • $ oc create secret generic registrytokenname --from-file .dockerconfigjson=/path --type kubernetes.io/dockerconfigjson
  • $ oc import-image myis:1.0 --confirm --from registry.link
  • to deploy the app via is
  • $ oc new-app --as-deployment-config -i myis:1.0

Sharing IS between namespaces

  1. Create a secret with an access token to the private registry only on the project where you create the image stream
  2. Configure that image stream with a local reference policy.
  3. Grant rights to use the image stream to service accounts from each project that uses the image stream.
  • $ podman login -u user -p password
  • $ oc new-project [myNewNamespaceThatWillImportImages]
  • $ oc new-project [shared]
  • $ oc create secret generic regtoken --from-file .dockerconfigjson=/path --type kubernetes.io/dockerconfigjson
  • $ oc import-image [myis] --confirm --reference-policy local --from [internal.ocp.registry] # add the reference policy
  • $ oc policy add-role-to-group system:image-puller system:serviceaccounts:[myNewNamespaceThatWillImportImages]
  • $ oc new-app --as-deployment-config -i shared/[myis]


4. Building Applications

4.1 Describe the OCP Build Process

Build Strategies in OCP

  • Source
  • Pipeline
  • Docker
  • Custom
  • A Pipeline build creates a new Jenkins server; subsequent builds share the same Jenkins server

  • Docker requires a Dockerfile; the build runs buildah inside a builder pod

  • Custom is custom

Build Input Sources

  • Dockerfile : self explanatory
  • Git : repo name
  • Image : if you are building from an img
  • Binary : Stream binary content from a local Filesystem to the builder
  • Input secrets
  • External artifacts : Allow copying binary files to the build
  • That said most common one is build with Dockerfile in Git repo

Build Config Resource

The BuildConfig sets the triggers that determine when the image should be rebuilt

4.2 Managing App Build

  • You have two options: oc new-app or a YAML definition

Build Config using CLI

  • oc start-build
    • $ oc start-build name
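For example, to start a build and watch it (the BuildConfig name is a placeholder):

$ oc start-build myapp --follow # start a new build and stream its logs
$ oc logs -f bc/myapp # or follow the latest build of the BuildConfig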

4.5 Trigger Build

Defining Build Triggers

  • you have two options

    • img change
    • webhook
  • if the parent img changes, img change trigger

  • use webhook to trigger new builds once the code is changed

  • to add trigger to a build config use set triggers

  • to trigger if the parent img changes

    • $ oc set triggers bc/name --from-image=project/image:tag
  • to remove a trigger, add the --remove flag to the same oc set triggers command

  • for webhook trigger you MUST define a SECRET, with a KEY: 'WebHookSecretKey'

  • oc new-app creates GitHub and Generic webhook secrets to use; just take the secrets and add them to your VCS webhook configuration

  • if you want GitHub, GitLab, or Bitbucket webhooks you can just add --from-github, --from-gitlab, or --from-bitbucket

    • oc set triggers bc/name --from-github
    • to remove it, add the --remove flag
  • to build images from source images, such as my custom php 7.0, you need to be able to pull the image and build it

  • the builder service account does that for you; just add a secret and link it so the robot has the rights

    • $ oc create secret generic secret-name --from-file .dockerconfigjson=/path/ --type kubernetes.io/dockerconfigjson
    • $ oc secrets link builder secret-name
  • then import the image

    • $ oc import-image imgname --from quay.io/path --confirm
  • then create the new app; if you import-image into your internal registry with a new base image, it will re-trigger the build (see the sketch below)
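A sketch pulling the trigger commands together (names are placeholders):

$ oc set triggers bc/myapp --from-github # add a GitHub webhook trigger
$ oc describe bc/myapp | grep -i -A 1 webhook # grab the webhook URL to paste into the VCS
$ oc set triggers bc/myapp --from-image=myproject/myimage:1.0 # rebuild when the parent image changes
$ oc set triggers bc/myapp --from-github --remove # remove the webhook trigger again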

4.7 Implementing Post-commit build hook

  • Once the app is built and the image is deployed, you can run post-commit build hooks; one example is to run a script after the build
    • $ oc set build-hook bc/name --post-commit --script="curl http://slack.com/push/notification"

5. Customize S2I Build

5.1 Describe the S2I Arch

5.3 Customize the S2I Base Image

5.5 Create an S2I builder Img

6. Create Apps from OCP Template

6.1 Describe the Elements of a Template

  • An OCP Template is OpenShift's equivalent of a Helm Chart in Kubernetes

  • why use templates? A software vendor can deploy everything at once, or you can deploy a multi-tier application (web/backend/db) for a testing environment

  • The main difference: a template lists its resources under 'objects' instead of 'spec'

  • In the template you can inject params too and a common label for all objects

  • you can also generate random default values for parameters; see the YAML below

apiVersion: template.openshift.io/v1
kind: Template
metadata:
  name: mytemp
  annotations:
    description: "Describe here"

objects:
- apiVersion: v1
  kind: Pod
  # does that look familiar?
  spec:
    containers:
    # bla bla
    - name: ${MYPARAM}

# params to be injected into the objects
parameters:
- name: MYPARAM
  value: delta
  required: true
  description: "describe the reason here"

- name: APIKEY
  generate: expression
  from: "[a-zA-Z0-9]{12}"
  description: random string generated by the template at run time

# labels applied to all resources
labels:
  mylabel: myapp

$ oc get templates -n openshift

  • to view the params for the template

$ oc process --parameters -n openshift nodejs-mongodb-example

6.3 Create a Multicontainer template

  • Question for this section could be create a multi container template from existing resources

  • use the following to export all objects as a template. Very Important!

    • $ oc get -o yaml --export secret,is,bc,dc,svc,route,pvc > mytemplate.yaml
  • order is important: secrets first so nothing has to wait for them, pvc last because binding takes time

  • To describe an OpenShift resource type use oc explain routes; for nested fields use oc explain routes.spec

  • oc new-app and oc process both use templates to create applications;

  • to pass parameters to oc new-app add -p flag

    • $ oc new-app --file mytemplate.yaml -p PARAM1=value1 -p PARAM2=value2
  • to create resources with oc process and parameters, send the output to a YAML file first, then use oc create to deploy

    • $ oc process -f mytemplate.yaml -p PARAM1=value1 > myresourcelist.yaml
    • $ oc create -f myresourcelist.yaml

  • or you can just pipe it

    • $ oc process -f mytemplate.yaml -p PARAM1=value1 | oc create -f -
  • To find which params you need to pass to template

    • oc process -f mytemplate.yaml --parameters
  • OpenShift recommends oc new-app rather than oc process

Potential Exam Questions: Creating MultiContainer

  1. Create a template from a running application
  2. Clean the template to remove runtime information (manually delete info from YAML)
  3. Add parameters to a template "remember the -p flag?"
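A sketch of that workflow using the commands from this chapter (file and parameter names are placeholders):

$ oc get -o yaml --export is,bc,dc,svc,route > mytemplate.yaml # 1. export the running resources
# 2. manually clean runtime fields (status, creationTimestamp, ...) and wrap the objects in a Template
$ oc process -f mytemplate.yaml --parameters # 3. list the parameters the template expects
$ oc new-app --file mytemplate.yaml -p PARAM1=value1 # 4. instantiate it with parameters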

When you try to export multiple resource types at once and get an error, you can just export them one by one and append to a file

$ oc get -o yaml is --export > /tmp/is.yaml
$ cp /tmp/is.yaml /tmp/is-clean.yaml

TODO: COMPLETE THE GUIDED EXERCISE!

7. Managing App Deploy

7.1 Monitor the health

  • Liveness and readiness probes

OpenShift Readiness and Liveness Probes

  • Five fields to configure liveness and readiness probes
  1. initialDelaySeconds [mandatory] How long to wait after the container starts before beginning the probe
  2. timeoutSeconds [mandatory] How long to wait for the probe to finish. If exceeded, the probe fails
  3. periodSeconds How often (in seconds) to perform the probe
  4. successThreshold : minimum consecutive successes to be considered successful
  5. failureThreshold : ditto for failures

Methods to check

  • Three ways to check
  1. HTTP Checks
  2. Container Execution Check
  3. TCP Socket Check
  • HTTP Check

livenessProbe:
  httpGet:
    path: /health
    port: 8080
  initialDelaySeconds: 10
  timeoutSeconds: 1

  • When to use the HTTP check: when you have a REST API or apps that can return an HTTP status code

  • Container Exec Check

livenessProbe:
  exec:
    command:
    - cat
    - /tmp/health
  initialDelaySeconds: 10
  timeoutSeconds: 1

  • When to use the container exec check: when the container runs a process or shell script that can be executed inside the container

  • TCP Socket Check

livenessProbe:
  tcpSocket:
    port: 8080
  initialDelaySeconds: 15
  timeoutSeconds: 1

  • When to use the TCP socket check: when you run daemons that open TCP ports: DB servers, app servers, file servers, web servers, etc.

  • How to set probes on the UI? Open up the YAML and edit

  • How to set probes on the CLI?

    • $ oc set probe dc/myapp --readiness --get-url=http://:8080/healthz --period=20
    • $ oc set probe dc/myapp --liveness --open-tcp=3306 --period=20 --timeout-seconds=1
    • $ oc set probe --help to learn more

TODO: complete the demo

7.3 Select Appropriate Deploy Strategy

Deployment Strategies

  • A deployment strategy is a method of changing or upgrading an app

  • Two families of strategies: one uses the DeploymentConfig's deployment strategy, the other uses the Router to route traffic to certain pods

    • For the DeploymentConfig strategy you can use Rolling or Recreate
    • For the Router: A/B deployment, Blue-Green, N-1 compatibility, graceful termination
    • Let's first talk about the DeploymentConfig strategies, then the Router below

  • Rolling: this is the default if you do nothing

    • It uses readiness probes and works like a canary deployment: deploy one new pod to test, and if it does not become ready, do not continue the rollout

  • Recreate: when you use PVs with RWO access, which does not allow writes from multiple pods, then you should use Recreate, sorry dude

  • Life-cycle hooks with deployment strategies:

    • Both deployment strategies work with LC hooks
    • Pre/Mid/Post LC hooks are executed during the deployments
    • Each hook has a failurePolicy that decides what happens when the hook fails: Abort, Retry, or Ignore

7.5 Manage App Deploy with CLI

8. CI/CD Pipelines

8.3 Implement Jenkins

8.5 Custom Jenkins Pipeline

9. Building Apps for OpenShift

9.1 Integrating External Services

9.3 Containerize Svc

9.5 Deploy Apps with OCP Runtimes

End
62export const _frontmatter = {"title":"DO288 Red Hat OpenShift Development II Containerizing Applications","date":"2021-01-15T00:00:00.000Z","description":"Summary notes for my Red Hat Certified Specialist in OpenShift Application Development EX180 Exam","tags":["Tutorial","OpenShift","Containers","Kubernetes","K8s"]}