There are several methods to deploy applications in OpenShift. This tutorial explores using the integrated Source-to-Image (S2I) builder. As mentioned in the OpenShift concepts section, S2I is a tool for building reproducible, Docker-formatted container images.
Complete the following prerequisite before starting this tutorial:
You have created a ROSA cluster.
Retrieve your login command
If you are not logged in via the CLI, in OpenShift Cluster Manager, click the dropdown arrow next to your name in the upper-right and select Copy Login Command.
A new tab opens. Select your authentication method and enter your username and password.
Click Display Token.
Copy the command under "Log in with this token".
Log in to the command line interface (CLI) by running the copied command in your terminal. You should see something similar to the following:
$ oc login --token=RYhFlXXXXXXXXXXXX --server=https://api.osd4-demo.abc1.p1.openshiftapps.com:6443
Logged into "https://api.myrosacluster.abcd.p1.openshiftapps.com:6443" as "rosa-user" using the token provided.
You don't have any projects. You can try to create a new project, by running
oc new-project <project name>
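As an optional sanity check (not part of the login output above), you can confirm which identity the CLI is using:
$ oc whoami
This prints the username you logged in with, for example rosa-user in the output above.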
Create a new project from the CLI by running the following command:
$ oc new-project ostoy-s2i
The next section focuses on triggering automated builds based on changes to the source code. You must set up a GitHub webhook to trigger S2I builds when you push code into your GitHub repo. To set up the webhook, you must first fork the repo.
In the commands that follow, replace <UserName> with your GitHub username.
Add Secret to OpenShift
The example emulates a .env file and shows how easy it is to move such files directly into an OpenShift environment. Files can even be renamed in the Secret. In your CLI, enter the following command, replacing <UserName> with your GitHub username:
$ oc create -f https://raw.githubusercontent.com/<UserName>/ostoy/master/deployment/yaml/secret.yaml
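As an aside, a Secret of this shape can also be created imperatively from a local file instead of a YAML manifest. The sketch below is illustrative only and is not a step in this tutorial; my-local.env is a hypothetical file name, and the key=path syntax shows how a file can be renamed inside the Secret:
# Illustrative alternative only; my-local.env is a hypothetical local file
$ oc create secret generic ostoy-secret --from-file=.env=./my-local.env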
Add ConfigMap to OpenShift
The example emulates an HAProxy configuration file, which is typically used to override default configurations in an OpenShift application. Files can even be renamed in the ConfigMap.
In your CLI, enter the following command, replacing <UserName> with your GitHub username:
$ oc create -f https://raw.githubusercontent.com/<UserName>/ostoy/master/deployment/yaml/configmap.yaml
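Similarly, an equivalent ConfigMap could be created directly from a local configuration file. This sketch is illustrative only; the key name and the local file name are assumptions, not the contents of the repository's configmap.yaml:
# Illustrative alternative only; the key and file names are hypothetical
$ oc create configmap ostoy-config --from-file=haproxy.cfg=./my-haproxy.cfg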
Deploy the microservice
You must deploy the microservice first to ensure that the SERVICE environment variables are available from the UI application. --context-dir is used here to build only the application defined in the microservice directory of the Git repository. Using the app label ensures that the UI application and the microservice are grouped together in the OpenShift UI. Run the following command in the CLI to create the microservice, replacing <UserName> with your GitHub username:
$ oc new-app https://github.com/<UserName>/ostoy \
--context-dir=microservice \
--name=ostoy-microservice \
--labels=app=ostoy
--> Creating resources with label app=ostoy ...
imagestream.image.openshift.io "ostoy-microservice" created
buildconfig.build.openshift.io "ostoy-microservice" created
deployment.apps "ostoy-microservice" created
service "ostoy-microservice" created
--> Success
Build scheduled, use 'oc logs -f buildconfig/ostoy-microservice' to track its progress.
Application is not exposed. You can expose services to the outside world by executing one or more of the commands below:
'oc expose service/ostoy-microservice'
Run 'oc status' to view your app.
Check the status of the microservice
Before moving on to the next step, verify that the microservice was created and is running correctly by running the following command:
$ oc status
In project ostoy-s2i on server https://api.myrosacluster.g14t.p1.openshiftapps.com:6443
svc/ostoy-microservice - 172.30.47.74:8080
dc/ostoy-microservice deploys istag/ostoy-microservice:latest <-
bc/ostoy-microservice source builds https://github.com/UserName/ostoy on openshift/nodejs:14-ubi8
deployment #1 deployed 34 seconds ago - 1 pod
Wait until you see that it was successfully deployed. You can also check this through the web UI.
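If you prefer to wait from the CLI instead of re-running oc status, the standard rollout command blocks until the deployment finishes rolling out (an optional convenience, not an original step in this tutorial):
$ oc rollout status deployment/ostoy-microservice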
Deploy the front-end UI
The application has been designed to rely on several environment variables to define external settings. You will attach the previously created Secret and ConfigMap afterward, along with creating a PersistentVolume. Enter the following into the CLI:
$ oc new-app https://github.com/<UserName>/ostoy \
--env=MICROSERVICE_NAME=OSTOY_MICROSERVICE
--> Creating resources ...
imagestream.image.openshift.io "ostoy" created
buildconfig.build.openshift.io "ostoy" created
deployment.apps "ostoy" created
service "ostoy" created
--> Success
Build scheduled, use 'oc logs -f buildconfig/ostoy' to track its progress.
Application is not exposed. You can expose services to the outside world by executing one or more of the commands below:
'oc expose service/ostoy'
Run 'oc status' to view your app.
Update the Deployment
Update the deployment to use a "Recreate" deployment strategy (as opposed to the default RollingUpdate) for consistent deployments with persistent volumes. Because the PV is backed by EBS, it supports only the RWO access mode. If the deployment is updated without all existing pods being terminated first, the new pod might not be able to mount the PV because it is still bound to the existing pod. If you will be using EFS, you do not need to change this.
$ oc patch deployment ostoy --type=json -p \
'[{"op": "replace", "path": "/spec/strategy/type", "value": "Recreate"}, {"op": "remove", "path": "/spec/strategy/rollingUpdate"}]'
Set a Liveness probe
Create a Liveness Probe on the Deployment to ensure the pod is restarted if something isn’t healthy within the application. Enter the following into the CLI:
$ oc set probe deployment ostoy --liveness --get-url=http://:8080/health
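Optionally, confirm that the probe was added to the pod template. One way (shown here as a convenience, not an original step) is to describe the deployment and filter for the liveness line:
$ oc describe deployment ostoy | grep -i liveness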
Attach Secret, ConfigMap, and PersistentVolume to Deployment
Run the following commands to attach your Secret, ConfigMap, and PersistentVolume:
Attach Secret
$ oc set volume deployment ostoy --add \
--secret-name=ostoy-secret \
--mount-path=/var/secret
Attach ConfigMap
$ oc set volume deployment ostoy --add \
--configmap-name=ostoy-config \
-m /var/config
Create and attach PersistentVolume
$ oc set volume deployment ostoy --add \
--type=pvc \
--claim-size=1G \
-m /var/demo_files
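As an optional check after attaching the volumes, list the PersistentVolumeClaims in the project; the claim generated by the command above should appear here:
$ oc get pvc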
Expose the UI application as an OpenShift Route
Run the following command to expose the application as an HTTPS route that uses the included TLS wildcard certificates:
$ oc create route edge --service=ostoy --insecure-policy=Redirect
Browse to your application by using one of the following methods:
Running the following command opens a web browser with your OSToy application:
$ python -m webbrowser "$(oc get route ostoy -o template --template='https://{{.spec.host}}')"
Alternatively, you can get the route for the application and copy and paste it into your browser by running the following command:
$ oc get route
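As a final optional check from the CLI (not part of the original steps), you can request the route with curl and confirm that the application responds with an HTTP 200:
$ curl -s -o /dev/null -w "%{http_code}\n" "$(oc get route ostoy -o template --template='https://{{.spec.host}}')"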