Backing up a project involves exporting all of its relevant data, then restoring the exported objects into a new project.
List the project data to back up:
$ oc get all
NAME         TYPE      FROM      LATEST
bc/ruby-ex   Source    Git       1

NAME               TYPE      FROM          STATUS     STARTED         DURATION
builds/ruby-ex-1   Source    Git@c457001   Complete   2 minutes ago   35s

NAME                 DOCKER REPO                                     TAGS      UPDATED
is/guestbook         10.111.255.221:5000/myproject/guestbook         latest    2 minutes ago
is/hello-openshift   10.111.255.221:5000/myproject/hello-openshift   latest    2 minutes ago
is/ruby-22-centos7   10.111.255.221:5000/myproject/ruby-22-centos7   latest    2 minutes ago
is/ruby-ex           10.111.255.221:5000/myproject/ruby-ex           latest    2 minutes ago

NAME                 REVISION   DESIRED   CURRENT   TRIGGERED BY
dc/guestbook         1          1         1         config,image(guestbook:latest)
dc/hello-openshift   1          1         1         config,image(hello-openshift:latest)
dc/ruby-ex           1          1         1         config,image(ruby-ex:latest)

NAME                   DESIRED   CURRENT   READY   AGE
rc/guestbook-1         1         1         1       2m
rc/hello-openshift-1   1         1         1       2m
rc/ruby-ex-1           1         1         1       2m

NAME                  CLUSTER-IP       EXTERNAL-IP   PORT(S)             AGE
svc/guestbook         10.111.105.84    <none>        3000/TCP            2m
svc/hello-openshift   10.111.230.24    <none>        8080/TCP,8888/TCP   2m
svc/ruby-ex           10.111.232.117   <none>        8080/TCP            2m

NAME                         READY     STATUS      RESTARTS   AGE
po/guestbook-1-c010g         1/1       Running     0          2m
po/hello-openshift-1-4zw2q   1/1       Running     0          2m
po/ruby-ex-1-build           0/1       Completed   0          2m
po/ruby-ex-1-rxc74           1/1       Running     0          2m
Export the project objects to a project.yaml file.
$ oc get -o yaml --export all > project.yaml
Export other objects in your project, such as role bindings, secrets, service accounts, and persistent volume claims.
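For example, role bindings can be exported on their own with the same --export flag; a minimal per-type example:

$ oc get rolebindings -o yaml --export > rolebindings.yaml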
You can export all namespaced objects in your project using the following command:
$ for object in $(oc api-resources --namespaced=true -o name)
  do
    oc get -o yaml --export $object > $object.yaml
  done
Note that some resources cannot be exported, and a MethodNotAllowed error is displayed.
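If you script this export, you might want to discard the files for resource types that cannot be exported. A minimal sketch, assuming a Bash shell, that deletes the output file whenever oc get returns a non-zero exit status:

$ for object in $(oc api-resources --namespaced=true -o name)
  do
    # Keep the export only if the resource type supports it
    oc get -o yaml --export $object > $object.yaml 2>/dev/null || rm -f $object.yaml
  done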
Some exported objects rely on specific metadata or on references to unique IDs in the project, which limits the usability of the recreated objects.
When using imagestreams, the image parameter of a deploymentconfig can point to a specific SHA checksum of an image in the internal registry that would not exist in a restored environment. For instance, running the sample "ruby-ex" application as oc new-app centos/ruby-22-centos7~https://github.com/sclorg/ruby-ex.git creates an imagestream ruby-ex that uses the internal registry to host the image:
$ oc get dc ruby-ex -o jsonpath="{.spec.template.spec.containers[].image}"
10.111.255.221:5000/myproject/ruby-ex@sha256:880c720b23c8d15a53b01db52f7abdcbb2280e03f686a5c8edfef1a2a7b21cee
Importing the deploymentconfig as it is exported with oc get --export fails if the image does not exist.
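One possible workaround, shown here only as a sketch and not as part of the standard procedure, is to repoint the container at an imagestream tag after the restore so that the deployment resolves the image again instead of relying on the old SHA reference. This assumes the ruby-ex:latest imagestream tag exists in the restored project:

$ oc set image dc/ruby-ex ruby-ex=ruby-ex:latest --source=imagestreamtag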
To restore a project, create the new project, then restore any exported files by running oc create -f <file_name>.
Create the project:
$ oc new-project <project_name> (1)
(1) This <project_name> value must match the name of the project that was backed up.
Import the project objects:
$ oc create -f project.yaml
Import any other resources that you exported when backing up the project, such as role bindings, secrets, service accounts, and persistent volume claims:
$ oc create -f <object>.yaml
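If you exported each resource type to its own file using the loop shown earlier, you can import them in bulk. A minimal sketch, assuming the exported .yaml files are in the current working directory:

$ for file in *.yaml
  do
    oc create -f $file
  done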
Some resources might fail to import if they require another object to exist. If this occurs, review the error message to identify which resources must be imported first.
Some resources, such as pods and default service accounts, can fail to be created.
You can synchronize persistent data from inside a container to a server.
Depending on the provider that is hosting the OpenShift Container Platform environment, the ability to launch third-party snapshot services for backup and restore purposes also exists. As OpenShift Container Platform does not have the ability to launch these services, this guide does not describe these steps.
Consult any product documentation for the correct backup procedures of specific applications. For example, copying the MySQL data directory itself does not create a usable backup. Instead, run the specific backup procedures of the associated application and then synchronize any data. This includes using snapshot solutions provided by the OpenShift Container Platform hosting platform.
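For instance, for a MySQL database you might run the database's own dump utility inside the pod and capture the output locally. This is a sketch only: the pod name mysql-1-abcde and the MYSQL_USER and MYSQL_PASSWORD environment variables are hypothetical and depend on how the application was deployed:

$ oc exec mysql-1-abcde -- bash -c 'mysqldump -u"$MYSQL_USER" -p"$MYSQL_PASSWORD" --all-databases' > backup.sql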
View the project and pods:
$ oc get pods
NAME           READY     STATUS      RESTARTS   AGE
demo-1-build   0/1       Completed   0          2h
demo-2-fxx6d   1/1       Running     0          1h
Describe the desired pod to find the volumes that are currently used by a persistent volume:
$ oc describe pod demo-2-fxx6d
Name:               demo-2-fxx6d
Namespace:          test
Security Policy:    restricted
Node:               ip-10-20-6-20.ec2.internal/10.20.6.20
Start Time:         Tue, 05 Dec 2017 12:54:34 -0500
Labels:             app=demo
                    deployment=demo-2
                    deploymentconfig=demo
Status:             Running
IP:                 172.16.12.5
Controllers:        ReplicationController/demo-2
Containers:
  demo:
    Container ID:   docker://201f3e55b373641eb36945d723e1e212ecab847311109b5cee1fd0109424217a
    Image:          docker-registry.default.svc:5000/test/demo@sha256:0a9f2487a0d95d51511e49d20dc9ff6f350436f935968b0c83fcb98a7a8c381a
    Image ID:       docker-pullable://docker-registry.default.svc:5000/test/demo@sha256:0a9f2487a0d95d51511e49d20dc9ff6f350436f935968b0c83fcb98a7a8c381a
    Port:           8080/TCP
    State:          Running
      Started:      Tue, 05 Dec 2017 12:54:52 -0500
    Ready:          True
    Restart Count:  0
    Volume Mounts:
      /opt/app-root/src/uploaded from persistent-volume (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-8mmrk (ro)
    Environment Variables:  <none>
...omitted...
This output shows that the persistent data is in the /opt/app-root/src/uploaded directory.
Copy the data locally:
$ oc rsync demo-2-fxx6d:/opt/app-root/src/uploaded ./demo-app
receiving incremental file list
uploaded/
uploaded/ocp_sop.txt
uploaded/lost+found/

sent 38 bytes  received 190 bytes  152.00 bytes/sec
total size is 32  speedup is 0.14
The ocp_sop.txt file is downloaded to the local system so that it can be backed up by backup software or another backup mechanism.
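oc rsync also accepts filter options if you want to skip file system artifacts such as the lost+found directory. A sketch using the same pod:

$ oc rsync demo-2-fxx6d:/opt/app-root/src/uploaded ./demo-app --exclude=lost+found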
You can also use the previous steps if a pod starts without needing to use a PVC.
You can restore persistent volume claim (PVC) data that you backed up. You can delete the file and then place the file back in the expected location or migrate the persistent volume claims. You might migrate if you need to move the storage or in a disaster scenario when the backend storage no longer exists.
Consult any product documentation for the correct restoration procedures for specific applications.
Delete the file:
$ oc rsh demo-2-fxx6d
sh-4.2$ ls /opt/app-root/src/uploaded/
lost+found  ocp_sop.txt
sh-4.2$ rm -rf /opt/app-root/src/uploaded/ocp_sop.txt
sh-4.2$ ls /opt/app-root/src/uploaded/
lost+found
Replace the file from the server that contains the rsync backup of the files that were in the PVC:
$ oc rsync uploaded demo-2-fxx6d:/opt/app-root/src/
Validate that the file is back on the pod by using oc rsh to connect to the pod and view the contents of the directory:
$ oc rsh demo-2-fxx6d
sh-4.2$ ls /opt/app-root/src/uploaded/
lost+found  ocp_sop.txt
The following steps assume that a new PVC has been created.

Overwrite the currently defined claim-name:
$ oc set volume dc/demo --add --name=persistent-volume \
    --type=persistentVolumeClaim --claim-name=filestore \
    --mount-path=/opt/app-root/src/uploaded --overwrite
Validate that the pod is using the new PVC:
$ oc describe dc/demo
Name:           demo
Namespace:      test
Created:        3 hours ago
Labels:         app=demo
Annotations:    openshift.io/generated-by=OpenShiftNewApp
Latest Version: 3
Selector:       app=demo,deploymentconfig=demo
Replicas:       1
Triggers:       Config, Image(demo@latest, auto=true)
Strategy:       Rolling
Template:
  Labels:       app=demo
                deploymentconfig=demo
  Annotations:  openshift.io/container.demo.image.entrypoint=["container-entrypoint","/bin/sh","-c","$STI_SCRIPTS_PATH/usage"]
                openshift.io/generated-by=OpenShiftNewApp
  Containers:
   demo:
    Image:      docker-registry.default.svc:5000/test/demo@sha256:0a9f2487a0d95d51511e49d20dc9ff6f350436f935968b0c83fcb98a7a8c381a
    Port:       8080/TCP
    Volume Mounts:
      /opt/app-root/src/uploaded from persistent-volume (rw)
    Environment Variables:      <none>
  Volumes:
   persistent-volume:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  filestore
    ReadOnly:   false
...omitted...
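Alternatively, you can query only the claim name instead of reading the full description; a sketch using the same jsonpath approach shown earlier, which should print filestore based on the configuration above:

$ oc get dc demo -o jsonpath="{.spec.template.spec.volumes[*].persistentVolumeClaim.claimName}"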
Now that the deployment configuration uses the new PVC, run oc rsync to place the files onto the new PVC:
$ oc rsync uploaded demo-3-2b8gs:/opt/app-root/src/
sending incremental file list
uploaded/
uploaded/ocp_sop.txt
uploaded/lost+found/

sent 181 bytes  received 39 bytes  146.67 bytes/sec
total size is 32  speedup is 0.15
Validate that the file is back on the pod by using oc rsh to connect to the pod and view the contents of the directory:
$ oc rsh demo-3-2b8gs
sh-4.2$ ls /opt/app-root/src/uploaded/
lost+found  ocp_sop.txt
See the Pruning Resources topic for information about pruning collected data and older versions of objects.
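For example, older builds and deployments can be pruned with the oc adm prune subcommands. This is a sketch only; these commands typically require elevated permissions and support additional flags to scope what is pruned:

$ oc adm prune builds --confirm
$ oc adm prune deployments --confirm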