You can install the Migration Toolkit for Containers (MTC) on OpenShift Container Platform 4.
You must install the same MTC version on all clusters.
By default, the MTC web console and the Migration Controller
pod run on the target cluster. You can configure the Migration Controller
custom resource manifest to run the MTC web console and the Migration Controller
pod on a remote cluster.
After you have installed MTC, you must configure object storage to use as a replication repository.
You can install the MTC Operator on OpenShift Container Platform 4 by using the OpenShift Container Platform web console.
You must be logged in as a user with cluster-admin
privileges on all clusters.
In the OpenShift Container Platform web console, click Operators → OperatorHub.
Use the Filter by keyword field to find the Migration Toolkit for Containers Operator.
Select the Migration Toolkit for Containers Operator and click Install.
Do not change the subscription approval option to Automatic. The Migration Toolkit for Containers version must be the same on the source and the target clusters.
Click Install.
On the Installed Operators page, the Migration Toolkit for Containers Operator appears in the openshift-migration project with the status Succeeded.
Click Migration Toolkit for Containers Operator.
Under Provided APIs, locate the Migration Controller tile, and click Create Instance.
If you do not want to run the MTC web console and the Migration Controller
pod on the cluster, update the following parameters in the migration-controller
custom resource manifest:
spec:
  ...
  migration_controller: false
  migration_ui: false
  ...
  deprecated_cors_configuration: true (1)
(1) This parameter is required only for OpenShift Container Platform 4.1.
Click Create.
Click Workloads → Pods to verify that the MTC pods are running.
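Optionally, you can verify the deployment from the command line. The following check assumes the default openshift-migration project:
$ oc get pods -n openshift-migration
All MTC pods should report a Running status within a few minutes.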
You must configure object storage to use as a replication repository. The Migration Toolkit for Containers (MTC) copies data from the source cluster to the replication repository, and then from the replication repository to the target cluster.
MTC supports the file system and snapshot data copy methods for migrating data from the source cluster to the target cluster. You can select a method that is suited for your environment and is supported by your storage provider.
All clusters must have uninterrupted network access to the replication repository.
If you use a proxy server with an internally hosted replication repository, you must ensure that the proxy allows access to the replication repository.
MTC supports the following storage providers:
Multi-Cloud Object Gateway (MCG)
Amazon Web Services (AWS) S3
Google Cloud Platform (GCP)
Microsoft Azure Blob
Generic S3 object storage, for example, Minio or Ceph S3
You can install the OpenShift Container Storage Operator and configure a Multi-Cloud Object Gateway (MCG) storage bucket as a replication repository for the Migration Toolkit for Containers (MTC).
You can install the OpenShift Container Storage Operator from OperatorHub.
In the OpenShift Container Platform web console, click Operators → OperatorHub.
Use Filter by keyword (in this case, OCS) to find the OpenShift Container Storage Operator.
Select the OpenShift Container Storage Operator and click Install.
Select an Update Channel, Installation Mode, and Approval Strategy.
Click Install.
On the Installed Operators page, the OpenShift Container Storage Operator appears in the openshift-storage project with the status Succeeded.
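Optionally, you can confirm the Operator status from the command line. The following check assumes the default openshift-storage project:
$ oc get csv -n openshift-storage
The ClusterServiceVersion of the OpenShift Container Storage Operator should report the phase Succeeded.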
You can create the Multi-Cloud Object Gateway (MCG) storage bucket’s custom resources (CRs).
Log in to the OpenShift Container Platform cluster:
$ oc login
Create the NooBaa CR configuration file, noobaa.yml, with the following content:
apiVersion: noobaa.io/v1alpha1
kind: NooBaa
metadata:
  name: <noobaa>
  namespace: openshift-storage
spec:
  dbResources:
    requests:
      cpu: 0.5 (1)
      memory: 1Gi
  coreResources:
    requests:
      cpu: 0.5 (1)
      memory: 1Gi
(1) For a very small cluster, you can change the value to 0.1.
Create the NooBaa object:
$ oc create -f noobaa.yml
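Optionally, wait for the NooBaa system to become ready before you continue. The following check assumes the <noobaa> name and openshift-storage namespace used above:
$ oc get noobaa -n openshift-storage <noobaa>
The PHASE column should eventually show Ready.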
Create the BackingStore CR configuration file, bs.yml, with the following content:
apiVersion: noobaa.io/v1alpha1
kind: BackingStore
metadata:
  finalizers:
  - noobaa.io/finalizer
  labels:
    app: noobaa
  name: <mcg_backing_store>
  namespace: openshift-storage
spec:
  pvPool:
    numVolumes: 3 (1)
    resources:
      requests:
        storage: <volume_size> (2)
    storageClass: <storage_class> (3)
  type: pv-pool
(1) Specify the number of volumes in the persistent volume pool.
(2) Specify the size of the volumes, for example, 50Gi.
(3) Specify the storage class, for example, gp2.
Create the BackingStore object:
$ oc create -f bs.yml
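Optionally, verify that the backing store becomes available before you continue:
$ oc get backingstore -n openshift-storage <mcg_backing_store>
The PHASE column should show Ready after the persistent volume pool is provisioned.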
Create the BucketClass CR configuration file, bc.yml, with the following content:
apiVersion: noobaa.io/v1alpha1
kind: BucketClass
metadata:
  labels:
    app: noobaa
  name: <mcg_bucket_class>
  namespace: openshift-storage
spec:
  placementPolicy:
    tiers:
    - backingStores:
      - <mcg_backing_store>
      placement: Spread
Create the BucketClass object:
$ oc create -f bc.yml
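Optionally, verify that the bucket class was created:
$ oc get bucketclass -n openshift-storage <mcg_bucket_class>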
Create the ObjectBucketClaim CR configuration file, obc.yml, with the following content:
apiVersion: objectbucket.io/v1alpha1
kind: ObjectBucketClaim
metadata:
  name: <bucket>
  namespace: openshift-storage
spec:
  bucketName: <bucket> (1)
  storageClassName: <storage_class>
  additionalConfig:
    bucketclass: <mcg_bucket_class>
(1) Record the bucket name for adding the replication repository to the MTC web console.
Create the ObjectBucketClaim object:
$ oc create -f obc.yml
Watch the resource creation process to verify that the ObjectBucketClaim status is Bound:
$ watch -n 30 'oc get -n openshift-storage objectbucketclaim <bucket> -o yaml'
This process can take five to ten minutes.
Obtain and record the following values, which are required when you add the replication repository to the MTC web console:
S3 endpoint:
$ oc get route -n openshift-storage s3
S3 provider access key:
$ oc get secret -n openshift-storage <bucket> \
-o go-template='{{ .data.AWS_ACCESS_KEY_ID }}' | base64 --decode
S3 provider secret access key:
$ oc get secret -n openshift-storage <bucket> \
-o go-template='{{ .data.AWS_SECRET_ACCESS_KEY }}' | base64 --decode
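If you prefer to collect these values in shell variables for later use, the following commands are one possible approach. They assume that the ObjectBucketClaim and its generated secret are both named <bucket>, as in the manifest above:
$ S3_ENDPOINT=$(oc get route -n openshift-storage s3 -o jsonpath='{.spec.host}')
$ S3_ACCESS_KEY=$(oc get secret -n openshift-storage <bucket> \
    -o go-template='{{ .data.AWS_ACCESS_KEY_ID }}' | base64 --decode)
$ S3_SECRET_KEY=$(oc get secret -n openshift-storage <bucket> \
    -o go-template='{{ .data.AWS_SECRET_ACCESS_KEY }}' | base64 --decode)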
You can configure an Amazon Web Services (AWS) S3 storage bucket as a replication repository for the Migration Toolkit for Containers (MTC).
The AWS S3 storage bucket must be accessible to the source and target clusters.
You must have the AWS CLI installed.
If you are using the snapshot copy method:
You must have access to EC2 Elastic Block Storage (EBS).
The source and target clusters must be in the same region.
The source and target clusters must have the same storage class.
The storage class must be compatible with snapshots.
Create an AWS S3 bucket:
$ aws s3api create-bucket \
--bucket <bucket> \ (1)
--region <bucket_region> (2)
(1) Specify your S3 bucket name.
(2) Specify your S3 bucket region, for example, us-east-1.
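If you specify a region other than us-east-1, the AWS CLI also requires a location constraint. The following is a hedged variant of the same command for the us-east-2 region:
$ aws s3api create-bucket \
    --bucket <bucket> \
    --region us-east-2 \
    --create-bucket-configuration LocationConstraint=us-east-2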
Create the IAM user velero:
$ aws iam create-user --user-name velero
Create an EC2 EBS snapshot policy:
$ cat > velero-ec2-snapshot-policy.json <<EOF
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"ec2:DescribeVolumes",
"ec2:DescribeSnapshots",
"ec2:CreateTags",
"ec2:CreateVolume",
"ec2:CreateSnapshot",
"ec2:DeleteSnapshot"
],
"Resource": "*"
}
]
}
EOF
Create an AWS S3 access policy for one or for all S3 buckets:
$ cat > velero-s3-policy.json <<EOF
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"s3:GetObject",
"s3:DeleteObject",
"s3:PutObject",
"s3:AbortMultipartUpload",
"s3:ListMultipartUploadParts"
],
"Resource": [
"arn:aws:s3:::<bucket>/*" (1)
]
},
{
"Effect": "Allow",
"Action": [
"s3:ListBucket",
"s3:GetBucketLocation",
"s3:ListBucketMultipartUploads"
],
"Resource": [
"arn:aws:s3:::<bucket>" (1)
]
}
]
}
EOF
(1) To grant access to a single S3 bucket, specify the bucket name. To grant access to all AWS S3 buckets, specify * instead of a bucket name, as in the following example:
"Resource": [
    "arn:aws:s3:::*"
]
Attach the EC2 EBS policy to velero:
$ aws iam put-user-policy \
--user-name velero \
--policy-name velero-ebs \
--policy-document file://velero-ec2-snapshot-policy.json
Attach the AWS S3 policy to velero:
$ aws iam put-user-policy \
--user-name velero \
--policy-name velero-s3 \
--policy-document file://velero-s3-policy.json
Create an access key for velero:
$ aws iam create-access-key --user-name velero
{
    "AccessKey": {
        "UserName": "velero",
        "Status": "Active",
        "CreateDate": "2017-07-31T22:24:41.576Z",
        "SecretAccessKey": <AWS_SECRET_ACCESS_KEY>, (1)
        "AccessKeyId": <AWS_ACCESS_KEY_ID> (1)
    }
}
(1) Record the AWS_SECRET_ACCESS_KEY and the AWS_ACCESS_KEY_ID for adding the AWS repository to the MTC web console.
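If you prefer to capture the generated key pair in shell variables instead of copying it from the JSON output, the following sketch is one option. It assumes that jq is installed, and note that it creates a new access key:
$ CREDS=$(aws iam create-access-key --user-name velero)
$ AWS_ACCESS_KEY_ID=$(echo "$CREDS" | jq -r '.AccessKey.AccessKeyId')
$ AWS_SECRET_ACCESS_KEY=$(echo "$CREDS" | jq -r '.AccessKey.SecretAccessKey')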
You can configure a Google Cloud Platform (GCP) storage bucket as a replication repository for the Migration Toolkit for Containers (MTC).
The GCP storage bucket must be accessible to the source and target clusters.
You must have the gcloud and gsutil CLI tools installed.
If you are using the snapshot copy method:
The source and target clusters must be in the same region.
The source and target clusters must have the same storage class.
The storage class must be compatible with snapshots.
Log in to Google Cloud:
$ gcloud init
Welcome! This command will take you through the configuration of gcloud.
Your current configuration has been set to: [default]
To continue, you must login. Would you like to login (Y/n)?
Set the BUCKET variable:
$ BUCKET=<bucket> (1)
(1) Specify your bucket name.
Create a storage bucket:
$ gsutil mb gs://$BUCKET/
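Optionally, confirm that the bucket was created:
$ gsutil ls -b gs://$BUCKET/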
Set the PROJECT_ID variable to your active project:
$ PROJECT_ID=`gcloud config get-value project`
Create a velero IAM service account:
$ gcloud iam service-accounts create velero \
--display-name "Velero Storage"
Create the SERVICE_ACCOUNT_EMAIL variable:
$ SERVICE_ACCOUNT_EMAIL=`gcloud iam service-accounts list \
--filter="displayName:Velero Storage" \
--format 'value(email)'`
Create the ROLE_PERMISSIONS variable:
$ ROLE_PERMISSIONS=(
compute.disks.get
compute.disks.create
compute.disks.createSnapshot
compute.snapshots.get
compute.snapshots.create
compute.snapshots.useReadOnly
compute.snapshots.delete
compute.zones.get
)
Create the velero.server custom role:
$ gcloud iam roles create velero.server \
--project $PROJECT_ID \
--title "Velero Server" \
--permissions "$(IFS=","; echo "${ROLE_PERMISSIONS[*]}")"
Add IAM policy binding to the project:
$ gcloud projects add-iam-policy-binding $PROJECT_ID \
--member serviceAccount:$SERVICE_ACCOUNT_EMAIL \
--role projects/$PROJECT_ID/roles/velero.server
Update the IAM service account:
$ gsutil iam ch serviceAccount:$SERVICE_ACCOUNT_EMAIL:objectAdmin gs://${BUCKET}
Save the IAM service account keys to the credentials-velero file in the current directory:
$ gcloud iam service-accounts keys create credentials-velero \
--iam-account $SERVICE_ACCOUNT_EMAIL
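Optionally, you can confirm that the key was created. The following check assumes the SERVICE_ACCOUNT_EMAIL variable set earlier:
$ gcloud iam service-accounts keys list --iam-account $SERVICE_ACCOUNT_EMAIL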
You can configure a Microsoft Azure Blob storage container as a replication repository for the Migration Toolkit for Containers (MTC).
You must have an Azure storage account.
You must have the Azure CLI installed.
The Azure Blob storage container must be accessible to the source and target clusters.
If you are using the snapshot copy method:
The source and target clusters must be in the same region.
The source and target clusters must have the same storage class.
The storage class must be compatible with snapshots.
Set the AZURE_RESOURCE_GROUP variable:
$ AZURE_RESOURCE_GROUP=Velero_Backups
Create an Azure resource group:
$ az group create -n $AZURE_RESOURCE_GROUP --location <CentralUS> (1)
(1) Specify your location.
Set the AZURE_STORAGE_ACCOUNT_ID variable:
$ AZURE_STORAGE_ACCOUNT_ID=velerobackups
Create an Azure storage account:
$ az storage account create \
--name $AZURE_STORAGE_ACCOUNT_ID \
--resource-group $AZURE_RESOURCE_GROUP \
--sku Standard_GRS \
--encryption-services blob \
--https-only true \
--kind BlobStorage \
--access-tier Hot
Set the BLOB_CONTAINER variable:
$ BLOB_CONTAINER=velero
Create an Azure Blob storage container:
$ az storage container create \
-n $BLOB_CONTAINER \
--public-access off \
--account-name $AZURE_STORAGE_ACCOUNT_ID
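Optionally, verify that the container exists. The following check assumes the variables set in the previous steps:
$ az storage container show \
    -n $BLOB_CONTAINER \
    --account-name $AZURE_STORAGE_ACCOUNT_ID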
Create a service principal and credentials for velero:
$ AZURE_SUBSCRIPTION_ID=`az account list --query '[?isDefault].id' -o tsv` \
AZURE_TENANT_ID=`az account list --query '[?isDefault].tenantId' -o tsv` \
AZURE_CLIENT_SECRET=`az ad sp create-for-rbac --name "velero" --role "Contributor" --query 'password' -o tsv` \
AZURE_CLIENT_ID=`az ad sp list --display-name "velero" --query '[0].appId' -o tsv`
Save the service principal credentials in the credentials-velero file:
$ cat << EOF > ./credentials-velero
AZURE_SUBSCRIPTION_ID=${AZURE_SUBSCRIPTION_ID}
AZURE_TENANT_ID=${AZURE_TENANT_ID}
AZURE_CLIENT_ID=${AZURE_CLIENT_ID}
AZURE_CLIENT_SECRET=${AZURE_CLIENT_SECRET}
AZURE_RESOURCE_GROUP=${AZURE_RESOURCE_GROUP}
AZURE_CLOUD_NAME=AzurePublicCloud
EOF