Installing the Lifecycle Agent by using the CLI

You can use the OpenShift CLI (oc) to install the Lifecycle Agent.

Prerequisites
  • Install the OpenShift CLI (oc).

  • Log in as a user with cluster-admin privileges.

Procedure
  1. Create a Namespace object YAML file for the Lifecycle Agent, for example, lcao-namespace.yaml:

    apiVersion: v1
    kind: Namespace
    metadata:
      name: openshift-lifecycle-agent
      annotations:
        workload.openshift.io/allowed: management
    1. Create the Namespace CR by running the following command:

      $ oc create -f lcao-namespace.yaml
  2. Create an OperatorGroup object YAML file for the Lifecycle Agent, for example, lcao-operatorgroup.yaml:

    apiVersion: operators.coreos.com/v1
    kind: OperatorGroup
    metadata:
      name: openshift-lifecycle-agent
      namespace: openshift-lifecycle-agent
    spec:
      targetNamespaces:
      - openshift-lifecycle-agent
    1. Create the OperatorGroup CR by running the following command:

      $ oc create -f lcao-operatorgroup.yaml
  3. Create a Subscription CR, for example, lcao-subscription.yaml:

    apiVersion: operators.coreos.com/v1alpha1
    kind: Subscription
    metadata:
      name: openshift-lifecycle-agent-subscription
      namespace: openshift-lifecycle-agent
    spec:
      channel: "stable"
      name: lifecycle-agent
      source: redhat-operators
      sourceNamespace: openshift-marketplace
    1. Create the Subscription CR by running the following command:

      $ oc create -f lcao-subscription.yaml
Verification
  1. To verify that the installation succeeded, inspect the CSV resource by running the following command:

    $ oc get csv -n openshift-lifecycle-agent
    Example output
    NAME                              DISPLAY                     VERSION               REPLACES                           PHASE
    lifecycle-agent.v4.16.0           Openshift Lifecycle Agent   4.16.0                Succeeded
  2. Verify that the Lifecycle Agent is up and running by running the following command:

    $ oc get deploy -n openshift-lifecycle-agent
    Example output
    NAME                                 READY   UP-TO-DATE   AVAILABLE   AGE
    lifecycle-agent-controller-manager   1/1     1            1           14s
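
If you script the installation, you can block until the Operator reports success instead of polling manually. The following is a minimal sketch, assuming that the CSV name matches the version shown in the example output:

    $ oc wait csv/lifecycle-agent.v4.16.0 -n openshift-lifecycle-agent \
        --for=jsonpath='{.status.phase}'=Succeeded --timeout=300s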

Installing the Lifecycle Agent by using the web console

You can use the OpenShift Container Platform web console to install the Lifecycle Agent.

Prerequisites
  • Log in as a user with cluster-admin privileges.

Procedure
  1. In the OpenShift Container Platform web console, navigate to Operators → OperatorHub.

  2. Search for the Lifecycle Agent from the list of available Operators, and then click Install.

  3. On the Install Operator page, under A specific namespace on the cluster, select openshift-lifecycle-agent.

  4. Click Install.

Verification
  1. To confirm that the installation is successful:

    1. Click Operators → Installed Operators.

    2. Ensure that the Lifecycle Agent is listed in the openshift-lifecycle-agent project with a Status of InstallSucceeded.

      During installation, an Operator might display a Failed status. If the installation later succeeds with an InstallSucceeded message, you can ignore the earlier Failed status.

If the Operator is not installed successfully:

  1. Click Operators → Installed Operators, and inspect the Operator Subscriptions and Install Plans tabs for any failures or errors under Status.

  2. Click Workloads → Pods, and check the logs for pods in the openshift-lifecycle-agent project.
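
You can run the same checks from the CLI. For example, to inspect the controller logs, assuming the deployment name shown in the CLI verification earlier in this document:

    $ oc get pods -n openshift-lifecycle-agent
    $ oc logs deployment/lifecycle-agent-controller-manager -n openshift-lifecycle-agent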

Installing the Lifecycle Agent with GitOps ZTP

Install the Lifecycle Agent with GitOps Zero Touch Provisioning (ZTP) to perform an image-based upgrade.

Prerequisites
  • Create a directory called custom-crs in the source-crs directory. The source-crs directory must be in the same location as the kustomization.yaml file.

Procedure
  1. Create the following CRs in the openshift-lifecycle-agent namespace and push them to the source-crs/custom-crs directory:

    Example LcaSubscriptionNS.yaml file
    apiVersion: v1
    kind: Namespace
    metadata:
      name: openshift-lifecycle-agent
      annotations:
        workload.openshift.io/allowed: management
        ran.openshift.io/ztp-deploy-wave: "2"
      labels:
        kubernetes.io/metadata.name: openshift-lifecycle-agent
    Example LcaSubscriptionOperGroup.yaml file
    apiVersion: operators.coreos.com/v1
    kind: OperatorGroup
    metadata:
      name: lifecycle-agent-operatorgroup
      namespace: openshift-lifecycle-agent
      annotations:
        ran.openshift.io/ztp-deploy-wave: "2"
    spec:
      targetNamespaces:
        - openshift-lifecycle-agent
    Example LcaSubscription.yaml file
    apiVersion: operators.coreos.com/v1alpha1
    kind: Subscription
    metadata:
      name: lifecycle-agent
      namespace: openshift-lifecycle-agent
      annotations:
        ran.openshift.io/ztp-deploy-wave: "2"
    spec:
      channel: "stable"
      name: lifecycle-agent
      source: redhat-operators
      sourceNamespace: openshift-marketplace
      installPlanApproval: Manual
    status:
      state: AtLatestKnown
    Example directory structure
    ├── kustomization.yaml
    ├── sno
    │   ├── example-cnf.yaml
    │   ├── common-ranGen.yaml
    │   ├── group-du-sno-ranGen.yaml
    │   ├── group-du-sno-validator-ranGen.yaml
    │   └── ns.yaml
    ├── source-crs
    │   ├── custom-crs
    │   │   ├── LcaSubscriptionNS.yaml
    │   │   ├── LcaSubscriptionOperGroup.yaml
    │   │   └── LcaSubscription.yaml
  2. Add the CRs to your common PolicyGenTemplate:

    apiVersion: ran.openshift.io/v1
    kind: PolicyGenTemplate
    metadata:
      name: "example-common-latest"
      namespace: "ztp-common"
    spec:
      bindingRules:
        common: "true"
        du-profile: "latest"
      sourceFiles:
        - fileName: custom-crs/LcaSubscriptionNS.yaml
          policyName: "subscriptions-policy"
        - fileName: custom-crs/LcaSubscriptionOperGroup.yaml
          policyName: "subscriptions-policy"
        - fileName: custom-crs/LcaSubscription.yaml
          policyName: "subscriptions-policy"
    [...]
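
    After Argo CD syncs the change, you can confirm that the generated policy exists on the hub cluster. The following is a minimal check, which assumes that the RHACM policy CRDs are available on the hub and that the policy is generated in the ztp-common namespace:

      $ oc get policies -n ztp-common | grep subscriptions-policy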

Installing and configuring the OADP Operator with GitOps ZTP

Install and configure the OADP Operator with GitOps ZTP before starting the upgrade.

Prerequisites
  • Create a directory called custom-crs in the source-crs directory. The source-crs directory must be in the same location as the kustomization.yaml file.

Procedure
  1. Create the following CRs in the openshift-adp namespace and push them to the source-crs/custom-crs directory:

    Example OadpSubscriptionNS.yaml file
    apiVersion: v1
    kind: Namespace
    metadata:
      name: openshift-adp
      annotations:
        ran.openshift.io/ztp-deploy-wave: "2"
      labels:
        kubernetes.io/metadata.name: openshift-adp
    Example OadpSubscriptionOperGroup.yaml file
    apiVersion: operators.coreos.com/v1
    kind: OperatorGroup
    metadata:
      name: redhat-oadp-operator
      namespace: openshift-adp
      annotations:
        ran.openshift.io/ztp-deploy-wave: "2"
    spec:
      targetNamespaces:
      - openshift-adp
    Example OadpSubscription.yaml file
    apiVersion: operators.coreos.com/v1alpha1
    kind: Subscription
    metadata:
      name: redhat-oadp-operator
      namespace: openshift-adp
      annotations:
        ran.openshift.io/ztp-deploy-wave: "2"
    spec:
      channel: stable-1.3
      name: redhat-oadp-operator
      source: redhat-operators
      sourceNamespace: openshift-marketplace
      installPlanApproval: Manual
    status:
      state: AtLatestKnown
    Example OadpOperatorStatus.yaml file
    apiVersion: operators.coreos.com/v1
    kind: Operator
    metadata:
      name: redhat-oadp-operator.openshift-adp
      annotations:
        ran.openshift.io/ztp-deploy-wave: "2"
    status:
      components:
        refs:
        - kind: Subscription
          namespace: openshift-adp
          conditions:
          - type: CatalogSourcesUnhealthy
            status: "False"
        - kind: InstallPlan
          namespace: openshift-adp
          conditions:
          - type: Installed
            status: "True"
        - kind: ClusterServiceVersion
          namespace: openshift-adp
          conditions:
          - type: Succeeded
            status: "True"
            reason: InstallSucceeded
    Example directory structure
    ├── kustomization.yaml
    ├── sno
    │   ├── example-cnf.yaml
    │   ├── common-ranGen.yaml
    │   ├── group-du-sno-ranGen.yaml
    │   ├── group-du-sno-validator-ranGen.yaml
    │   └── ns.yaml
    ├── source-crs
    │   ├── custom-crs
    │   │   ├── OadpSubscriptionNS.yaml
    │   │   ├── OadpSubscriptionOperGroup.yaml
    │   │   ├── OadpSubscription.yaml
    │   │   └── OadpOperatorStatus.yaml
  2. Add the CRs to your common PolicyGenTemplate:

    apiVersion: ran.openshift.io/v1
    kind: PolicyGenTemplate
    metadata:
      name: "example-common-latest"
      namespace: "ztp-common"
    spec:
      bindingRules:
        common: "true"
        du-profile: "latest"
      sourceFiles:
        - fileName: custom-crs/OadpSubscriptionNS.yaml
          policyName: "subscriptions-policy"
        - fileName: custom-crs/OadpSubscriptionOperGroup.yaml
          policyName: "subscriptions-policy"
        - fileName: custom-crs/OadpSubscription.yaml
          policyName: "subscriptions-policy"
        - fileName: custom-crs/OadpOperatorStatus.yaml
          policyName: "subscriptions-policy"
    [...]
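
    After the policy is enforced on a target cluster, you can verify that the Subscription reached the state that the status stanza checks for. A minimal sketch, run against the target cluster:

      $ oc get subscription redhat-oadp-operator -n openshift-adp \
          -o jsonpath='{.status.state}{"\n"}'

    The expected output is AtLatestKnown.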
  3. Create the DataProtectionApplication CR and the S3 secret:

    1. Create the following CRs in your source-crs/custom-crs directory:

      Example DataProtectionApplication.yaml file
      apiVersion: oadp.openshift.io/v1alpha1
      kind: DataProtectionApplication
      metadata:
        name: dataprotectionapplication
        namespace: openshift-adp
        annotations:
          ran.openshift.io/ztp-deploy-wave: "100"
      spec:
        configuration:
          restic:
            enable: false (1)
          velero:
            defaultPlugins:
              - aws
              - openshift
            resourceTimeout: 10m
        backupLocations:
          - velero:
              config:
                profile: "default"
                region: minio
                s3Url: $url
                insecureSkipTLSVerify: "true"
                s3ForcePathStyle: "true"
              provider: aws
              default: true
              credential:
                key: cloud
                name: cloud-credentials
              objectStorage:
                bucket: $bucketName (2)
                prefix: $prefixName (2)
      status:
        conditions:
        - reason: Complete
          status: "True"
          type: Reconciled
      1 The spec.configuration.restic.enable field must be set to false for an image-based upgrade because persistent volume contents are retained and reused after the upgrade.
      2 The bucket field defines the name of the bucket that is created in the S3 backend. The prefix field defines the name of the subdirectory that is automatically created in the bucket. The combination of bucket and prefix must be unique for each target cluster to avoid interference between clusters. To ensure a unique storage directory for each target cluster, you can use the RHACM hub template function, for example, prefix: {{hub .ManagedClusterName hub}}.
      Example OadpSecret.yaml file
      apiVersion: v1
      kind: Secret
      metadata:
        name: cloud-credentials
        namespace: openshift-adp
        annotations:
          ran.openshift.io/ztp-deploy-wave: "100"
      type: Opaque
      Example OadpBackupStorageLocationStatus.yaml file
      apiVersion: velero.io/v1
      kind: BackupStorageLocation
      metadata:
        namespace: openshift-adp
        annotations:
          ran.openshift.io/ztp-deploy-wave: "100"
      status:
        phase: Available

      The OadpBackupStorageLocationStatus.yaml CR verifies the availability of backup storage locations created by OADP.

    2. Add the CRs to your site PolicyGenTemplate with overrides:

      apiVersion: ran.openshift.io/v1
      kind: PolicyGenTemplate
      metadata:
        name: "example-cnf"
        namespace: "ztp-site"
      spec:
        bindingRules:
          sites: "example-cnf"
          du-profile: "latest"
        mcp: "master"
        sourceFiles:
          ...
          - fileName: custom-crs/OadpSecret.yaml
            policyName: "config-policy"
            data:
              cloud: <your_credentials> (1)
          - fileName: custom-crs/DataProtectionApplication.yaml
            policyName: "config-policy"
            spec:
              backupLocations:
                - velero:
                    config:
                      region: minio
                      s3Url: <your_S3_URL> (2)
                      profile: "default"
                      insecureSkipTLSVerify: "true"
                      s3ForcePathStyle: "true"
                    provider: aws
                    default: true
                    credential:
                      key: cloud
                      name: cloud-credentials
                    objectStorage:
                      bucket: <your_bucket_name> (3)
                      prefix: <cluster_name> (3)
          - fileName: custom-crs/OadpBackupStorageLocationStatus.yaml
            policyName: "config-policy"
      1 Specify your credentials for your S3 storage backend. The value must be base64 encoded; see the sketch after these callouts.
      2 Specify the URL for your S3-compatible bucket.
      3 The bucket field defines the name of the bucket that is created in the S3 backend. The prefix field defines the name of the subdirectory that is automatically created in the bucket. The combination of bucket and prefix must be unique for each target cluster to avoid interference between clusters. To ensure a unique storage directory for each target cluster, you can use the RHACM hub template function, for example, prefix: {{hub .ManagedClusterName hub}}.
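
      Because the Secret supplies the credentials through the data field, the value of the cloud key must be the base64-encoded contents of an AWS-style credentials file. The following is a minimal sketch with placeholder keys; base64 -w0 is the GNU coreutils option that prevents line wrapping:

        $ cat << EOF > credentials
        [default]
        aws_access_key_id = <access_key>
        aws_secret_access_key = <secret_key>
        EOF
        $ base64 -w0 credentials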