Topology Aware Lifecycle Manager supports partial Red Hat Advanced Cluster Management (RHACM) hub cluster template functions in configuration policies used with GitOps Zero Touch Provisioning (ZTP).

Hub-side cluster templates allow you to define configuration policies that can be dynamically customized for the target clusters. This reduces the need to create separate policies for many clusters that have similar configurations but different values.

Policy templates are restricted to the same namespace as the namespace where the policy is defined. This means you must create the objects referenced in the hub template in the same namespace where the policy is created.

Using PolicyGenTemplate CRs to manage and deploy policies to managed clusters will be deprecated in an upcoming OpenShift Container Platform release. Equivalent and improved functionality is available by using Red Hat Advanced Cluster Management (RHACM) and PolicyGenerator CRs.

For more information about PolicyGenerator resources, see the RHACM Policy Generator documentation.

Using RHACM hub cluster templates in configuration policies

The following supported hub template functions are available for use in GitOps ZTP with TALM:

  • fromConfigMap returns the value of the provided data key in the named ConfigMap resource.

    There is a 1 MiB size limit for ConfigMap CRs. The effective size for ConfigMap CRs is further limited by the last-applied-configuration annotation. To avoid the last-applied-configuration limitation, add the following annotation to the template ConfigMap:

    argocd.argoproj.io/sync-options: Replace=true
  • base64enc returns the base64-encoded value of the input string.

  • base64dec returns the decoded value of the base64-encoded input string.

  • indent returns the input string with added indent spaces.

  • autoindent returns the input string with added indent spaces based on the spacing used in the parent template.

  • toInt casts and returns the integer value of the input value.

  • toBool converts the input string into a boolean value, and returns the boolean.

Various open source community functions are also available for use with GitOps ZTP.
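For example, assuming that the trim community function is available in your RHACM version, you can chain it with the supported functions to normalize a value before casting it. The following sketch uses a hypothetical my-configmap ConfigMap CR in the policy namespace:

    {{hub fromConfigMap "" "my-configmap" "my-key" | trim | toInt hub}}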

Example hub templates

The following code examples are valid hub templates. Each of these templates returns values from the ConfigMap CR named test-config in the default namespace.

  • Returns the value with the key common-key:

    {{hub fromConfigMap "default" "test-config" "common-key" hub}}
  • Returns a string by using the concatenated value of the .ManagedClusterName field and the string -name:

    {{hub fromConfigMap "default" "test-config" (printf "%s-name" .ManagedClusterName) hub}}
  • Casts and returns a boolean value from the concatenated value of the .ManagedClusterName field and the string -name:

    {{hub fromConfigMap "default" "test-config" (printf "%s-name" .ManagedClusterName) | toBool hub}}
  • Casts and returns an integer value from the concatenated value of the .ManagedClusterName field and the string -name:

    {{hub (printf "%s-name" .ManagedClusterName) | fromConfigMap "default" "test-config" | toInt hub}}
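
These templates assume a ConfigMap CR similar to the following minimal sketch. The data keys and values are placeholders: a key that is read with toInt or toBool must hold a value that parses accordingly, and du-sno-1-name is a hypothetical entry for a cluster named du-sno-1:

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: test-config
      namespace: default
    data:
      common-key: "common-value"
      du-sno-1-name: "100"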

Specifying group and site configurations in group PolicyGenerator or PolicyGenTemplate CRs

You can manage the configuration of fleets of clusters with ConfigMap CRs by using hub templates to populate the group and site values in the generated policies that get applied to the managed clusters. Using hub templates in site PolicyGenerator or PolicyGenTemplate CRs means that you do not need to create a policy CR for each site.

You can group the clusters in a fleet in various categories, depending on the use case, for example, hardware type or region. Each cluster should have a label corresponding to the group or groups that the cluster is in. If you manage the configuration values for each group in different ConfigMap CRs, then you require only one group policy CR to apply the changes to all the clusters in the group by using hub templates.

The following example shows you how to use three ConfigMap CRs and one PolicyGenerator CR to apply both site and group configuration to clusters grouped by hardware type and region.

When you use the fromConfigMap function, the printf variable is only available for the template resource data key fields. You cannot use it with the name and namespace fields.
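
For example, the following hub template is supported because printf constructs the data key:

    '{{hub fromConfigMap "" "group-zones-configmap" (printf "%s-cluster-log-fwd-outputs" (index .ManagedClusterLabels "group-du-sno-zone")) hub}}'

The following hub template is not supported because printf constructs the ConfigMap name:

    '{{hub fromConfigMap "" (printf "%s-zones-configmap" (index .ManagedClusterLabels "group-du-sno-zone")) "cluster-log-fwd-outputs" hub}}'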

Prerequisites
  • You have installed the OpenShift CLI (oc).

  • You have logged in to the hub cluster as a user with cluster-admin privileges.

  • You have created a Git repository where you manage your custom site configuration data. The repository must be accessible from the hub cluster and be defined as a source repository for the GitOps ZTP ArgoCD application.

Procedure
  1. Create three ConfigMap CRs that contain the group and site configuration:

    1. Create a ConfigMap CR named group-hardware-types-configmap to hold the hardware-specific configuration. For example:

      apiVersion: v1
      kind: ConfigMap
      metadata:
        name: group-hardware-types-configmap
        namespace: ztp-group
        annotations:
          argocd.argoproj.io/sync-options: Replace=true (1)
      data:
        # SriovNetworkNodePolicy.yaml
        hardware-type-1-sriov-node-policy-pfNames-1: "[\"ens5f0\"]"
        hardware-type-1-sriov-node-policy-pfNames-2: "[\"ens7f0\"]"
        # PerformanceProfile.yaml
        hardware-type-1-cpu-isolated: "2-31,34-63"
        hardware-type-1-cpu-reserved: "0-1,32-33"
        hardware-type-1-hugepages-default: "1G"
        hardware-type-1-hugepages-size: "1G"
        hardware-type-1-hugepages-count: "32"
      1 The argocd.argoproj.io/sync-options annotation is required only if the ConfigMap is larger than 1 MiB in size.
    2. Create a ConfigMap CR named group-zones-configmap to hold the regional configuration. For example:

      apiVersion: v1
      kind: ConfigMap
      metadata:
        name: group-zones-configmap
        namespace: ztp-group
      data:
        # ClusterLogForwarder.yaml
        zone-1-cluster-log-fwd-outputs: "[{\"type\":\"kafka\", \"name\":\"kafka-open\", \"url\":\"tcp://10.46.55.190:9092/test\"}]"
        zone-1-cluster-log-fwd-pipelines: "[{\"inputRefs\":[\"audit\", \"infrastructure\"], \"labels\": {\"label1\": \"test1\", \"label2\": \"test2\", \"label3\": \"test3\", \"label4\": \"test4\"}, \"name\": \"all-to-default\", \"outputRefs\": [\"kafka-open\"]}]"
    3. Create a ConfigMap CR named site-data-configmap to hold the site-specific configuration. For example:

      apiVersion: v1
      kind: ConfigMap
      metadata:
        name: site-data-configmap
        namespace: ztp-group
      data:
        # SriovNetwork.yaml
        du-sno-1-zone-1-sriov-network-vlan-1: "140"
        du-sno-1-zone-1-sriov-network-vlan-2: "150"

    Each ConfigMap CR must be in the same namespace as the policy to be generated from the group PolicyGenerator CR.

  2. Commit the ConfigMap CRs in Git, and then push to the Git repository being monitored by the Argo CD application.
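
    Optionally, verify that the ConfigMap CRs exist on the hub cluster after the Argo CD application syncs them by running the following command:

    $ oc get configmaps -n ztp-group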

  3. Apply the hardware type and region labels to the clusters. The following example command applies labels to a single cluster named du-sno-1-zone-1; the chosen labels are "hardware-type": "hardware-type-1" and "group-du-sno-zone": "zone-1":

    $ oc patch managedclusters.cluster.open-cluster-management.io/du-sno-1-zone-1 --type merge -p '{"metadata":{"labels":{"hardware-type": "hardware-type-1", "group-du-sno-zone": "zone-1"}}}'
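
    Verify that the labels are applied by running the following command:

    $ oc get managedcluster du-sno-1-zone-1 --show-labels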
  4. Depending on your requirements, create a group PolicyGenerator or PolicyGenTemplate CR that uses hub templates to obtain the required data from the ConfigMap objects:

    1. Create a group PolicyGenerator CR. This example PolicyGenerator CR configures logging, VLAN IDs, NICs, and the performance profile for the clusters that match the labels listed under the policyDefaults.placement field:

      ---
      apiVersion: policy.open-cluster-management.io/v1
      kind: PolicyGenerator
      metadata:
          name: group-du-sno-pgt
      placementBindingDefaults:
          name: group-du-sno-pgt-placement-binding
      policyDefaults:
          placement:
              labelSelector:
                  matchExpressions:
                      - key: group-du-sno-zone
                        operator: In
                        values:
                          - zone-1
                      - key: hardware-type
                        operator: In
                        values:
                          - hardware-type-1
          remediationAction: inform
          severity: low
          namespaceSelector:
              exclude:
                  - kube-*
              include:
                  - '*'
          evaluationInterval:
              compliant: 10m
              noncompliant: 10s
      policies:
          - name: group-du-sno-pgt-group-du-sno-cfg-policy
            policyAnnotations:
              ran.openshift.io/ztp-deploy-wave: "10"
            manifests:
              - path: source-crs/ClusterLogForwarder.yaml
                patches:
                  - spec:
                      outputs: '{{hub fromConfigMap "" "group-zones-configmap" (printf "%s-cluster-log-fwd-outputs" (index .ManagedClusterLabels "group-du-sno-zone")) | toLiteral hub}}'
                      pipelines: '{{hub fromConfigMap "" "group-zones-configmap" (printf "%s-cluster-log-fwd-pipelines" (index .ManagedClusterLabels "group-du-sno-zone")) | toLiteral hub}}'
              - path: source-crs/PerformanceProfile-MCP-master.yaml
                patches:
                  - metadata:
                      name: openshift-node-performance-profile
                    spec:
                      additionalKernelArgs:
                          - rcupdate.rcu_normal_after_boot=0
                          - vfio_pci.enable_sriov=1
                          - vfio_pci.disable_idle_d3=1
                          - efi=runtime
                      cpu:
                          isolated: '{{hub fromConfigMap "" "group-hardware-types-configmap" (printf "%s-cpu-isolated" (index .ManagedClusterLabels "hardware-type")) hub}}'
                          reserved: '{{hub fromConfigMap "" "group-hardware-types-configmap" (printf "%s-cpu-reserved" (index .ManagedClusterLabels "hardware-type")) hub}}'
                      hugepages:
                          defaultHugepagesSize: '{{hub fromConfigMap "" "group-hardware-types-configmap" (printf "%s-hugepages-default" (index .ManagedClusterLabels "hardware-type")) hub}}'
                          pages:
                              - count: '{{hub fromConfigMap "" "group-hardware-types-configmap" (printf "%s-hugepages-count" (index .ManagedClusterLabels "hardware-type")) | toInt hub}}'
                                size: '{{hub fromConfigMap "" "group-hardware-types-configmap" (printf "%s-hugepages-size" (index .ManagedClusterLabels "hardware-type")) hub}}'
                      realTimeKernel:
                          enabled: true
          - name: group-du-sno-pgt-group-du-sno-sriov-policy
            policyAnnotations:
              ran.openshift.io/ztp-deploy-wave: "100"
            manifests:
              - path: source-crs/SriovNetwork.yaml
                patches:
                  - metadata:
                      name: sriov-nw-du-fh
                    spec:
                      resourceName: du_fh
                      vlan: '{{hub fromConfigMap "" "site-data-configmap" (printf "%s-sriov-network-vlan-1" .ManagedClusterName) | toInt hub}}'
              - path: source-crs/SriovNetworkNodePolicy-MCP-master.yaml
                patches:
                  - metadata:
                      name: sriov-nnp-du-fh
                    spec:
                      deviceType: netdevice
                      isRdma: false
                      nicSelector:
                          pfNames: '{{hub fromConfigMap "" "group-hardware-types-configmap" (printf "%s-sriov-node-policy-pfNames-1" (index .ManagedClusterLabels "hardware-type")) | toLiteral hub}}'
                      numVfs: 8
                      priority: 10
                      resourceName: du_fh
              - path: source-crs/SriovNetwork.yaml
                patches:
                  - metadata:
                      name: sriov-nw-du-mh
                    spec:
                      resourceName: du_mh
                      vlan: '{{hub fromConfigMap "" "site-data-configmap" (printf "%s-sriov-network-vlan-2" .ManagedClusterName) | toInt hub}}'
              - path: source-crs/SriovNetworkNodePolicy-MCP-master.yaml
                patches:
                  - metadata:
                      name: sriov-nnp-du-mh
                    spec:
                      deviceType: netdevice
                      isRdma: false
                      nicSelector:
                          pfNames: '{{hub fromConfigMap "" "group-hardware-types-configmap" (printf "%s-sriov-node-policy-pfNames-2" (index .ManagedClusterLabels "hardware-type")) | toLiteral hub}}'
                      numVfs: 8
                      priority: 10
                      resourceName: du_mh
    2. Create a group PolicyGenTemplate CR. This example PolicyGenTemplate CR configures logging, VLAN IDs, NICs, and the performance profile for the clusters that match the labels listed under the spec.bindingRules field:

      apiVersion: ran.openshift.io/v1
      kind: PolicyGenTemplate
      metadata:
        name: group-du-sno-pgt
        namespace: ztp-group
      spec:
        bindingRules:
          # These policies will correspond to all clusters with these labels
          group-du-sno-zone: "zone-1"
          hardware-type: "hardware-type-1"
        mcp: "master"
        sourceFiles:
          - fileName: ClusterLogForwarder.yaml # wave 10
            policyName: "group-du-sno-cfg-policy"
            spec:
              outputs: '{{hub fromConfigMap "" "group-zones-configmap" (printf "%s-cluster-log-fwd-outputs" (index .ManagedClusterLabels "group-du-sno-zone")) | toLiteral hub}}'
              pipelines: '{{hub fromConfigMap "" "group-zones-configmap" (printf "%s-cluster-log-fwd-pipelines" (index .ManagedClusterLabels "group-du-sno-zone")) | toLiteral hub}}'
      
          - fileName: PerformanceProfile.yaml # wave 10
            policyName: "group-du-sno-cfg-policy"
            metadata:
              name: openshift-node-performance-profile
            spec:
              additionalKernelArgs:
              - rcupdate.rcu_normal_after_boot=0
              - vfio_pci.enable_sriov=1
              - vfio_pci.disable_idle_d3=1
              - efi=runtime
              cpu:
                isolated: '{{hub fromConfigMap "" "group-hardware-types-configmap" (printf "%s-cpu-isolated" (index .ManagedClusterLabels "hardware-type")) hub}}'
                reserved: '{{hub fromConfigMap "" "group-hardware-types-configmap" (printf "%s-cpu-reserved" (index .ManagedClusterLabels "hardware-type")) hub}}'
              hugepages:
                defaultHugepagesSize: '{{hub fromConfigMap "" "group-hardware-types-configmap" (printf "%s-hugepages-default" (index .ManagedClusterLabels "hardware-type")) hub}}'
                pages:
                  - size: '{{hub fromConfigMap "" "group-hardware-types-configmap" (printf "%s-hugepages-size" (index .ManagedClusterLabels "hardware-type")) hub}}'
                    count: '{{hub fromConfigMap "" "group-hardware-types-configmap" (printf "%s-hugepages-count" (index .ManagedClusterLabels "hardware-type")) | toInt hub}}'
              realTimeKernel:
                enabled: true
      
          - fileName: SriovNetwork.yaml # wave 100
            policyName: "group-du-sno-sriov-policy"
            metadata:
              name: sriov-nw-du-fh
            spec:
              resourceName: du_fh
              vlan: '{{hub fromConfigMap "" "site-data-configmap" (printf "%s-sriov-network-vlan-1" .ManagedClusterName) | toInt hub}}'
      
          - fileName: SriovNetworkNodePolicy.yaml # wave 100
            policyName: "group-du-sno-sriov-policy"
            metadata:
              name: sriov-nnp-du-fh
            spec:
              deviceType: netdevice
              isRdma: false
              nicSelector:
                pfNames: '{{hub fromConfigMap "" "group-hardware-types-configmap" (printf "%s-sriov-node-policy-pfNames-1" (index .ManagedClusterLabels "hardware-type")) | toLiteral hub}}'
              numVfs: 8
              priority: 10
              resourceName: du_fh
      
          - fileName: SriovNetwork.yaml # wave 100
            policyName: "group-du-sno-sriov-policy"
            metadata:
              name: sriov-nw-du-mh
            spec:
              resourceName: du_mh
              vlan: '{{hub fromConfigMap "" "site-data-configmap" (printf "%s-sriov-network-vlan-2" .ManagedClusterName) | toInt hub}}'
      
          - fileName: SriovNetworkNodePolicy.yaml # wave 100
            policyName: "group-du-sno-sriov-policy"
            metadata:
              name: sriov-nnp-du-mh
            spec:
              deviceType: netdevice
              isRdma: false
              nicSelector:
                pfNames: '{{hub fromConfigMap "" "group-hardware-types-configmap" (printf "%s-sriov-node-policy-pfNames-2" (index .ManagedClusterLabels "hardware-type")) | toLiteral hub}}'
              numVfs: 8
              priority: 10
              resourceName: du_mh

    To retrieve site-specific configuration values, use the .ManagedClusterName field. This is a template context value set to the name of the target managed cluster.

    To retrieve group-specific configuration, use the .ManagedClusterLabels field. This is a template context value that is set to the labels of the managed cluster.
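
    For example, for a cluster that has the hardware-type: hardware-type-1 label, the cpu section of the PerformanceProfile patch resolves to the following values from the group-hardware-types-configmap CR:

      cpu:
        isolated: "2-31,34-63"
        reserved: "0-1,32-33"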

  5. Commit the group PolicyGenerator or PolicyGenTemplate CR in Git, and then push to the Git repository that is monitored by the Argo CD application.

    Subsequent changes to the referenced ConfigMap CR are not automatically synced to the applied policies. You need to manually sync the new ConfigMap changes to update the existing policies. See "Syncing new ConfigMap changes to existing PolicyGenerator or PolicyGenTemplate CRs".

    You can use the same PolicyGenerator or PolicyGenTemplate CR for multiple clusters. If there is a configuration change, then the only modifications you need to make are to the ConfigMap objects that hold the configuration for each cluster and the labels of the managed clusters.

Syncing new ConfigMap changes to existing PolicyGenerator or PolicyGenTemplate CRs

Prerequisites
  • You have installed the OpenShift CLI (oc).

  • You have logged in to the hub cluster as a user with cluster-admin privileges.

  • You have created a PolicyGenerator or PolicyGenTemplate CR that pulls information from a ConfigMap CR by using hub cluster templates.

Procedure
  1. Update the contents of your ConfigMap CR, and apply the changes in the hub cluster.

  2. To sync the contents of the updated ConfigMap CR to the deployed policy, do either of the following:

    1. Option 1: Delete the existing policy. Argo CD uses the PolicyGenerator or PolicyGenTemplate CR to immediately recreate the deleted policy. For example, run the following command:

      $ oc delete policy <policy_name> -n <policy_namespace>
    2. Option 2: Apply the special policy.open-cluster-management.io/trigger-update annotation to the policy with a different value each time you update the ConfigMap. For example:

      $ oc annotate policy <policy_name> -n <policy_namespace> policy.open-cluster-management.io/trigger-update="1"

      You must apply the updated policy for the changes to take effect. For more information, see Special annotation for reprocessing.
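
      For each subsequent ConfigMap update, change the annotation value so that the policy is reprocessed. For example, run the following command:

      $ oc annotate policy <policy_name> -n <policy_namespace> policy.open-cluster-management.io/trigger-update="2" --overwrite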

  3. Optional: If it exists, delete the ClusterGroupUpgrade CR that contains the policy. For example:

    $ oc delete clustergroupupgrade <cgu_name> -n <cgu_namespace>
    1. Create a new ClusterGroupUpgrade CR that includes the policy to apply with the updated ConfigMap changes. For example, add the following YAML to the file cgr-example.yaml:

      apiVersion: ran.openshift.io/v1alpha1
      kind: ClusterGroupUpgrade
      metadata:
        name: <cgr_name>
        namespace: <policy_namespace>
      spec:
        managedPolicies:
          - <managed_policy>
        enable: true
        clusters:
        - <managed_cluster_1>
        - <managed_cluster_2>
        remediationStrategy:
          maxConcurrency: 2
          timeout: 240
    2. Apply the ClusterGroupUpgrade CR by running the following command:

      $ oc apply -f cgr-example.yaml
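
      Optionally, monitor the status of the ClusterGroupUpgrade CR by running the following command:

      $ oc get clustergroupupgrade <cgr_name> -n <policy_namespace> -o jsonpath='{.status.conditions}'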