Topology Aware Lifecycle Manager supports Red Hat Advanced Cluster Management (RHACM) hub cluster template functions in configuration policies used with GitOps Zero Touch Provisioning (ZTP).
Hub-side cluster templates allow you to define configuration policies that can be dynamically customized to the target clusters. This reduces the need to create separate policies for many clusters with similar configurations but with different values.
Policy templates are restricted to the same namespace as the policy in which they are defined. This means you must create the objects referenced in the hub template in the same namespace where the policy is created.
You can manage the configuration of fleets of clusters with ConfigMap CRs by using hub templates to populate the group and site values in the generated policies that get applied to the managed clusters. Using hub templates in site PolicyGenerator or PolicyGenTemplate CRs means that you do not need to create a policy CR for each site.
You can group the clusters in a fleet in various categories, depending on the use case, for example hardware type or region. Each cluster should have a label corresponding to the group or groups that the cluster is in. If you manage the configuration values for each group in different ConfigMap CRs, then you require only one group policy CR to apply the changes to all the clusters in the group by using hub templates.
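For example, a group value can be pulled from a ConfigMap CR at policy generation time with a fromConfigMap hub template of the following general form. This is a minimal sketch; the field name, ConfigMap name, key suffix, and group label are placeholders that the complete example below fills in with real values:
someField: '{{hub fromConfigMap "" "<configmap_name>" (printf "%s-<key_suffix>" (index .ManagedClusterLabels "<group_label>")) hub}}'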
The following example shows you how to use three ConfigMap CRs and one PolicyGenerator CR to apply both site and group configuration to clusters grouped by hardware type and region.
There is a 1 MiB size limit (see the Kubernetes documentation) for ConfigMap CRs. The effective size of a ConfigMap CR is further limited by the last-applied-configuration annotation. To avoid this limitation, add the following annotation to the ConfigMap: argocd.argoproj.io/sync-options: Replace=true
You have installed the OpenShift CLI (oc).
You have logged in to the hub cluster as a user with cluster-admin privileges.
You have created a Git repository where you manage your custom site configuration data. The repository must be accessible from the hub cluster and be defined as a source repository for the GitOps ZTP ArgoCD application.
Create three ConfigMap CRs that contain the group and site configuration:
Create a ConfigMap CR named group-hardware-types-configmap to hold the hardware-specific configuration. For example:
apiVersion: v1
kind: ConfigMap
metadata:
  name: group-hardware-types-configmap
  namespace: ztp-group
  annotations:
    argocd.argoproj.io/sync-options: Replace=true (1)
data:
  # SriovNetworkNodePolicy.yaml
  hardware-type-1-sriov-node-policy-pfNames-1: "[\"ens5f0\"]"
  hardware-type-1-sriov-node-policy-pfNames-2: "[\"ens7f0\"]"
  # PerformanceProfile.yaml
  hardware-type-1-cpu-isolated: "2-31,34-63"
  hardware-type-1-cpu-reserved: "0-1,32-33"
  hardware-type-1-hugepages-default: "1G"
  hardware-type-1-hugepages-size: "1G"
  hardware-type-1-hugepages-count: "32"
1 The argocd.argoproj.io/sync-options annotation is required only if the ConfigMap is larger than 1 MiB in size.
Create a ConfigMap CR named group-zones-configmap to hold the regional configuration. For example:
apiVersion: v1
kind: ConfigMap
metadata:
  name: group-zones-configmap
  namespace: ztp-group
data:
  # ClusterLogForwarder.yaml
  zone-1-cluster-log-fwd-outputs: "[{\"type\":\"kafka\", \"name\":\"kafka-open\", \"url\":\"tcp://10.46.55.190:9092/test\"}]"
  zone-1-cluster-log-fwd-pipelines: "[{\"inputRefs\":[\"audit\", \"infrastructure\"], \"labels\": {\"label1\": \"test1\", \"label2\": \"test2\", \"label3\": \"test3\", \"label4\": \"test4\"}, \"name\": \"all-to-default\", \"outputRefs\": [\"kafka-open\"]}]"
Create a ConfigMap CR named site-data-configmap to hold the site-specific configuration. For example:
apiVersion: v1
kind: ConfigMap
metadata:
  name: site-data-configmap
  namespace: ztp-group
data:
  # SriovNetwork.yaml
  du-sno-1-zone-1-sriov-network-vlan-1: "140"
  du-sno-1-zone-1-sriov-network-vlan-2: "150"
Each ConfigMap CR must be in the same namespace as the policy that references it, in this example the ztp-group namespace.
Commit the ConfigMap CRs in Git, and then push to the Git repository that is monitored by the Argo CD application.
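Optional: after Argo CD syncs the ConfigMap CRs to the hub cluster, you can confirm that a key holds the expected value. For example, the following command uses the site key defined above and should return the VLAN ID 140:
$ oc get configmap site-data-configmap -n ztp-group -o jsonpath='{.data.du-sno-1-zone-1-sriov-network-vlan-1}'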
Apply the hardware type and region labels to the clusters. The following command applies the labels "hardware-type": "hardware-type-1" and "group-du-sno-zone": "zone-1" to a single cluster named du-sno-1-zone-1:
$ oc patch managedclusters.cluster.open-cluster-management.io/du-sno-1-zone-1 --type merge -p '{"metadata":{"labels":{"hardware-type": "hardware-type-1", "group-du-sno-zone": "zone-1"}}}'
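Optional: verify that the labels were applied to the managed cluster by running the following command:
$ oc get managedcluster du-sno-1-zone-1 --show-labels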
Depending on your requirements, create a group PolicyGenerator or PolicyGenTemplate CR that uses hub templates to obtain the required data from the ConfigMap objects:
Create a group PolicyGenerator CR. This example PolicyGenerator CR configures logging, VLAN IDs, NICs, and a performance profile for the clusters that match the labels listed under the policyDefaults.placement field:
---
apiVersion: policy.open-cluster-management.io/v1
kind: PolicyGenerator
metadata:
  name: group-du-sno-pgt
placementBindingDefaults:
  name: group-du-sno-pgt-placement-binding
policyDefaults:
  placement:
    labelSelector:
      matchExpressions:
        - key: group-du-sno-zone
          operator: In
          values:
            - zone-1
        - key: hardware-type
          operator: In
          values:
            - hardware-type-1
  remediationAction: inform
  severity: low
  namespaceSelector:
    exclude:
      - kube-*
    include:
      - '*'
  evaluationInterval:
    compliant: 10m
    noncompliant: 10s
policies:
  - name: group-du-sno-pgt-group-du-sno-cfg-policy
    policyAnnotations:
      ran.openshift.io/ztp-deploy-wave: "10"
    manifests:
      - path: source-crs/ClusterLogForwarder.yaml
        patches:
          - spec:
              outputs: '{{hub fromConfigMap "" "group-zones-configmap" (printf "%s-cluster-log-fwd-outputs" (index .ManagedClusterLabels "group-du-sno-zone")) | toLiteral hub}}'
              pipelines: '{{hub fromConfigMap "" "group-zones-configmap" (printf "%s-cluster-log-fwd-pipelines" (index .ManagedClusterLabels "group-du-sno-zone")) | toLiteral hub}}'
      - path: source-crs/PerformanceProfile-MCP-master.yaml
        patches:
          - metadata:
              name: openshift-node-performance-profile
            spec:
              additionalKernelArgs:
                - rcupdate.rcu_normal_after_boot=0
                - vfio_pci.enable_sriov=1
                - vfio_pci.disable_idle_d3=1
                - efi=runtime
              cpu:
                isolated: '{{hub fromConfigMap "" "group-hardware-types-configmap" (printf "%s-cpu-isolated" (index .ManagedClusterLabels "hardware-type")) hub}}'
                reserved: '{{hub fromConfigMap "" "group-hardware-types-configmap" (printf "%s-cpu-reserved" (index .ManagedClusterLabels "hardware-type")) hub}}'
              hugepages:
                defaultHugepagesSize: '{{hub fromConfigMap "" "group-hardware-types-configmap" (printf "%s-hugepages-default" (index .ManagedClusterLabels "hardware-type")) hub}}'
                pages:
                  - count: '{{hub fromConfigMap "" "group-hardware-types-configmap" (printf "%s-hugepages-count" (index .ManagedClusterLabels "hardware-type")) | toInt hub}}'
                    size: '{{hub fromConfigMap "" "group-hardware-types-configmap" (printf "%s-hugepages-size" (index .ManagedClusterLabels "hardware-type")) hub}}'
              realTimeKernel:
                enabled: true
  - name: group-du-sno-pgt-group-du-sno-sriov-policy
    policyAnnotations:
      ran.openshift.io/ztp-deploy-wave: "100"
    manifests:
      - path: source-crs/SriovNetwork.yaml
        patches:
          - metadata:
              name: sriov-nw-du-fh
            spec:
              resourceName: du_fh
              vlan: '{{hub fromConfigMap "" "site-data-configmap" (printf "%s-sriov-network-vlan-1" .ManagedClusterName) | toInt hub}}'
      - path: source-crs/SriovNetworkNodePolicy-MCP-master.yaml
        patches:
          - metadata:
              name: sriov-nnp-du-fh
            spec:
              deviceType: netdevice
              isRdma: false
              nicSelector:
                pfNames: '{{hub fromConfigMap "" "group-hardware-types-configmap" (printf "%s-sriov-node-policy-pfNames-1" (index .ManagedClusterLabels "hardware-type")) | toLiteral hub}}'
              numVfs: 8
              priority: 10
              resourceName: du_fh
      - path: source-crs/SriovNetwork.yaml
        patches:
          - metadata:
              name: sriov-nw-du-mh
            spec:
              resourceName: du_mh
              vlan: '{{hub fromConfigMap "" "site-data-configmap" (printf "%s-sriov-network-vlan-2" .ManagedClusterName) | toInt hub}}'
      - path: source-crs/SriovNetworkNodePolicy-MCP-master.yaml
        patches:
          - metadata:
              name: sriov-nnp-du-mh
            spec:
              deviceType: netdevice
              isRdma: false
              nicSelector:
                pfNames: '{{hub fromConfigMap "" "group-hardware-types-configmap" (printf "%s-sriov-node-policy-pfNames-2" (index .ManagedClusterLabels "hardware-type")) | toLiteral hub}}'
              numVfs: 8
              priority: 10
              resourceName: du_mh
Create a group PolicyGenTemplate CR. This example PolicyGenTemplate CR configures logging, VLAN IDs, NICs, and a performance profile for the clusters that match the labels listed under spec.bindingRules:
apiVersion: ran.openshift.io/v1
kind: PolicyGenTemplate
metadata:
  name: group-du-sno-pgt
  namespace: ztp-group
spec:
  bindingRules:
    # These policies will correspond to all clusters with these labels
    group-du-sno-zone: "zone-1"
    hardware-type: "hardware-type-1"
  mcp: "master"
  sourceFiles:
    - fileName: ClusterLogForwarder.yaml # wave 10
      policyName: "group-du-sno-cfg-policy"
      spec:
        outputs: '{{hub fromConfigMap "" "group-zones-configmap" (printf "%s-cluster-log-fwd-outputs" (index .ManagedClusterLabels "group-du-sno-zone")) | toLiteral hub}}'
        pipelines: '{{hub fromConfigMap "" "group-zones-configmap" (printf "%s-cluster-log-fwd-pipelines" (index .ManagedClusterLabels "group-du-sno-zone")) | toLiteral hub}}'
    - fileName: PerformanceProfile.yaml # wave 10
      policyName: "group-du-sno-cfg-policy"
      metadata:
        name: openshift-node-performance-profile
      spec:
        additionalKernelArgs:
          - rcupdate.rcu_normal_after_boot=0
          - vfio_pci.enable_sriov=1
          - vfio_pci.disable_idle_d3=1
          - efi=runtime
        cpu:
          isolated: '{{hub fromConfigMap "" "group-hardware-types-configmap" (printf "%s-cpu-isolated" (index .ManagedClusterLabels "hardware-type")) hub}}'
          reserved: '{{hub fromConfigMap "" "group-hardware-types-configmap" (printf "%s-cpu-reserved" (index .ManagedClusterLabels "hardware-type")) hub}}'
        hugepages:
          defaultHugepagesSize: '{{hub fromConfigMap "" "group-hardware-types-configmap" (printf "%s-hugepages-default" (index .ManagedClusterLabels "hardware-type")) hub}}'
          pages:
            - size: '{{hub fromConfigMap "" "group-hardware-types-configmap" (printf "%s-hugepages-size" (index .ManagedClusterLabels "hardware-type")) hub}}'
              count: '{{hub fromConfigMap "" "group-hardware-types-configmap" (printf "%s-hugepages-count" (index .ManagedClusterLabels "hardware-type")) | toInt hub}}'
        realTimeKernel:
          enabled: true
    - fileName: SriovNetwork.yaml # wave 100
      policyName: "group-du-sno-sriov-policy"
      metadata:
        name: sriov-nw-du-fh
      spec:
        resourceName: du_fh
        vlan: '{{hub fromConfigMap "" "site-data-configmap" (printf "%s-sriov-network-vlan-1" .ManagedClusterName) | toInt hub}}'
    - fileName: SriovNetworkNodePolicy.yaml # wave 100
      policyName: "group-du-sno-sriov-policy"
      metadata:
        name: sriov-nnp-du-fh
      spec:
        deviceType: netdevice
        isRdma: false
        nicSelector:
          pfNames: '{{hub fromConfigMap "" "group-hardware-types-configmap" (printf "%s-sriov-node-policy-pfNames-1" (index .ManagedClusterLabels "hardware-type")) | toLiteral hub}}'
        numVfs: 8
        priority: 10
        resourceName: du_fh
    - fileName: SriovNetwork.yaml # wave 100
      policyName: "group-du-sno-sriov-policy"
      metadata:
        name: sriov-nw-du-mh
      spec:
        resourceName: du_mh
        vlan: '{{hub fromConfigMap "" "site-data-configmap" (printf "%s-sriov-network-vlan-2" .ManagedClusterName) | toInt hub}}'
    - fileName: SriovNetworkNodePolicy.yaml # wave 100
      policyName: "group-du-sno-sriov-policy"
      metadata:
        name: sriov-nnp-du-mh
      spec:
        deviceType: netdevice
        isRdma: false
        nicSelector:
          pfNames: '{{hub fromConfigMap "" "group-hardware-types-configmap" (printf "%s-sriov-node-policy-pfNames-2" (index .ManagedClusterLabels "hardware-type")) | toLiteral hub}}'
        numVfs: 8
        priority: 10
        resourceName: du_mh
To retrieve site-specific configuration values, use the .ManagedClusterName field. This is a template field that is set to the name of the target managed cluster while the policy is propagated. To retrieve group-specific configuration, use the .ManagedClusterLabels field, which holds the labels on the managed cluster.
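As a worked example of this resolution, for a managed cluster labeled with "hardware-type": "hardware-type-1", the isolated CPU template above looks up the hardware-type-1-cpu-isolated key in the group-hardware-types-configmap CR, so the rendered policy contains the following values from that ConfigMap:
cpu:
  isolated: "2-31,34-63"
  reserved: "0-1,32-33"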
Commit the group PolicyGenerator or PolicyGenTemplate CR in Git, and then push to the Git repository that is monitored by the Argo CD application.
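Optional: after Argo CD syncs the change, confirm that the policies were generated on the hub cluster, for example:
$ oc get policies -n ztp-group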
Subsequent changes to the referenced ConfigMap CRs are not automatically synced to the deployed policies. You can use the following procedure to sync new ConfigMap changes to existing PolicyGenerator or PolicyGenTemplate CRs.
You have installed the OpenShift CLI (oc).
You have logged in to the hub cluster as a user with cluster-admin privileges.
You have created a PolicyGenerator or PolicyGenTemplate CR that pulls information from a ConfigMap CR using hub cluster templates.
Update the contents of your ConfigMap CR, and apply the changes in the hub cluster.
To sync the contents of the updated ConfigMap CR to the deployed policy, do either of the following:
Option 1: Delete the existing policy. Argo CD uses the PolicyGenerator or PolicyGenTemplate CR to immediately recreate the deleted policy. For example, run the following command:
$ oc delete policy <policy_name> -n <policy_namespace>
Option 2: Apply the special annotation policy.open-cluster-management.io/trigger-update to the policy with a different value each time you update the ConfigMap. For example:
$ oc annotate policy <policy_name> -n <policy_namespace> policy.open-cluster-management.io/trigger-update="1"
You must apply the updated policy for the changes to take effect. For more information, see Special annotation for reprocessing.
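For example, on a subsequent ConfigMap update you might increment the annotation value; any value works as long as it differs from the previous one. The --overwrite flag is required to replace an existing annotation:
$ oc annotate policy <policy_name> -n <policy_namespace> policy.open-cluster-management.io/trigger-update="2" --overwrite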
Optional: If it exists, delete the ClusterGroupUpgrade CR that contains the policy. For example:
$ oc delete clustergroupupgrade <cgu_name> -n <cgu_namespace>
Create a new ClusterGroupUpgrade CR that includes the policy to apply with the updated ConfigMap changes. For example, add the following YAML to the file cgr-example.yaml:
apiVersion: ran.openshift.io/v1alpha1
kind: ClusterGroupUpgrade
metadata:
  name: <cgr_name>
  namespace: <policy_namespace>
spec:
  managedPolicies:
    - <managed_policy>
  enable: true
  clusters:
    - <managed_cluster_1>
    - <managed_cluster_2>
  remediationStrategy:
    maxConcurrency: 2
    timeout: 240
Apply the updated policy:
$ oc apply -f cgr-example.yaml
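You can monitor the rollout by checking the status of the ClusterGroupUpgrade CR, for example:
$ oc get clustergroupupgrade <cgr_name> -n <policy_namespace>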