
After you deploy hosted control planes on IBM Power, you can manage a hosted cluster by completing the following tasks.

Creating an InfraEnv resource for hosted control planes on IBM Power

An InfraEnv resource is an environment where hosts that boot from the live ISO can join as agents. In this case, the agents are created in the same namespace as your hosted control plane.

You can create an InfraEnv resource for a hosted control plane that runs on 64-bit x86 bare metal and uses IBM Power compute nodes.

Procedure
  1. Create a YAML file to configure an InfraEnv resource. See the following example:

    apiVersion: agent-install.openshift.io/v1beta1
    kind: InfraEnv
    metadata:
      name: <hosted_cluster_name> (1)
      namespace: <hosted_control_plane_namespace> (2)
    spec:
      cpuArchitecture: ppc64le
      pullSecretRef:
        name: pull-secret
      sshAuthorizedKey: <path_to_ssh_public_key> (3)
    1 Replace <hosted_cluster_name> with the name of your hosted cluster.
    2 Replace <hosted_control_plane_namespace> with the name of the hosted control plane namespace, for example, clusters-hosted.
    3 Replace <path_to_ssh_public_key> with the path to your SSH public key. The default file path is ~/.ssh/id_rsa.pub.
  2. Save the file as infraenv-config.yaml.

  3. Apply the configuration by entering the following command:

    $ oc apply -f infraenv-config.yaml
  4. To fetch the URL to download the live ISO, which allows IBM Power machines to join as agents, enter the following command:

    $ oc -n <hosted_control_plane_namespace> get InfraEnv <hosted_cluster_name> -o json
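    The full JSON output includes the download URL in the status.isoDownloadURL field. As a convenience, you can extract only that value with a jsonpath query and pass it directly to a download tool. The following commands are a sketch that assumes the status.isoDownloadURL field is already populated and that curl is available on your workstation; the local file name live.iso is arbitrary:

    $ oc -n <hosted_control_plane_namespace> get InfraEnv <hosted_cluster_name> -o jsonpath='{.status.isoDownloadURL}'

    $ curl -L -o live.iso "$(oc -n <hosted_control_plane_namespace> get InfraEnv <hosted_cluster_name> -o jsonpath='{.status.isoDownloadURL}')"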

Adding IBM Power agents to the InfraEnv resource

You can add agents by manually configuring the machine to start with the live ISO.

Procedure
  1. Download the live ISO and use it to start a bare-metal host or a virtual machine (VM). You can find the URL for the live ISO in the status.isoDownloadURL field of the InfraEnv resource. At startup, the host communicates with the Assisted Service and registers as an agent in the same namespace as the InfraEnv resource.

  2. To list the agents and some of their properties, enter the following command:

    $ oc -n <hosted_control_plane_namespace> get agents
    Example output
    NAME                                   CLUSTER   APPROVED   ROLE          STAGE
    86f7ac75-4fc4-4b36-8130-40fa12602218                        auto-assign
    e57a637f-745b-496e-971d-1abbf03341ba                        auto-assign
  3. After each agent is created, you can optionally set the installation_disk_id and hostname fields for the agent (to approve an agent without either change, see the example after this procedure):

    1. To set the installation_disk_id field for an agent, enter the following command:

      $ oc -n <hosted_control_plane_namespace> patch agent <agent_name> -p '{"spec":{"installation_disk_id":"<installation_disk_id>","approved":true}}' --type merge
    2. To set the hostname field for an agent, enter the following command:

      $ oc -n <hosted_control_plane_namespace> patch agent <agent_name> -p '{"spec":{"hostname":"<hostname>","approved":true}}' --type merge
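    If an agent does not require a custom installation disk or hostname, you can approve it without setting any other fields. The following command is a minimal sketch that follows the same patch pattern as the previous commands and sets only the approved field:

      $ oc -n <hosted_control_plane_namespace> patch agent <agent_name> -p '{"spec":{"approved":true}}' --type merge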
Verification
  • To verify that the agents are approved for use, enter the following command:

    $ oc -n <hosted_control_plane_namespace> get agents
    Example output
    NAME                                   CLUSTER   APPROVED   ROLE          STAGE
    86f7ac75-4fc4-4b36-8130-40fa12602218             true       auto-assign
    e57a637f-745b-496e-971d-1abbf03341ba             true       auto-assign
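  • Optional: If you set a custom hostname for an agent, you can confirm the value with a jsonpath query. The following command is a sketch that prints the spec.hostname and spec.approved fields; replace <agent_name> with the agent that you patched:

    $ oc -n <hosted_control_plane_namespace> get agent <agent_name> -o jsonpath='{.spec.hostname}{" "}{.spec.approved}{"\n"}'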

Scaling the NodePool object for a hosted cluster on IBM Power

The NodePool object is created when you create a hosted cluster. By scaling the NodePool object, you can add more compute nodes to your hosted control plane.

Procedure
  1. Run the following command to scale the NodePool object to two nodes:

    $ oc -n <hosted_cluster_namespace> scale nodepool <nodepool_name> --replicas 2

    The Cluster API agent provider randomly picks two agents that are then assigned to the hosted cluster. Those agents go through different states and finally join the hosted cluster as OpenShift Container Platform nodes. The agents pass through the transition phases in the following order:

    • binding

    • discovering

    • insufficient

    • installing

    • installing-in-progress

    • added-to-existing-cluster

  2. Run the following command to see the current state of each scaled agent:

    $ oc -n <hosted_control_plane_namespace> get agent -o jsonpath='{range .items[*]}BMH: {@.metadata.labels.agent-install\.openshift\.io/bmh} Agent: {@.metadata.name} State: {@.status.debugInfo.state}{"\n"}{end}'
    Example output
    BMH: Agent: 50c23cda-cedc-9bbd-bcf1-9b3a5c75804d State: known-unbound
    BMH: Agent: 5e498cd3-542c-e54f-0c58-ed43e28b568a State: insufficient
  3. Run the following command to see the transition phases:

    $ oc -n <hosted_control_plane_namespace> get agent
    Example output
    NAME                                   CLUSTER            APPROVED       ROLE          STAGE
    50c23cda-cedc-9bbd-bcf1-9b3a5c75804d   hosted-forwarder   true           auto-assign
    5e498cd3-542c-e54f-0c58-ed43e28b568a                      true           auto-assign
    da503cf1-a347-44f2-875c-4960ddb04091   hosted-forwarder   true           auto-assign
  4. Run the following command to generate the kubeconfig file to access the hosted cluster:

    $ hcp create kubeconfig --namespace <hosted_cluster_namespace> --name <hosted_cluster_name> > <hosted_cluster_name>.kubeconfig
  5. After the agents reach the added-to-existing-cluster state, verify that you can see the OpenShift Container Platform nodes by entering the following command:

    $ oc --kubeconfig <hosted_cluster_name>.kubeconfig get nodes
    Example output
    NAME                             STATUS   ROLES    AGE      VERSION
    worker-zvm-0.hostedn.example.com Ready    worker   5m41s    v1.24.0+3882f8f
    worker-zvm-1.hostedn.example.com Ready    worker   6m3s     v1.24.0+3882f8f
  6. Enter the following command to verify that two machines were created when you scaled up the NodePool object:

    $ oc -n <hosted_control_plane_namespace> get machine.cluster.x-k8s.io
    Example output
    NAME                                CLUSTER                  NODENAME                           PROVIDERID                                     PHASE     AGE   VERSION
    hosted-forwarder-79558597ff-5tbqp   hosted-forwarder-crqq5   worker-zvm-0.hostedn.example.com   agent://50c23cda-cedc-9bbd-bcf1-9b3a5c75804d   Running   41h   4.15.0
    hosted-forwarder-79558597ff-lfjfk   hosted-forwarder-crqq5   worker-zvm-1.hostedn.example.com   agent://5e498cd3-542c-e54f-0c58-ed43e28b568a   Running   41h   4.15.0
  7. Run the following command to check the cluster version:

    $ oc --kubeconfig <hosted_cluster_name>.kubeconfig get clusterversion
    Example output
    NAME                                         VERSION       AVAILABLE   PROGRESSING   SINCE   STATUS
    clusterversion.config.openshift.io/version   4.15.0        True        False         40h     Cluster version is 4.15.0
  8. Run the following command to check the Cluster Operator status:

    $ oc --kubeconfig <hosted_cluster_name>.kubeconfig get clusteroperators

    For each component of your cluster, the output shows the following Cluster Operator statuses:

    • NAME

    • VERSION

    • AVAILABLE

    • PROGRESSING

    • DEGRADED

    • SINCE

    • MESSAGE
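
    To confirm at a glance that every Cluster Operator reports an Available status of True, you can reduce the output with a jsonpath query. The following command is a convenience sketch, not a required step:

    $ oc --kubeconfig <hosted_cluster_name>.kubeconfig get clusteroperators -o jsonpath='{range .items[*]}{.metadata.name}{" "}{.status.conditions[?(@.type=="Available")].status}{"\n"}{end}'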