Precision Time Protocol (PTP) hardware is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

For more information about the support scope of Red Hat Technology Preview features, see https://access.redhat.com/support/offerings/techpreview/.

About PTP hardware

OpenShift Container Platform includes the capability to use PTP hardware on your nodes. You can configure linuxptp services on nodes with PTP-capable hardware.

The PTP Operator works with PTP-capable devices only on clusters provisioned on bare-metal infrastructure.
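
Whether a device is PTP capable depends on its hardware timestamping support. One way to check an interface is to run ethtool from a debug shell on the node (a sketch; eno1 is a placeholder interface name):

$ oc debug node/<node_name>
sh-4.4# chroot /host
sh-4.4# ethtool -T eno1

The output lists the time stamping capabilities that the NIC supports.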

You can use the OpenShift Container Platform console to install PTP by deploying the PTP Operator. The PTP Operator creates and manages the linuxptp services. The Operator provides the following features:

  • Discover PTP-capable devices in the cluster.

  • Manage the configuration of the linuxptp services.
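
The discovery and configuration capabilities surface as the custom resource definitions described later in this section. After the Operator is installed, one way to confirm that they are registered is to query the CRDs directly (a quick check, assuming the default installation):

$ oc get crd nodeptpdevices.ptp.openshift.io ptpconfigs.ptp.openshift.io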

Installing the PTP Operator

As a cluster administrator, you can install the PTP Operator using the OpenShift Container Platform CLI or the web console.

CLI: Installing the PTP Operator

As a cluster administrator, you can install the Operator using the CLI.

Prerequisites
  • A cluster installed on bare-metal infrastructure with nodes that have PTP-capable hardware.

  • Install the OpenShift CLI (oc).

  • Log in as a user with cluster-admin privileges.

Procedure
  1. To create a namespace for the PTP Operator, enter the following command:

    $ cat << EOF | oc create -f -
    apiVersion: v1
    kind: Namespace
    metadata:
      name: openshift-ptp
      labels:
        openshift.io/run-level: "1"
    EOF
  2. To create an Operator group for the Operator, enter the following command:

    $ cat << EOF | oc create -f -
    apiVersion: operators.coreos.com/v1
    kind: OperatorGroup
    metadata:
      name: ptp-operators
      namespace: openshift-ptp
    spec:
      targetNamespaces:
      - openshift-ptp
    EOF
  3. Subscribe to the PTP Operator.

    1. Run the following command to set the OpenShift Container Platform major and minor version as an environment variable, which is used as the channel value in the next step:

      $ OC_VERSION=$(oc version -o yaml | grep openshiftVersion | \
          grep -o '[0-9]*[.][0-9]*' | head -1)
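
      For example, on a 4.4 cluster the variable resolves to 4.4 (a quick sanity check; the value depends on your cluster version):

      $ echo "${OC_VERSION}"
      Example output
      4.4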
    2. To create a subscription for the PTP Operator, enter the following command:

      $ cat << EOF | oc create -f -
      apiVersion: operators.coreos.com/v1alpha1
      kind: Subscription
      metadata:
        name: ptp-operator-subscription
        namespace: openshift-ptp
      spec:
        channel: "${OC_VERSION}"
        name: ptp-operator
        source: redhat-operators
        sourceNamespace: openshift-marketplace
      EOF
  4. To verify that the Operator is installed, enter the following command:

    $ oc get csv -n openshift-ptp \
      -o custom-columns=Name:.metadata.name,Phase:.status.phase
    Example output
    Name                                        Phase
    ptp-operator.4.4.0-202006160135             Succeeded
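
    If the phase is still Installing, the rollout might not have finished; you can watch the same resource until the phase settles (a sketch using the --watch flag):

    $ oc get csv -n openshift-ptp --watch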

Web console: Installing the PTP Operator

As a cluster administrator, you can install the Operator using the web console.

You must create the namespace and Operator group as described in the previous section.

Procedure
  1. Install the PTP Operator using the OpenShift Container Platform web console:

    1. In the OpenShift Container Platform web console, click Operators → OperatorHub.

    2. Choose PTP Operator from the list of available Operators, and then click Install.

    3. On the Create Operator Subscription page, under A specific namespace on the cluster, select openshift-ptp. Then, click Subscribe.

  2. Optional: Verify that the PTP Operator installed successfully:

    1. Switch to the Operators → Installed Operators page.

    2. Ensure that PTP Operator is listed in the openshift-ptp project with a Status of InstallSucceeded.

      During installation an Operator might display a Failed status. If the installation later succeeds with an InstallSucceeded message, you can ignore the Failed message.

      If the Operator does not appear as installed, troubleshoot further:

      • Go to the Operators → Installed Operators page and inspect the Operator Subscriptions and Install Plans tabs for any failures or errors under Status.

      • Go to the Workloads → Pods page and check the logs for pods in the openshift-ptp project.

Automated discovery of PTP network devices

The PTP Operator adds the NodePtpDevice.ptp.openshift.io custom resource definition (CRD) to OpenShift Container Platform. The PTP Operator searches your cluster for PTP-capable network devices on each node. It creates and updates a NodePtpDevice custom resource (CR) object for each node that provides a compatible PTP device.

One CR is created for each node and has the same name as the node. The .status.devices list provides information about the PTP devices on that node.
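
You can list the discovered devices across all nodes by querying the CRs directly (a quick check; the output depends on the hardware in your cluster):

$ oc get nodeptpdevice -n openshift-ptp -o yaml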

The following is an example of a NodePtpDevice CR created by the PTP Operator:

apiVersion: ptp.openshift.io/v1
kind: NodePtpDevice
metadata:
  creationTimestamp: "2019-11-15T08:57:11Z"
  generation: 1
  name: dev-worker-0 (1)
  namespace: openshift-ptp (2)
  resourceVersion: "487462"
  selfLink: /apis/ptp.openshift.io/v1/namespaces/openshift-ptp/nodeptpdevices/dev-worker-0
  uid: 08d133f7-aae2-403f-84ad-1fe624e5ab3f
spec: {}
status:
  devices: (3)
  - name: eno1
  - name: eno2
  - name: ens787f0
  - name: ens787f1
  - name: ens801f0
  - name: ens801f1
  - name: ens802f0
  - name: ens802f1
  - name: ens803
1 The value for the name parameter is the same as the name of the node.
2 The CR is created in the openshift-ptp namespace by the PTP Operator.
3 The devices collection lists all of the PTP-capable devices that the Operator discovered on the node.

Configuring linuxptp services

The PTP Operator adds the PtpConfig.ptp.openshift.io custom resource definition (CRD) to OpenShift Container Platform. You can configure the linuxptp services (ptp4l, phc2sys) by creating a PtpConfig custom resource (CR) object.

Prerequisites
  • Install the OpenShift CLI (oc).

  • Log in as a user with cluster-admin privileges.

  • Install the PTP Operator.

Procedure
  1. Create the following PtpConfig CR, and then save the YAML in the <name>-ptp-config.yaml file. Replace <name> with the name for this configuration.

    apiVersion: ptp.openshift.io/v1
    kind: PtpConfig
    metadata:
      name: <name> (1)
      namespace: openshift-ptp (2)
    spec:
      profile: (3)
      - name: "profile1" (4)
        interface: "ens787f1" (5)
        ptp4lOpts: "-s -2" (6)
        phc2sysOpts: "-a -r" (7)
      recommend: (8)
      - profile: "profile1" (9)
        priority: 10 (10)
        match: (11)
        - nodeLabel: "node-role.kubernetes.io/worker" (12)
          nodeName: "dev-worker-0" (13)
    1 Specify a name for the PtpConfig CR.
    2 Specify the namespace where the PTP Operator is installed.
    3 Specify an array of one or more profile objects.
    4 Specify a name that uniquely identifies the profile object.
    5 Specify the network interface for the ptp4l service to use, for example ens787f1.
    6 Specify system config options for the ptp4l service, for example -s -2. Do not include the interface name (-i <interface>) or the service config file (-f /etc/ptp4l.conf), because these are appended automatically.
    7 Specify system config options for the phc2sys service, for example -a -r.
    8 Specify an array of one or more recommend objects which define rules on how the profile should be applied to nodes.
    9 Specify the profile object name defined in the profile section.
    10 Specify the priority with an integer value between 0 and 99. A larger number gets lower priority, so a priority of 99 is lower than a priority of 10. If a node matches multiple profiles according to the rules defined in the match field, the profile with the highest priority is applied to that node.
    11 Specify match rules with nodeLabel or nodeName.
    12 Specify nodeLabel with the key of node.Labels from the node object.
    13 Specify nodeName with node.Name from the node object.
  2. Create the CR by running the following command:

    $ oc create -f <filename> (1)
    1 Replace <filename> with the name of the file you created in the previous step.
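
    You can confirm that the CR was created before checking the daemon pods (a quick check; the NAME column shows the <name> you chose):

    $ oc get ptpconfig -n openshift-ptp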
  3. Optional: Check that the PtpConfig profile is applied to the nodes that match the nodeLabel or nodeName.

    $ oc get pods -n openshift-ptp -o wide
    Example output
    NAME                            READY   STATUS    RESTARTS   AGE   IP               NODE           NOMINATED NODE   READINESS GATES
    linuxptp-daemon-4xkbb           1/1     Running   0          43m   192.168.111.15   dev-worker-0   <none>           <none>
    linuxptp-daemon-tdspf           1/1     Running   0          43m   192.168.111.11   dev-master-0   <none>           <none>
    ptp-operator-657bbb64c8-2f8sj   1/1     Running   0          43m   10.128.0.116     dev-master-0   <none>           <none>
    
    $ oc logs linuxptp-daemon-4xkbb -n openshift-ptp
    I1115 09:41:17.117596 4143292 daemon.go:107] in applyNodePTPProfile
    I1115 09:41:17.117604 4143292 daemon.go:109] updating NodePTPProfile to:
    I1115 09:41:17.117607 4143292 daemon.go:110] ------------------------------------
    I1115 09:41:17.117612 4143292 daemon.go:102] Profile Name: profile1 (1)
    I1115 09:41:17.117616 4143292 daemon.go:102] Interface: ens787f1    (2)
    I1115 09:41:17.117620 4143292 daemon.go:102] Ptp4lOpts: -s -2       (3)
    I1115 09:41:17.117623 4143292 daemon.go:102] Phc2sysOpts: -a -r     (4)
    I1115 09:41:17.117626 4143292 daemon.go:116] ------------------------------------
    I1115 09:41:18.117934 4143292 daemon.go:186] Starting phc2sys...
    I1115 09:41:18.117985 4143292 daemon.go:187] phc2sys cmd: &{Path:/usr/sbin/phc2sys Args:[/usr/sbin/phc2sys -a -r] Env:[] Dir: Stdin:<nil> Stdout:<nil> Stderr:<nil> ExtraFiles:[] SysProcAttr:<nil> Process:<nil> ProcessState:<nil> ctx:<nil> lookPathErr:<nil> finished:false childFiles:[] closeAfterStart:[] closeAfterWait:[] goroutine:[] errch:<nil> waitDone:<nil>}
    I1115 09:41:19.118175 4143292 daemon.go:186] Starting ptp4l...
    I1115 09:41:19.118209 4143292 daemon.go:187] ptp4l cmd: &{Path:/usr/sbin/ptp4l Args:[/usr/sbin/ptp4l -m -f /etc/ptp4l.conf -i ens787f1 -s -2] Env:[] Dir: Stdin:<nil> Stdout:<nil> Stderr:<nil> ExtraFiles:[] SysProcAttr:<nil> Process:<nil> ProcessState:<nil> ctx:<nil> lookPathErr:<nil> finished:false childFiles:[] closeAfterStart:[] closeAfterWait:[] goroutine:[] errch:<nil> waitDone:<nil>}
    ptp4l[102189.864]: selected /dev/ptp5 as PTP clock
    ptp4l[102189.886]: port 1: INITIALIZING to LISTENING on INIT_COMPLETE
    ptp4l[102189.886]: port 0: INITIALIZING to LISTENING on INIT_COMPLETE
    1 Profile Name is the name that is applied to node dev-worker-0.
    2 Interface is the PTP device specified in the profile1 interface field. The ptp4l service runs on this interface.
    3 Ptp4lOpts are the ptp4l sysconfig options specified in the profile1 ptp4lOpts field.
    4 Phc2sysOpts are the phc2sys sysconfig options specified in the profile1 phc2sysOpts field.
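
    To keep watching synchronization after the initial port state transitions, you can follow the daemon logs (a sketch; substitute the pod name from the oc get pods output above):

    $ oc logs -f linuxptp-daemon-4xkbb -n openshift-ptp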