
Precision Time Protocol (PTP) hardware with single NIC configured as boundary clock is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

For more information about the support scope of Red Hat Technology Preview features, see https://access.redhat.com/support/offerings/techpreview/.

About PTP hardware

You can configure linuxptp services and use PTP-capable hardware in OpenShift Container Platform cluster nodes.

The PTP Operator works with PTP-capable devices on clusters provisioned only on bare-metal infrastructure.

You can use the OpenShift Container Platform console or OpenShift CLI (oc) to install PTP by deploying the PTP Operator. The PTP Operator creates and manages the linuxptp services and provides the following features:

  • Discovery of the PTP-capable devices in the cluster.

  • Management of the configuration of linuxptp services.

  • Notification of PTP clock events that negatively affect the performance and reliability of your application, delivered through the PTP Operator cloud-event-proxy sidecar.

About PTP

Precision Time Protocol (PTP) is used to synchronize clocks in a network. When used in conjunction with hardware support, PTP is capable of sub-microsecond accuracy, and is more accurate than Network Time Protocol (NTP).

The linuxptp package includes the ptp4l and phc2sys programs for clock synchronization. ptp4l implements the PTP boundary clock and ordinary clock. ptp4l synchronizes the PTP hardware clock to the source clock with hardware time stamping and synchronizes the system clock to the source clock with software time stamping. phc2sys is used with hardware time stamping to synchronize the system clock to the PTP hardware clock on the network interface controller (NIC).
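
For orientation, the following commands are a minimal sketch of how ptp4l and phc2sys are typically invoked on a Linux host. In OpenShift Container Platform you do not run these directly; the PTP Operator assembles equivalent invocations from your PtpConfig profile. The interface name ens787f1 is an example only.

    # Synchronize the NIC PTP hardware clock to the network source clock
    # (-2 selects the IEEE 802.3 transport, -s runs as destination clock only,
    # -m prints messages to stdout):
    $ ptp4l -i ens787f1 -2 -s -m

    # Synchronize the system clock to the PTP hardware clock (-a selects
    # clock devices automatically, -r also synchronizes the system clock):
    $ phc2sys -a -r -m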

Elements of a PTP domain

PTP is used to synchronize multiple nodes connected in a network, each with its own clock. The clocks synchronized by PTP are organized in a source-destination hierarchy. The hierarchy is created and updated automatically by the best master clock (BMC) algorithm, which runs on every clock. Destination clocks are synchronized to source clocks, and destination clocks can themselves be the source for other downstream clocks. The following types of clocks can be included in configurations:

Grandmaster clock

The grandmaster clock provides standard time information to other clocks across the network and ensures accurate and stable synchronization. It writes time stamps and responds to time requests from other clocks. Grandmaster clocks can be synchronized to a Global Positioning System (GPS) time source.

Ordinary clock

The ordinary clock has a single port connection that can play the role of source or destination clock, depending on its position in the network. The ordinary clock can read and write time stamps.

Boundary clock

The boundary clock has ports in two or more communication paths and can be a source and a destination to other destination clocks at the same time. The boundary clock works as a destination clock to the upstream source clock. It receives the timing message, adjusts for delay, and then creates a new source time signal to pass down the network. The boundary clock produces a new timing packet that is still correctly synced with the source clock and can reduce the number of connected devices reporting directly to the source clock.

Advantages of PTP over NTP

One of the main advantages that PTP has over NTP is the hardware support present in various network interface controllers (NICs) and network switches. The specialized hardware allows PTP to account for delays in message transfer and improves the accuracy of time synchronization. To achieve the best possible accuracy, it is recommended that all networking components between PTP clocks be PTP hardware enabled.

Hardware-based PTP provides optimal accuracy, since the NIC can time stamp the PTP packets at the exact moment they are sent and received. Compare this to software-based PTP, which requires additional processing of the PTP packets by the operating system.
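
To check whether a NIC supports hardware time stamping before you enable PTP, you can query its capabilities from a debug pod on the node. The node and interface names below are examples; hardware time stamping support is listed under Time stamping parameters in the ethtool output.

    $ oc debug node/compute-0.example.com
    sh-4.4# chroot /host
    sh-4.4# ethtool -T ens787f1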

Before enabling PTP, ensure that NTP is disabled for the required nodes. You can disable the chrony time service (chronyd) using a MachineConfig custom resource. For more information, see Disabling chrony time service.
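
The following MachineConfig is a minimal sketch of that approach, assuming you want to disable chronyd on worker nodes; the resource name and role label are examples. See Disabling chrony time service for the complete, supported example.

    apiVersion: machineconfiguration.openshift.io/v1
    kind: MachineConfig
    metadata:
      labels:
        machineconfiguration.openshift.io/role: worker
      name: disable-chronyd
    spec:
      config:
        ignition:
          version: 3.2.0
        systemd:
          units:
          - enabled: false
            name: chronyd.service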

Installing the PTP Operator using the CLI

As a cluster administrator, you can install the PTP Operator by using the CLI.

Prerequisites
  • A cluster installed on bare-metal infrastructure with nodes that have PTP-capable hardware.

  • Install the OpenShift CLI (oc).

  • Log in as a user with cluster-admin privileges.

Procedure
  1. Create a namespace for the PTP Operator.

    1. Save the following YAML in the ptp-namespace.yaml file:

      apiVersion: v1
      kind: Namespace
      metadata:
        name: openshift-ptp
        annotations:
          workload.openshift.io/allowed: management
        labels:
          name: openshift-ptp
          openshift.io/cluster-monitoring: "true"
    2. Create the Namespace CR:

      $ oc create -f ptp-namespace.yaml
  2. Create an Operator group for the PTP Operator.

    1. Save the following YAML in the ptp-operatorgroup.yaml file:

      apiVersion: operators.coreos.com/v1
      kind: OperatorGroup
      metadata:
        name: ptp-operators
        namespace: openshift-ptp
      spec:
        targetNamespaces:
        - openshift-ptp
    2. Create the OperatorGroup CR:

      $ oc create -f ptp-operatorgroup.yaml
  3. Subscribe to the PTP Operator.

    1. Save the following YAML in the ptp-sub.yaml file:

      apiVersion: operators.coreos.com/v1alpha1
      kind: Subscription
      metadata:
        name: ptp-operator-subscription
        namespace: openshift-ptp
      spec:
        channel: "stable"
        name: ptp-operator
        source: redhat-operators
        sourceNamespace: openshift-marketplace
    2. Create the Subscription CR:

      $ oc create -f ptp-sub.yaml
  4. To verify that the Operator is installed, enter the following command:

    $ oc get csv -n openshift-ptp -o custom-columns=Name:.metadata.name,Phase:.status.phase
    Example output
    Name                         Phase
    4.10.0-202201261535          Succeeded
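
If the CSV does not reach the Succeeded phase, inspecting the Subscription and its install plan usually shows the cause. These are standard Operator Lifecycle Manager checks rather than PTP-specific commands:

    $ oc describe subscription ptp-operator-subscription -n openshift-ptp

    $ oc get installplan -n openshift-ptp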

Installing the PTP Operator using the web console

As a cluster administrator, you can install the PTP Operator using the web console.

You must create the namespace and Operator group as mentioned in the previous section.

Procedure
  1. Install the PTP Operator using the OpenShift Container Platform web console:

    1. In the OpenShift Container Platform web console, click Operators → OperatorHub.

    2. Choose PTP Operator from the list of available Operators, and then click Install.

    3. On the Install Operator page, under A specific namespace on the cluster select openshift-ptp. Then, click Install.

  2. Optional: Verify that the PTP Operator installed successfully:

    1. Switch to the Operators → Installed Operators page.

    2. Ensure that PTP Operator is listed in the openshift-ptp project with a Status of InstallSucceeded.

      During installation an Operator might display a Failed status. If the installation later succeeds with an InstallSucceeded message, you can ignore the Failed message.

      If the Operator does not appear as installed, troubleshoot further:

      • Go to the Operators → Installed Operators page and inspect the Operator Subscriptions and Install Plans tabs for any failure or errors under Status.

      • Go to the Workloads → Pods page and check the logs for pods in the openshift-ptp project.
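
You can perform the same checks from the CLI; take the Operator pod name from the output of the first command:

    $ oc get pods -n openshift-ptp

    $ oc logs <ptp_operator_pod_name> -n openshift-ptp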

Configuring PTP devices

The PTP Operator adds the NodePtpDevice.ptp.openshift.io custom resource definition (CRD) to OpenShift Container Platform.

When installed, the PTP Operator searches your cluster for PTP-capable network devices on each node. It creates and updates a NodePtpDevice custom resource (CR) object for each node that provides a compatible PTP-capable network device.

Discovering PTP-capable network devices in your cluster

  • To return a complete list of PTP-capable network devices in your cluster, run the following command:

    $ oc get NodePtpDevice -n openshift-ptp -o yaml
    Example output
    apiVersion: v1
    items:
    - apiVersion: ptp.openshift.io/v1
      kind: NodePtpDevice
      metadata:
        creationTimestamp: "2022-01-27T15:16:28Z"
        generation: 1
        name: dev-worker-0 (1)
        namespace: openshift-ptp
        resourceVersion: "6538103"
        uid: d42fc9ad-bcbf-4590-b6d8-b676c642781a
      spec: {}
      status:
        devices: (2)
        - name: eno1
        - name: eno2
        - name: eno3
        - name: eno4
        - name: enp5s0f0
        - name: enp5s0f1
    ...
    1 The value for the name parameter is the same as the name of the parent node.
    2 The devices collection includes a list of the PTP-capable devices that the PTP Operator discovers for the node.
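
If you want only the discovered device names for each node rather than the full YAML, a jsonpath query such as the following condenses the output. This is a convenience, not a required step:

    $ oc get NodePtpDevice -n openshift-ptp -o jsonpath='{range .items[*]}{.metadata.name}: {.status.devices[*].name}{"\n"}{end}'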

Configuring linuxptp services as an ordinary clock

You can configure linuxptp services (ptp4l, phc2sys) as an ordinary clock by creating a PtpConfig custom resource (CR) object.

The following PtpConfig CR configures PTP fast events by setting values for ptp4lOpts, ptp4lConf, and ptpClockThreshold.

Prerequisites
  • Install the OpenShift CLI (oc).

  • Log in as a user with cluster-admin privileges.

  • Install the PTP Operator.

Procedure
  1. Create the following PtpConfig CR, and then save the YAML in the ordinary-clock-ptp-config.yaml file.

    apiVersion: ptp.openshift.io/v1
    kind: PtpConfig
    metadata:
      name: ordinary-clock-ptp-config (1)
      namespace: openshift-ptp
    spec:
      profile: (2)
      - name: "profile1" (3)
        interface: "ens787f1" (4)
        ptp4lOpts: "-2 -s --summary_interval -4" (5)
        phc2sysOpts: "-a -r -n 24" (6)
        ptp4lConf: | (7)
          [global]
          #
          # Default Data Set
          #
          twoStepFlag 1
          slaveOnly 0
          priority1 128
          priority2 128
          domainNumber 24
          #utc_offset 37
          clockClass 248
          clockAccuracy 0xFE
          offsetScaledLogVariance 0xFFFF
          free_running 0
          freq_est_interval 1
          dscp_event 0
          dscp_general 0
          dataset_comparison ieee1588
          G.8275.defaultDS.localPriority 128
          #
          # Port Data Set
          #
          logAnnounceInterval -3
          logSyncInterval -4
          logMinDelayReqInterval -4
          logMinPdelayReqInterval -4
          announceReceiptTimeout 3
          syncReceiptTimeout 0
          delayAsymmetry 0
          fault_reset_interval 4
          neighborPropDelayThresh 20000000
          masterOnly 0
          G.8275.portDS.localPriority 128
          #
          # Run time options
          #
          assume_two_step 0
          logging_level 6
          path_trace_enabled 0
          follow_up_info 0
          hybrid_e2e 0
          inhibit_multicast_service 0
          net_sync_monitor 0
          tc_spanning_tree 0
          tx_timestamp_timeout 1
          unicast_listen 0
          unicast_master_table 0
          unicast_req_duration 3600
          use_syslog 1
          verbose 0
          summary_interval 0
          kernel_leap 1
          check_fup_sync 0
          #
          # Servo Options
          #
          pi_proportional_const 0.0
          pi_integral_const 0.0
          pi_proportional_scale 0.0
          pi_proportional_exponent -0.3
          pi_proportional_norm_max 0.7
          pi_integral_scale 0.0
          pi_integral_exponent 0.4
          pi_integral_norm_max 0.3
          step_threshold 0.0
          first_step_threshold 0.00002
          max_frequency 900000000
          clock_servo pi
          sanity_freq_limit 200000000
          ntpshm_segment 0
          #
          # Transport options
          #
          transportSpecific 0x0
          ptp_dst_mac 01:1B:19:00:00:00
          p2p_dst_mac 01:80:C2:00:00:0E
          udp_ttl 1
          udp6_scope 0x0E
          uds_address /var/run/ptp4l
          #
          # Default interface options
          #
          clock_type OC
          network_transport UDPv4
          delay_mechanism E2E
          time_stamping hardware
          tsproc_mode filter
          delay_filter moving_median
          delay_filter_length 10
          egressLatency 0
          ingressLatency 0
          boundary_clock_jbod 0
          #
          # Clock description
          #
          productDescription ;;
          revisionData ;;
          manufacturerIdentity 00:00:00
          userDescription ;
          timeSource 0xA0
        ptpSchedulingPolicy: SCHED_OTHER (8)
        ptpSchedulingPriority: 65 (9)
      ptpClockThreshold: (10)
        holdOverTimeout: 5
        maxOffsetThreshold: 100
        minOffsetThreshold: -100
      recommend: (11)
      - profile: "profile1" (12)
        priority: 10 (13)
        match: (14)
        - nodeLabel: "node-role.kubernetes.io/worker" (15)
          nodeName: "compute-0.example.com" (16)
    1 The name of the PtpConfig CR.
    2 Specify an array of one or more profile objects.
    3 Specify a unique name for the profile object.
    4 Specify the network interface to be used by the ptp4l service, for example ens787f1.
    5 Specify system config options for the ptp4l service, for example -2 to select the IEEE 802.3 network transport. The options should not include the network interface name -i <interface> and service config file -f /etc/ptp4l.conf because the network interface name and the service config file are automatically appended. Append --summary_interval -4 to use PTP fast events with this interface.
    6 Specify system config options for the phc2sys service, for example -a -r -n 24. If this field is empty the PTP Operator does not start the phc2sys service.
    7 Specify a string that contains the configuration to replace the default /etc/ptp4l.conf file. To use the default configuration, leave the field empty.
    8 Scheduling policy for ptp4l and phc2sys processes. Default value is SCHED_OTHER. Use SCHED_FIFO on systems that support FIFO scheduling.
    9 Integer value from 1-65 used to set FIFO priority for ptp4l and phc2sys processes. Required if SCHED_FIFO is set for ptpSchedulingPolicy.
    10 Optional. If the ptpClockThreshold stanza is not present, default values are used for the ptpClockThreshold fields. The stanza shows the default ptpClockThreshold values.
    11 Specify an array of one or more recommend objects that define rules on how the profile should be applied to nodes.
    12 Specify the profile object name defined in the profile section.
    13 Specify the priority with an integer value between 0 and 99. A larger number gets lower priority, so a priority of 99 is lower than a priority of 10. If a node can be matched with multiple profiles according to rules defined in the match field, the profile with the higher priority is applied to that node.
    14 Specify match rules with nodeLabel or nodeName.
    15 Specify nodeLabel with the key of node.Labels from the node object by using the oc get nodes --show-labels command.
    16 Specify nodeName with node.Name from the node object by using the oc get nodes command.
  2. Create the PtpConfig CR by running the following command:

    $ oc create -f ordinary-clock-ptp-config.yaml
Verification steps
  1. Check that the PtpConfig profile is applied to the node.

    1. Get the list of pods in the openshift-ptp namespace by running the following command:

      $ oc get pods -n openshift-ptp -o wide
      Example output
      NAME                            READY   STATUS    RESTARTS   AGE   IP               NODE
      linuxptp-daemon-4xkbb           1/1     Running   0          43m   10.1.196.24      compute-0.example.com
      linuxptp-daemon-tdspf           1/1     Running   0          43m   10.1.196.25      compute-1.example.com
      ptp-operator-657bbb64c8-2f8sj   1/1     Running   0          43m   10.129.0.61      control-plane-1.example.com
    2. Check that the profile is correct. Examine the logs of the linuxptp daemon that corresponds to the node you specified in the PtpConfig profile. Run the following command:

      $ oc logs linuxptp-daemon-4xkbb -n openshift-ptp -c linuxptp-daemon-container
      Example output
      I1115 09:41:17.117596 4143292 daemon.go:107] in applyNodePTPProfile
      I1115 09:41:17.117604 4143292 daemon.go:109] updating NodePTPProfile to:
      I1115 09:41:17.117607 4143292 daemon.go:110] ------------------------------------
      I1115 09:41:17.117612 4143292 daemon.go:102] Profile Name: profile1
      I1115 09:41:17.117616 4143292 daemon.go:102] Interface: ens787f1
      I1115 09:41:17.117620 4143292 daemon.go:102] Ptp4lOpts: -2 -s --summary_interval -4
      I1115 09:41:17.117623 4143292 daemon.go:102] Phc2sysOpts: -a -r -n 24
      I1115 09:41:17.117626 4143292 daemon.go:116] ------------------------------------
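
As an additional check, you can query the running ptp4l instance with the pmc tool from inside the linuxptp daemon pod. The pod name is taken from the example above, and the path /var/run/ptp4l.0.config is an assumption about where the generated configuration is written:

    $ oc rsh -n openshift-ptp -c linuxptp-daemon-container linuxptp-daemon-4xkbb
    sh-4.4# pmc -u -f /var/run/ptp4l.0.config -b 0 'GET PORT_DATA_SET'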

Configuring linuxptp services as a boundary clock

You can configure the linuxptp services (ptp4l, phc2sys) as a boundary clock by creating a PtpConfig custom resource (CR) object.

The following PtpConfig CR configures PTP fast events by setting values for ptp4lOpts, ptp4lConf, and ptpClockThreshold.

Prerequisites
  • Install the OpenShift CLI (oc).

  • Log in as a user with cluster-admin privileges.

  • Install the PTP Operator.

Procedure
  1. Create the following PtpConfig CR, and then save the YAML in the boundary-clock-ptp-config.yaml file.

    apiVersion: ptp.openshift.io/v1
    kind: PtpConfig
    metadata:
      name: boundary-clock-ptp-config (1)
      namespace: openshift-ptp
    spec:
      profile: (2)
      - name: "profile1" (3)
        interface: "" (4)
        ptp4lOpts: "-2 -s --summary_interval -4" (5)
        ptp4lConf: | (6)
          [ens1f0] (7)
          masterOnly 0
          [ens1f3] (8)
          masterOnly 1
          [global]
          #
          # Default Data Set
          #
          twoStepFlag 1
          #slaveOnly 1
          priority1 128
          priority2 128
          domainNumber 24
          #utc_offset 37
          clockClass 248
          clockAccuracy 0xFE
          offsetScaledLogVariance 0xFFFF
          free_running 0
          freq_est_interval 1
          dscp_event 0
          dscp_general 0
          dataset_comparison G.8275.x
          G.8275.defaultDS.localPriority 128
          #
          # Port Data Set
          #
          logAnnounceInterval -3
          logSyncInterval -4
          logMinDelayReqInterval -4
          logMinPdelayReqInterval -4
          announceReceiptTimeout 3
          syncReceiptTimeout 0
          delayAsymmetry 0
          fault_reset_interval 4
          neighborPropDelayThresh 20000000
          masterOnly 0
          G.8275.portDS.localPriority 128
          #
          # Run time options
          #
          assume_two_step 0
          logging_level 6
          path_trace_enabled 0
          follow_up_info 0
          hybrid_e2e 0
          inhibit_multicast_service 0
          net_sync_monitor 0
          tc_spanning_tree 0
          tx_timestamp_timeout 10
          # default is 1
          unicast_listen 0
          unicast_master_table 0
          unicast_req_duration 3600
          use_syslog 1
          verbose 0
          summary_interval -4
          kernel_leap 1
          check_fup_sync 0
          #
          # Servo Options
          #
          pi_proportional_const 0.0
          pi_integral_const 0.0
          pi_proportional_scale 0.0
          pi_proportional_exponent -0.3
          pi_proportional_norm_max 0.7
          pi_integral_scale 0.0
          pi_integral_exponent 0.4
          pi_integral_norm_max 0.3
          step_threshold 0
          first_step_threshold 0.00002
          max_frequency 900000000
          clock_servo pi
          sanity_freq_limit 200000000
          ntpshm_segment 0
          #
          # Transport options
          #
          transportSpecific 0x0
          ptp_dst_mac 01:1B:19:00:00:00
          p2p_dst_mac 01:80:C2:00:00:0E
          udp_ttl 1
          udp6_scope 0x0E
          uds_address /var/run/ptp4l
          #
          # Default interface options
          #
          clock_type BC
          network_transport UDPv4
          delay_mechanism E2E
          time_stamping hardware
          tsproc_mode filter
          delay_filter moving_median
          delay_filter_length 10
          egressLatency 0
          ingressLatency 0
          boundary_clock_jbod 1
          #
          # Clock description
          #
          productDescription ;;
          revisionData ;;
          manufacturerIdentity 00:00:00
          userDescription ;
          timeSource 0xA0
        phc2sysOpts: "-a -r" (9)
        ptpSchedulingPolicy: SCHED_OTHER (10)
        ptpSchedulingPriority: 65 (11)
      ptpClockThreshold: (12)
        holdOverTimeout: 5
        maxOffsetThreshold: 100
        minOffsetThreshold: -100
      recommend: (13)
      - profile: "profile1" (14)
        priority: 10 (15)
        match: (16)
        - nodeLabel: "node-role.kubernetes.io/worker" (17)
          nodeName: "compute-0.example.com" (18)
    1 The name of the PtpConfig CR.
    2 Specify an array of one or more profile objects.
    3 Specify a unique name for the profile object.
    4 Leave this field empty for a boundary clock. The interfaces are specified in the ptp4lConf field.
    5 Specify system config options for the ptp4l service, for example -2. The options should not include the network interface name -i <interface> and service config file -f /etc/ptp4l.conf because the network interface name and the service config file are automatically appended.
    6 Specify the configuration needed to start ptp4l as a boundary clock. For example, ens1f0 synchronizes from a grandmaster clock and ens1f3 synchronizes connected devices.
    7 The interface that receives the synchronization clock.
    8 The interface that synchronizes downstream connected devices.
    9 Specify system config options for the phc2sys service, for example -a -r. If this field is empty the PTP Operator does not start the phc2sys service.
    10 Scheduling policy for ptp4l and phc2sys processes. Default value is SCHED_OTHER. Use SCHED_FIFO on systems that support FIFO scheduling.
    11 Integer value from 1-65 used to set FIFO priority for ptp4l and phc2sys processes. Required if SCHED_FIFO is set for ptpSchedulingPolicy.
    12 Optional. If the ptpClockThreshold stanza is not present, default values are used for the ptpClockThreshold fields. The stanza shows the default ptpClockThreshold values.
    13 Specify an array of one or more recommend objects that define rules on how the profile should be applied to nodes.
    14 Specify the profile object name defined in the profile section.
    15 Specify the priority with an integer value between 0 and 99. A larger number gets lower priority, so a priority of 99 is lower than a priority of 10. If a node can be matched with multiple profiles according to rules defined in the match field, the profile with the higher priority is applied to that node.
    16 Specify match rules with nodeLabel or nodeName.
    17 Specify nodeLabel with the key of node.Labels from the node object by using the oc get nodes --show-labels command.
    18 Specify nodeName with node.Name from the node object by using the oc get nodes command.
  2. Create the CR by running the following command:

    $ oc create -f boundary-clock-ptp-config.yaml
Verification steps
  1. Check that the PtpConfig profile is applied to the node.

    1. Get the list of pods in the openshift-ptp namespace by running the following command:

      $ oc get pods -n openshift-ptp -o wide
      Example output
      NAME                            READY   STATUS    RESTARTS   AGE   IP               NODE
      linuxptp-daemon-4xkbb           1/1     Running   0          43m   10.1.196.24      compute-0.example.com
      linuxptp-daemon-tdspf           1/1     Running   0          43m   10.1.196.25      compute-1.example.com
      ptp-operator-657bbb64c8-2f8sj   1/1     Running   0          43m   10.129.0.61      control-plane-1.example.com
    2. Check that the profile is correct. Examine the logs of the linuxptp daemon that corresponds to the node you specified in the PtpConfig profile. Run the following command:

      $ oc logs linuxptp-daemon-4xkbb -n openshift-ptp -c linuxptp-daemon-container
      Example output
      I1115 09:41:17.117596 4143292 daemon.go:107] in applyNodePTPProfile
      I1115 09:41:17.117604 4143292 daemon.go:109] updating NodePTPProfile to:
      I1115 09:41:17.117607 4143292 daemon.go:110] ------------------------------------
      I1115 09:41:17.117612 4143292 daemon.go:102] Profile Name: profile1
      I1115 09:41:17.117616 4143292 daemon.go:102] Interface:
      I1115 09:41:17.117620 4143292 daemon.go:102] Ptp4lOpts: -2 -s --summary_interval -4
      I1115 09:41:17.117623 4143292 daemon.go:102] Phc2sysOpts: -a -r
      I1115 09:41:17.117626 4143292 daemon.go:116] ------------------------------------
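
For a boundary clock, you can additionally confirm that the upstream port settles into the SLAVE state and the downstream port into the MASTER state by filtering the daemon logs for port state transitions. The pod name is taken from the example above:

    $ oc logs linuxptp-daemon-4xkbb -n openshift-ptp -c linuxptp-daemon-container | grep -E 'SLAVE|MASTER'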

Configuring linuxptp services as an ordinary clock for Intel Columbiaville NIC

You can configure linuxptp services as a single ordinary clock for nodes with an Intel 800-Series Columbiaville NIC installed.

The following PtpConfig CR also configures PTP fast events by setting values for ptp4lOpts, ptp4lConf, and ptpClockThreshold.

Prerequisites
  • Install one or more Intel 800-Series Columbiaville NICs on the bare-metal hosts in your cluster.

  • Install the OpenShift CLI (oc).

  • Log in as a user with cluster-admin privileges.

Procedure
  1. Create the PtpConfig CR for nodes with a matching role.

    1. Save the following YAML in the cvl-ptp-ordinary-clock.yaml file:

      apiVersion: ptp.openshift.io/v1
      kind: PtpConfig
      metadata:
        name: slave
        namespace: openshift-ptp
      spec:
        profile:
        - name: "slave"
          interface: <interface_name> (1)
          ptp4lOpts: "-2 --summary_interval -4" (2)
          phc2sysOpts: "-a -r -m -n 24 -N 8 -R 16"
          ptpSchedulingPolicy: "SCHED_FIFO" (3)
          ptpSchedulingPriority: 65 (4)
          ptp4lConf: |
            [global]
            #
            # Default Data Set
            #
            twoStepFlag 1
            slaveOnly 0
            priority1 128
            priority2 128
            domainNumber 24
            #utc_offset 37
            clockClass 248
            clockAccuracy 0xFE
            offsetScaledLogVariance 0xFFFF
            free_running 0
            freq_est_interval 1
            dscp_event 0
            dscp_general 0
            dataset_comparison ieee1588
            G.8275.defaultDS.localPriority 128
            #
            # Port Data Set
            #
            logAnnounceInterval -3
            logSyncInterval -4
            logMinDelayReqInterval -4
            logMinPdelayReqInterval -4
            announceReceiptTimeout 3
            syncReceiptTimeout 0
            delayAsymmetry 0
            fault_reset_interval -128
            neighborPropDelayThresh 20000000
            masterOnly 0
            G.8275.portDS.localPriority 128
            #
            # Run time options
            #
            assume_two_step 0
            logging_level 6
            path_trace_enabled 0
            follow_up_info 0
            hybrid_e2e 0
            inhibit_multicast_service 0
            net_sync_monitor 0
            tc_spanning_tree 0
            tx_timestamp_timeout 10
            unicast_listen 0
            unicast_master_table 0
            unicast_req_duration 3600
            use_syslog 1
            verbose 0
            summary_interval 0
            kernel_leap 1
            check_fup_sync 0
            #
            # Servo Options
            #
            pi_proportional_const 0.0
            pi_integral_const 0.0
            pi_proportional_scale 0.0
            pi_proportional_exponent -0.3
            pi_proportional_norm_max 0.7
            pi_integral_scale 0.0
            pi_integral_exponent 0.4
            pi_integral_norm_max 0.3
            step_threshold 0.0
            first_step_threshold 0.00002
            max_frequency 900000000
            clock_servo pi
            sanity_freq_limit 200000000
            ntpshm_segment 0
            #
            # Transport options
            #
            transportSpecific 0x0
            ptp_dst_mac 01:1B:19:00:00:00
            p2p_dst_mac 01:80:C2:00:00:0E
            udp_ttl 1
            udp6_scope 0x0E
            uds_address /var/run/ptp4l
            #
            # Default interface options
            #
            clock_type OC
            network_transport UDPv4
            delay_mechanism E2E
            time_stamping hardware
            tsproc_mode filter
            delay_filter moving_median
            delay_filter_length 10
            egressLatency 0
            ingressLatency 0
            boundary_clock_jbod 0
            #
            # Clock description
            #
            productDescription ;;
            revisionData ;;
            manufacturerIdentity 00:00:00
            userDescription ;
            timeSource 0xA0
        ptpClockThreshold: (5)
          holdOverTimeout: 5
          maxOffsetThreshold: 100
          minOffsetThreshold: -100
        recommend:
        - profile: "slave"
          priority: 4
          match:
          - nodeLabel: "node-role.kubernetes.io/<mcp-role>" (6)
      1 Name of the network interface that connects to the upstream PTP leader clock, for example, ens787f1.
      2 Set --summary_interval to -4 to use PTP fast events.
      3 Scheduling policy for ptp4l and phc2sys processes. Default value is SCHED_OTHER. Use SCHED_FIFO on systems that support FIFO scheduling.
      4 Integer value from 1-65 used to set FIFO priority for ptp4l and phc2sys processes. Required if SCHED_FIFO is set for ptpSchedulingPolicy.
      5 Optional. If the ptpClockThreshold stanza is not present, default values are used for the ptpClockThreshold fields. The stanza shows the default ptpClockThreshold values.
      6 MachineConfig node role that corresponds to the cluster nodes where the Columbiaville NICs are installed, for example, worker-cnf.
    2. Create the PtpConfig CR:

      $ oc create -f cvl-ptp-ordinary-clock.yaml
Verification steps
  1. Check that the PtpConfig profile is applied to the node.

    1. Get the list of pods in the openshift-ptp namespace by running the following command:

      $ oc get pods -n openshift-ptp -o wide
      Example output
      NAME                            READY   STATUS    RESTARTS   AGE   IP               NODE
      linuxptp-daemon-4xkbb           1/1     Running   0          43m   10.1.196.24      compute-0.example.com
      linuxptp-daemon-tdspf           1/1     Running   0          43m   10.1.196.25      compute-1.example.com
      ptp-operator-657bbb64c8-2f8sj   1/1     Running   0          43m   10.129.0.61      control-plane-1.example.com
    2. Check that the profile is correct. Examine the logs of the linuxptp daemon that corresponds to the node you specified in the PtpConfig profile. Run the following command:

      $ oc logs linuxptp-daemon-4xkbb -n openshift-ptp -c linuxptp-daemon-container
      Example output
      I1115 09:41:17.117596 4143292 daemon.go:107] in applyNodePTPProfile
      I1115 09:41:17.117604 4143292 daemon.go:109] updating NodePTPProfile to:
      I1115 09:41:17.117607 4143292 daemon.go:110] ------------------------------------
      I1115 09:41:17.117612 4143292 daemon.go:102] Profile Name: slave
      I1115 09:41:17.117616 4143292 daemon.go:102] Interface: ens787f1
      I1115 09:41:17.117620 4143292 daemon.go:102] Ptp4lOpts: -2 --summary_interval -4
      I1115 09:41:17.117623 4143292 daemon.go:102] Phc2sysOpts: -a -r -m -n 24 -N 8 -R 16
      I1115 09:41:17.117626 4143292 daemon.go:116] ------------------------------------
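
To confirm that the Columbiaville NIC exposes a PTP hardware clock to the kernel, you can also check for PTP clock devices from a debug pod on the node. The node name is an example; each PTP-capable NIC registers a /dev/ptp* device:

    $ oc debug node/compute-0.example.com
    sh-4.4# chroot /host
    sh-4.4# ls /dev/ptp*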

Configuring FIFO priority scheduling for PTP hardware

In telco or other deployment configurations that require low latency performance, PTP daemon threads run in a constrained CPU footprint alongside the rest of the infrastructure components. By default, PTP threads run with the SCHED_OTHER policy. Under high load, these threads might not get the scheduling latency they require for error-free operation.

To mitigate potential scheduling latency errors, you can configure the PTP Operator linuxptp services to allow threads to run with a SCHED_FIFO policy. If SCHED_FIFO is set for a PtpConfig CR, ptp4l and phc2sys run in the parent container under chrt with a priority set by the ptpSchedulingPriority field of the PtpConfig CR.

Setting ptpSchedulingPolicy is optional, and is only required if you are experiencing latency errors.
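
For reference, chrt is the standard Linux utility for launching a process with a real-time scheduling policy. The following is a sketch of the kind of invocation that results, with the priority value 65 taken from the examples in this document; the actual command line is constructed by the Operator from the PtpConfig profile:

    # Run ptp4l under SCHED_FIFO (chrt -f) with priority 65:
    $ chrt -f 65 ptp4l -f /etc/ptp4l.conf -2 -s -m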

Procedure
  1. Edit the PtpConfig CR profile:

    $ oc edit PtpConfig -n openshift-ptp
  2. Change the ptpSchedulingPolicy and ptpSchedulingPriority fields:

    apiVersion: ptp.openshift.io/v1
    kind: PtpConfig
    metadata:
      name