
Overview

Red Hat OpenShift Container Platform provides developers and IT organizations with a cloud application platform for deploying new applications on secure, scalable resources with minimal configuration and management overhead. OpenShift Container Platform supports a wide selection of programming languages and frameworks, such as Java, Ruby, and PHP.

Built on Red Hat Enterprise Linux and Kubernetes, OpenShift Container Platform provides a secure and scalable multi-tenant operating system for today’s enterprise-class applications, while providing integrated application runtimes and libraries. OpenShift Container Platform enables organizations to meet security, privacy, compliance, and governance requirements.

About This Release

Red Hat OpenShift Container Platform version 3.9 (RHBA-2018:0489) is now available. This release is based on OpenShift Origin 3.9. New features, changes, bug fixes, and known issues that pertain to OpenShift Container Platform 3.9 are included in this topic.

To better synchronize versions of OpenShift Container Platform with Kubernetes, Red Hat did not publicly release OpenShift Container Platform 3.8 and, instead, is releasing OpenShift Container Platform 3.9 directly after version 3.7. See Installation for information on how this impacts installation and upgrade processes.

OpenShift Container Platform 3.9 is supported on RHEL 7.3, 7.4, and 7.5 with the latest packages from Extras, including Docker 1.13. It is also supported on Atomic Host 7.4.5 and newer. The docker-latest package is now deprecated.

TLS 1.2 is the only supported TLS protocol version in OpenShift Container Platform 3.4 and later. You must update if you are still using TLS 1.0 or TLS 1.1.

For initial installations, see the Installing a Cluster topics in the Installation and Configuration documentation.

To upgrade to this release from a previous version, see the Upgrading Clusters topic.

New Features and Enhancements

This release adds improvements related to the following components and concepts.

Container Orchestration

Soft Image Pruning

Now, when pruning images, you do not have to remove the actual image data; a soft prune updates only etcd storage.

It is safer to run pruning with --keep-tag-revisions and --keep-younger-than. After a soft prune, administrators can choose to run a hard prune, which is safe to run as long as the registry is put in read-only mode.
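
A minimal sketch of a soft prune invocation, assuming cluster-admin credentials and illustrative retention values (omit --confirm for a dry run):

$ oc adm prune images --keep-tag-revisions=3 --keep-younger-than=60m --confirm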

Red Hat CloudForms Management Engine 4.6 Container Management

The installation playbooks in OpenShift Container Platform 3.9 have been updated to support Red Hat CloudForms Management Engine (CFME) 4.6, which is now available. See the new Deploying Red Hat CloudForms on OpenShift Container Platform topics for further information.

In addition, this release includes the following new features and updates:

  • OpenShift Container Platform template provisioning

  • Offline OpenSCAP scans

  • Alert management: You can choose Prometheus (currently in Technology Preview) and use it in CloudForms.

  • Reporting enhancements

  • Provider updates

  • Chargeback enhancements

  • UX enhancements

CRI-O v1.9

CRI-O is a lightweight, native Kubernetes container runtime interface. By design, it provides only the runtime capabilities needed by the kubelet. CRI-O is designed to be part of Kubernetes and evolve in lock-step with the platform.

CRI-O brings:

  • A minimal and secure architecture.

  • Excellent scale and performance.

  • The ability to run any Open Container Initiative (OCI) or docker image.

  • Familiar operational tooling and commands.


To install and run CRI-O alongside docker, set the following in the [OSEv3:vars] section of the Ansible inventory file during cluster installation:

openshift_use_crio=true
openshift_crio_use_rpm=true (1)
(1) CRI-O can only be installed as an RPM. The previously available system container for CRI-O has been dropped from Technology Preview as of OpenShift Container Platform 3.9.

The atomic-openshift-node service must be RPM- or system container-based when using CRI-O; it cannot be docker container-based. The installer protects against using CRI-O with docker container nodes and will halt installation if detected.

When CRI-O use is enabled, it is installed alongside docker, which currently is required to perform build and push operations to the registry. Over time, temporary docker builds can accumulate on nodes. You can optionally set the following parameter to enable garbage collection. You must configure the node selector so that the garbage collector runs only on nodes where docker is configured for overlay2 storage.

openshift_crio_enable_docker_gc=true
openshift_crio_docker_gc_node_selector={'runtime': 'cri-o'}

For example, the above ensures that garbage collection runs only on nodes with the runtime: cri-o label. This can be helpful if you are running CRI-O only on some nodes while others run only docker.

See the upstream documentation for more information on CRI-O.

Storage

PV Resize

You can expand persistent volume claims online from OpenShift Container Platform for CNS GlusterFS, Cinder, and GCE PD volumes.

  1. Create a storage class with allowVolumeExpansion=true (a sample storage class follows this list).

  2. Create a PVC that uses the storage class and submit a claim.

  3. Edit the PVC to specify a new, increased size.

  4. The underlying PV is resized.
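
As a minimal sketch of step 1, assuming the GCE PD provisioner and an illustrative class name:

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: gce-pd-expandable
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-standard
allowVolumeExpansion: true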

End-to-end Online Expansion and Resize for Containerized GlusterFS PV

You can now expand persistent volume claims for CNS GlusterFS volumes online from OpenShift Container Platform.

Previously, this was only available from the Heketi CLI. You edit the PVC with the new size, which triggers a PV resize. This is fully qualified for GlusterFS-backed PVs. Gluster-block PV resize was added with RHEL 7.5.

  1. Add allowVolumeExpansion=true to the storage class.

  2. Run:

    $ oc edit pvc claim-name
  3. Edit the spec.resources.requests.storage field with the new value, as in the snippet after this procedure.
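
For illustration, assuming the claim originally requested 5Gi, step 3 amounts to raising the storage request in the claim's spec (the sizes here are hypothetical):

spec:
  resources:
    requests:
      storage: 10Gi  # increased from 5Gi; the provisioner then expands the underlying PV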

Container Native Storage GlusterFS PV Consumption Metrics Available from OpenShift Container Platform

Container Native Storage GlusterFS is extended to provide volume metrics, including consumption, through Prometheus or direct metrics queries.

Metrics are available from the PVC endpoint. This adds visibility to what is being allocated and what is being consumed. Previously, you could only see allocated size of the PVs. Now, you know how much is really consumed so, if needed, you can expand it before it runs out of space. This also allows administrators to do billing based on consumption, if needed.

Examples of added metrics include:

  • kubelet_volume_stats_capacity_bytes

  • kubelet_volume_stats_inodes

  • kubelet_volume_stats_inodes_free

  • kubelet_volume_stats_inodes_used

  • kubelet_volume_stats_used_bytes
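
As an illustrative example (not taken from the original documentation), these metrics can be combined in a Prometheus query to show how full each GlusterFS-backed claim is:

# Percentage of each volume's capacity that is consumed, labelled by persistentvolumeclaim
100 * kubelet_volume_stats_used_bytes / kubelet_volume_stats_capacity_bytes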

Automated CNS Deployment with OpenShift Container Platform Advanced Installation

In the OpenShift Container Platform advanced installer, the CNS block provisioner deployment is fixed, and a CNS uninstall playbook is added. This resolves the issue of CNS block deployment with OpenShift Container Platform and also provides a way to uninstall a failed installation of CNS.

CNS storage device details are added to the installer’s inventory file. The advanced installer manages configuration and deployment of CNS, file and block provisioners, registry, and ready-to-use PVs.
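
A minimal sketch of the inventory additions, with hypothetical host names and device paths; verify the exact variable names against the advanced installation documentation:

[OSEv3:children]
glusterfs

[OSEv3:vars]
# Deploy the gluster-block provisioner along with file-based CNS
openshift_storage_glusterfs_block_deploy=true

[glusterfs]
node1.example.com glusterfs_devices='[ "/dev/sdb" ]'
node2.example.com glusterfs_devices='[ "/dev/sdb" ]'
node3.example.com glusterfs_devices='[ "/dev/sdb" ]'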

Tenant-driven Storage Snapshotting (Technology Preview)

Tenant-driven storage snapshotting is currently in Technology Preview and not for production workloads.

Tenants now have the ability to leverage the underlying storage technology backing the persistent volume (PV) assigned to them to make a snapshot of their application data. Tenants can also now restore a given snapshot from the past to their current application.

An external provisioner is used to access the EBS, GCE PD, HostPath, and Cinder snapshotting APIs. This Technology Preview feature has been tested with EBS and HostPath. The tenant must stop the pods and start them manually.

  1. The administrator runs an external provisioner for the cluster. These are images from the Red Hat Container Catalog.

  2. The tenant creates a PVC and owns a PV from one of the supported storage solutions. The administrator must create a new StorageClass in the cluster with:

    kind: StorageClass
    apiVersion: storage.k8s.io/v1
    metadata:
      name: snapshot-promoter
    provisioner: volumesnapshot.external-storage.k8s.io/snapshot-promoter
  3. The tenant can create a snapshot of a PVC named gce-pvc; the resulting snapshot is called snapshot-demo:

    $ oc create -f snapshot.yaml
    
    apiVersion: volumesnapshot.external-storage.k8s.io/v1
    kind: VolumeSnapshot
    metadata:
      name: snapshot-demo
      namespace: myns
    spec:
      persistentVolumeClaimName: gce-pvc
  4. Now, the tenant can restore that snapshot to a new claim and use it in their pod:

    $ oc create -f restore.yaml

    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: snapshot-pv-provisioning-demo
      annotations:
        snapshot.alpha.kubernetes.io/snapshot: snapshot-demo
    spec:
      storageClassName: snapshot-promoter
      accessModes:        # illustrative; added so the claim is a complete, valid PVC
      - ReadWriteOnce
      resources:
        requests:
          storage: 1Gi    # illustrative size

Scale

Cluster Limits

Updated guidance around Cluster Limits for OpenShift Container Platform 3.9 is now available.

Device Plug-ins (Technology Preview)

This is a feature currently in Technology Preview and not for production workloads.

Device plug-ins allow you to use a particular device type (GPU, InfiniBand, or other similar computing resources that require vendor-specific initialization and setup) in your OpenShift Container Platform pod without needing to write custom code. The device plug-in provides a consistent and portable solution to consume hardware devices across clusters. The device plug-in provides support for these devices through an extension mechanism, which makes these devices available to containers, provides health checks of these devices, and securely shares them.

A device plug-in is a gRPC service running on the nodes (external to atomic-openshift-node.service) that is responsible for managing specific hardware resources.

See the Developer Guide for further conceptual information about Device Plug-ins.

CPU Manager (Technology Preview)

CPU Manager is a feature currently in Technology Preview and not for production workloads.

CPU Manager manages groups of CPUs and constrains workloads to specific CPUs.

CPU Manager is useful for workloads that have some of these attributes:

  • Require as much CPU time as possible.

  • Are sensitive to processor cache misses.

  • Are low-latency network applications.

  • Coordinate with other processes and benefit from sharing a single processor cache.
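
A sketch of the node-config.yaml settings that turn on the static CPU Manager policy, based on the upstream kubelet options; the reserved CPU amount is illustrative:

kubeletArguments:
  feature-gates:
  - CPUManager=true
  cpu-manager-policy:
  - static
  cpu-manager-reconcile-period:
  - 5s
  kube-reserved:
  - cpu=500m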

See Using CPU Manager for more information.

Device Manager (Technology Preview)

Device Manager is a feature currently in Technology Preview and not for production workloads.

Some users want to set resource limits for hardware devices within their pod definition and have the scheduler find the node in the cluster with those resources. At the same time, Kubernetes needed a way for hardware vendors to advertise their resources to the kubelet without forcing them to change core Kubernetes code.

The kubelet now houses a device manager that is extensible through plug-ins. You load the driver support at the node level. Then, you or the vendor write a plug-in that listens for requests to stop, start, attach, or assign the requested hardware resources seen by the drivers. This plug-in is deployed to all the nodes via a DaemonSet.
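
For illustration, once a vendor plug-in advertises a device resource (the nvidia.com/gpu name below is only an example), a pod requests it through ordinary resource limits:

apiVersion: v1
kind: Pod
metadata:
  name: gpu-example              # hypothetical pod
spec:
  containers:
  - name: app
    image: example.com/cuda-app  # hypothetical image
    resources:
      limits:
        nvidia.com/gpu: 1        # extended resource advertised by the device plug-in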

See Using Device Manager for more information.

Huge Pages (Technology Preview)

Huge pages is a feature currently in Technology Preview and not for production workloads.

Memory is managed in blocks known as pages. On most systems, a page is 4Ki. 1Mi of memory is equal to 256 pages; 1Gi of memory is 262,144 pages, and so on. CPUs have a built-in memory management unit that manages a list of these pages in hardware. The Translation Lookaside Buffer (TLB) is a small hardware cache of virtual-to-physical page mappings. If the virtual address passed in a hardware instruction can be found in the TLB, the mapping can be determined quickly. If not, a TLB miss occurs, and the system falls back to slower, software-based address translation, resulting in performance issues. Since the size of the TLB is fixed, the only way to reduce the chance of a TLB miss is to increase the page size.

A huge page is a memory page that is larger than 4Ki. On x86_64 architectures, there are two common huge page sizes: 2Mi and 1Gi. Sizes vary on other architectures. In order to use huge pages, code must be written so that applications are aware of them. Transparent Huge Pages (THP) attempt to automate the management of huge pages without application knowledge, but they have limitations. In particular, they are limited to 2Mi page sizes. THP can lead to performance degradation on nodes with high memory utilization or fragmentation due to defragmenting efforts of THP, which can lock memory pages. For this reason, some applications may be designed to (or recommend) usage of pre-allocated huge pages instead of THP.

In OpenShift Container Platform, applications in a pod can allocate and consume pre-allocated huge pages.
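
A minimal sketch of a pod that consumes pre-allocated 2Mi huge pages, assuming the node has them configured; the names and sizes are illustrative:

apiVersion: v1
kind: Pod
metadata:
  name: hugepages-example
spec:
  containers:
  - name: app
    image: example.com/app     # hypothetical image
    resources:
      limits:
        hugepages-2Mi: 100Mi   # 50 huge pages of 2Mi each
        memory: 100Mi
    volumeMounts:
    - mountPath: /hugepages
      name: hugepage
  volumes:
  - name: hugepage
    emptyDir:
      medium: HugePages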

See Managing Huge Pages for more information.

Networking

Semi-automatic Namespace-wide Egress IP

All outgoing external connections from a project share a single, fixed source IP address and send all traffic via that IP, so that external firewalls can recognize the application associated with a packet.

The feature is semi-automatic: it implements the "traffic" half of the planned fully automatic namespace-wide egress IP feature, so namespaces with egress IPs send all traffic via that IP. It does not yet implement the "management" half; nothing automatically assigns egress IPs to nodes, so the administrator must do that manually.
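
A sketch of the manual assignment, with a hypothetical project name, node name, and IP address:

$ oc patch netnamespace myproject -p '{"egressIPs": ["10.0.0.100"]}'
$ oc patch hostsubnet node1.example.com -p '{"egressIPs": ["10.0.0.100"]}'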

See Managing Networking for more information.

Support Our Own HAProxy RPM for Consumption by the Router

Route configuration changes and process upgrades performed under heavy load have typically required a stop and start sequence of certain services, causing temporary outages.

In OpenShift Container Platform 3.9, HAProxy 1.8 sees no difference between updates and upgrades; a new process is started with the new configuration, and the listening socket's file descriptor is transferred from the old process to the new one so the connection is never closed. The change is seamless and lays the groundwork for future capabilities, such as HTTP/2.

Master

StatefulSets, DaemonSets, and Deployments Now Supported

In OpenShift Container Platform, StatefulSets, DaemonSets, and Deployments are now stable, supported, and out of Technology Preview.

Central Audit Capability

Provides auditing of items that administrators would like to see, including:

  • The event timestamp.

  • The activity that generated the entry.

  • The API endpoint that was called.

  • The HTTP output.

  • The item changed due to an activity, with details of the change.

  • The user name of the user that initiated an activity.

  • The name of the namespace the event occurred in, where possible.

  • The status of the event, either success or failure.

Provides auditing of items that administrators would like to trace, including:

  • User login and logout from the web interface, including session timeouts and unauthorized access attempts.

  • Account creation, modification, or removal.

  • Account role or policy assignment or de-assignment.

  • Scaling of pods.

  • Creation of new project or application.

  • Creation of routes and services.

  • Triggers of builds and/or pipelines.

  • Addition, removal, or claiming of persistent volumes.

Set up auditing in the master configuration file (master-config.yaml), and restart the master services:

auditConfig:
  auditFilePath: "/var/log/audit-ocp.log"
  enabled: true
  maximumFileRetentionDays: 10
  maximumFileSizeMegabytes: 10
  maximumRetainedFiles: 10
  logFormat: json
  policyConfiguration: null
  policyFile: /etc/origin/master/audit-policy.yaml
  webHookKubeConfig: ""
  webHookMode:

Example log output:

{"kind":"Event","apiVersion":"audit.k8s.io/v1beta1","metadata":{"creationTimestamp":"2017-09-29T09:46:39Z"},"level":"Metadata","timestamp":"2017-09-29T09:46:39Z","auditID":"72e66a64-c3e5-4201-9a62-6512a220365e","stage":"ResponseComplete","requestURI":"/api/v1/securitycontextconstraints","verb":"create","user":{"username":"system:admin","groups":["system:cluster-admins","system:authenticated"]},"sourceIPs":["10.8.241.75"],"objectRef":{"resource":"securitycontextconstraints","name":"scc-lg","apiVersion":"/v1"},"responseStatus":{"metadata":{},"code":201}}

Add Support for Deployments to oc status

The oc status command provides an overview of the current project. It now provides similar output for upstream Kubernetes deployments as it already does for downstream DeploymentConfigs, with a nested deployment set:

$ oc status
In project My Project (myproject) on server https://127.0.0.1:8443

svc/ruby-deploy - 172.30.174.234:8080
  deployment/ruby-deploy deploys istag/ruby-deploy:latest <-
    bc/ruby-deploy source builds https://github.com/sclorg/ruby-ex.git on istag/ruby-22-centos7:latest
      build #1 failed 5 hours ago - bbb6701: Merge pull request #18 from durandom/master (Joe User <joeuser@users.noreply.github.com>)
    deployment #2 running for 4 hours - 0/1 pods (warning: 53 restarts)
    deployment #1 deployed 5 hours ago

Compare this to the output from OpenShift Container Platform 3.7:

$ oc status
In project dc-test on server https://127.0.0.1:8443

svc/ruby-deploy - 172.30.231.16:8080
  pod/ruby-deploy-5c7cc559cc-pvq9l runs test

Dynamic Admission Controller Follow-up (Technology Preview)

Dynamic Admission Controller Follow-up is a feature currently in Technology Preview and not for production workloads.

An admission controller is a piece of code that intercepts requests to the Kubernetes API server prior to persistence of the object, but after the request is authenticated and authorized. Example use cases include mutation of pod resources and security response.

See Custom Admission Controllers for more information.

Feature Gates

Platform administrators can now turn off specific features for the entire platform. This assists in controlling access to alpha, beta, or Technology Preview features in production clusters.

Feature gates use key=value pairs in the master and kubelet configuration files to describe the feature you want to enable or block.

Control plane (master-config.yaml):

kubernetesMasterConfig:
  apiServerArguments:
    feature-gates:
    - CPUManager=true

Kubelet (node-config.yaml):

kubeletArguments:
  feature-gates:
  - DevicePlugin=true

Installation

Improved Playbook Performance

OpenShift Container Platform 3.9 introduces significant refactoring and restructuring of the playbooks to improve performance. This includes:

  • Restructured playbooks to push all fact-gathering and common dependencies up into the initialization plays so they are only called once rather than each time a role needs access to their computed values.

  • Refactored playbooks to limit the hosts they touch to only those that are truly relevant to the playbook.

Quick Installation (Deprecated)

Quick Installation is now deprecated in OpenShift Container Platform 3.9 and will be completely removed in a future release.

Quick installation will only be capable of installing 3.9. It will not be able to upgrade from 3.7 or 3.8 to 3.9.

Automated 3.7 to 3.9 Control Plane Upgrade

The installer automatically handles stepping the control plane from 3.7 to 3.8 to 3.9 and node upgrade from 3.7 to 3.9.

Control plane components (API, controllers, and nodes on control plane hosts) are upgraded seamlessly from 3.7 to 3.8 to 3.9. Data migration happens pre- and post- OpenShift Container Platform 3.8 and 3.9 control plane upgrades. Other control plane components (router, registry, service catalog, and brokers) are upgraded from OpenShift Container Platform 3.7 to 3.9. Nodes (node, docker, ovs) are upgraded directly from OpenShift Container Platform 3.7 to 3.9 with only one drain of nodes. OpenShift Container Platform 3.7 nodes operate indefinitely against 3.8 masters should the upgrade process need to pause in this state. Logging and metrics are updated from OpenShift Container Platform 3.7 to 3.9.

It is recommended that you upgrade the control plane and nodes independently. You can still perform the upgrade through an all-in-one playbook, but rollback is more difficult. Playbooks do not allow for a clean installation of OpenShift Container Platform 3.8.
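
As an illustration of the independent upgrade, the control plane and node playbooks are run separately. The playbook paths below are illustrative of the openshift-ansible layout; verify them against the Upgrading Clusters documentation:

# Upgrade control plane hosts, stepping through 3.8 to 3.9 automatically
$ ansible-playbook -i <inventory_file> \
    /usr/share/ansible/openshift-ansible/playbooks/common/openshift-cluster/upgrades/v3_9/upgrade_control_plane.yml

# Then upgrade the remaining nodes directly from 3.7 to 3.9
$ ansible-playbook -i <inventory_file> \
    /usr/share/ansible/openshift-ansible/playbooks/common/openshift-cluster/upgrades/v3_9/upgrade_nodes.yml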

See Upgrading Clusters for more information.

Metrics and Logging

Journald for System Logs and JSON File for Container Logs

The docker log driver is set to json-file as the default for all nodes. The docker log-driver can be set to journald, but there is no log rate throttling with the journald driver, so there is always a risk of denial-of-service attacks from rogue containers.

Fluentd will automatically determine which log driver (journald or json-file) the container runtime is using. Fluentd will now always read logs from journald and also /var/log/containers (if log-driver is set to json-file). Fluentd will no longer read from /var/log/messages.

See Aggregating Container Logs for more information.

Prometheus (Technology Preview)

Prometheus remains in Technology Preview and is not for production workloads. Prometheus, AlertManager, and AlertBuffer versions are now updated and node-exporter is now included:

  • Prometheus 2.1.0

  • Alertmanager 0.14.0

  • AlertBuffer 0.2

  • node_exporter 0.15.2

You can deploy Prometheus on an OpenShift Container Platform cluster, collect Kubernetes and infrastructure metrics, and get alerts. You can see and query metrics and alerts on the Prometheus web dashboard. Alternatively, you can bring your own Grafana and hook it up to Prometheus.

See Prometheus on OpenShift for more information.

Developer Experience

Jenkins Memory Usage Improvements

Previously, Jenkins worker pods would often consume too much or too little memory. Now, a startup script intelligently looks at pod limits, and environment variables are set appropriately to ensure that spawned JVMs respect those limits.

CLI Plug-ins (Technology Preview)

CLI plug-ins are currently in Technology Preview and not for production workloads.

Usually called plug-ins or binary extensions, this feature allows you to extend the default set of oc commands available and, therefore, allows you to perform new tasks.
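
A minimal sketch of a binary extension, following the kubectl-style plugin layout that oc inherits; the plugin name and command here are hypothetical:

# ~/.kube/plugins/hello/plugin.yaml
name: "hello"
shortDesc: "Prints a friendly greeting"
command: "echo Hello from an oc plugin"

With that file in place, the extension would be invoked as oc plugin hello.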

See Extending the CLI for information on how to install and write extensions for the CLI.

Ability to Specify Default Tolerations via the buildconfig Defaulter

Previously, there was not a way to set a default toleration on build pods so they could be placed on build-specific nodes. The build defaulter is now updated to allow the specification of a toleration value, which is applied to the build pod upon creation.

Default Hard Eviction Thresholds

OpenShift Container Platform uses the following default configuration for eviction-hard.

...
kubeletArguments:
  eviction-hard:
  - memory.available<100Mi
  - nodefs.available<10%
  - nodefs.inodesFree<5%
  - imagefs.available<15%
...

See Handling Out of Resource Errors for more information.

Web Console

Catalog from within Project View

Quickly get to the catalog from within a project by clicking Catalog in the left navigation.

Catalog tab

Quickly Search the Catalog from within Project View

To quickly find services from within project view, type in your search criteria.

Search the catalog

Select Preferred Home Page

You can now jump straight to certain pages after login. Access the menu from the account drop-down, choose your preferred page, then log out and log back in.