
Overview

Red Hat OpenShift Container Platform provides developers and IT organizations with a cloud application platform for deploying new applications on secure, scalable resources with minimal configuration and management overhead. OpenShift Container Platform supports a wide selection of programming languages and frameworks, such as Java, Ruby, and PHP.

Built on Red Hat Enterprise Linux and Kubernetes, OpenShift Container Platform provides a secure and scalable multi-tenant operating system for today’s enterprise-class applications, while providing integrated application runtimes and libraries. OpenShift Container Platform enables organizations to meet security, privacy, compliance, and governance requirements.

About This Release

Red Hat OpenShift Container Platform version 3.7 (RHSA-2017:3188) is now available. This release is based on OpenShift Origin 3.7. New features, changes, bug fixes, and known issues that pertain to OpenShift Container Platform 3.7 are included in this topic.

OpenShift Container Platform 3.7 is supported on RHEL 7.3, 7.4.2, and 7.5, as well as Atomic Host 7.4.2 and newer, with the latest packages from Extras, including Docker 1.12.

TLSv1.2 is the only supported TLS version in OpenShift Container Platform 3.4 and later. You must update if you are using TLSv1.0 or TLSv1.1.

For initial installations, see the Installing a Cluster topics in the Installation and Configuration documentation.

To upgrade to this release from a previous version, see the Upgrading a Cluster topics in the Installation and Configuration documentation.

New Features and Enhancements

This release adds improvements related to the following components and concepts.

Container Orchestration

Kubernetes Upstream

Many core features Google announced in June for Kubernetes 1.7 were the result of OpenShift engineering. Red Hat continues to influence the product in the areas of storage, networking, resource management, authentication and authorization, multi-tenancy, security, service deployments, templating, and controller functionality.

CRI-O (Technology Preview)

This feature is currently in Technology Preview and not for production workloads. Builds do not yet work with CRI-O.

CRI-O v1.0 is a lightweight, native Kubernetes container runtime interface. By design, it provides only the runtime capabilities needed by the kubelet. CRI-O is designed to be part of Kubernetes and evolve in lock-step with the platform.

CRI-O brings:

  • A minimal and secure architecture.

  • Excellent scale and performance.

  • The ability to run any Open Container Initiative (OCI) or docker image.

  • Familiar operational tooling and commands.


To install and run CRI-O alongside docker, set the following in the [OSEv3:vars] section of the Ansible inventory file during cluster installation:

openshift_use_crio=true

This setting pulls the openshift3/cri-o system container image from the Red Hat Registry by default. If you want to use an alternative CRI-O system container image from another registry, you can also override the default using the following variable:

openshift_crio_systemcontainer_image_override=<registry>/<repo>/<image>:<tag>

The atomic-openshift-node service must be RPM- or system container-based when using CRI-O; it cannot be docker container-based. The installer protects against using CRI-O with docker container nodes and halts the installation if this configuration is detected.

When CRI-O use is enabled, it is installed alongside docker, which currently is required to perform build and push operations to the registry. Over time, temporary docker builds can accumulate on nodes. You can optionally set the following parameter to enable garbage collection. You must configure the node selector so that the garbage collector runs only on nodes where docker is configured for overlay2 storage.

openshift_crio_enable_docker_gc=true
openshift_crio_docker_gc_node_selector={'runtime': 'cri-o'}

For example, the above ensures that the garbage collector runs only on nodes with the runtime: cri-o label. This can be helpful if you are running CRI-O only on some nodes, while others run only docker.

See the upstream documentation for more information on CRI-O.

Cluster-wide Tolerations and Per-namespace Tolerations to Control Pod Placement

In a multi-tenant environment, you want to leverage admission controllers to define rules that govern pod placement across the cluster when a tenant does not set a toleration.

The following are offered to administrators, with the namespace setting overriding the cluster setting:

  • Cluster-wide and per-namespace default toleration for pods.

  • Cluster-wide and per-namespace white-listing of toleration for pods.

Cluster-wide Off Example
admissionConfig:
  pluginConfig:
    PodTolerationRestriction:
      configuration:
        kind: DefaultAdmissionConfig
        apiVersion: v1
        disable: true
Cluster-wide On Example
admissionConfig:
  pluginConfig:
    PodTolerationRestriction:
      configuration:
        apiVersion: podtolerationrestriction.admission.k8s.io/v1alpha1
        kind: Configuration
        default:
         - key: key3
           value: value3
        whitelist:
         - key: key1
           value: value1
         - key: key3
           value: value3
Namespace-specific Example
apiVersion: v1
kind: Namespace
metadata:
  annotations:
    openshift.io/description: ""
    openshift.io/display-name: ""
    openshift.io/sa.scc.mcs: s0:c8,c7
    openshift.io/sa.scc.supplemental-groups: 1000070000/10000
    openshift.io/sa.scc.uid-range: 1000070000/10000
    scheduler.alpha.kubernetes.io/defaultTolerations: '[ { "key": "key1", "value":"value1" }]'
    scheduler.alpha.kubernetes.io/tolerationsWhitelist: '[ { "key": "key1", "value":
      "value1" }, { "key": "key2", "value": "value2" } ]'
  generateName: dma-
spec:
  finalizers:
  - openshift.io/origin
  - kubernetes

Security

Documented Private and Public Key Configurations and Crypto Levels

While OpenShift Container Platform is a secure-by-default implementation of Kubernetes, there is now documentation on which security protocols and ciphers are used.

OpenShift Container Platform leverages Transport Layer Security (TLS) cipher suites and JSON Web Algorithm (JWA) crypto algorithms, and uses external libraries such as the Generic Security Service Application Program Interface (GSSAPI) and libgpgme.

Private and public key configurations and crypto levels are now documented for OpenShift Container Platform.

Node Authorizer and Node Restriction Admission Plug-in

Pods can no longer gain information from secrets, configuration maps, PVs, PVCs, or API objects belonging to other nodes.

The node authorizer governs which API operations a kubelet can perform, spanning read-, write-, and auth-related operations. For the admission plug-in to know the identity of the node and enforce the rules, nodes are provisioned with credentials that identify them with the user name system:node:<nodename> and group system:nodes.

These enforcements are in place by default on all new installations of OpenShift Container Platform 3.7. For upgrades from OpenShift Container Platform 3.6, they are not in place because the system:nodes RBAC grant is carried over from OpenShift Container Platform 3.6. To turn the enforcements on, run:

# oc adm policy remove-cluster-role-from-group system:node system:nodes

Advanced Auditing

With Advanced Auditing, administrators are now exposed to more information from each API call within the audit trail. This provides deeper traceability of what is occurring across the cluster. All login events, as well as modifications to role bindings and SCCs, are also captured at the default logging level.

OpenShift Container Platform now has an audit policyFile or policyConfiguration setting where administrators can filter what they want to capture.
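
For example, advanced audit is enabled in the auditConfig stanza of master-config.yaml. The following is a minimal sketch; the file paths and retention values are illustrative assumptions:

auditConfig:
  enabled: true
  auditFilePath: "/var/log/openshift-audit/audit.log"
  maximumFileRetentionDays: 10
  maximumFileSizeMegabytes: 10
  maximumRetainedFiles: 10
  logFormat: json
  policyFile: "/etc/origin/master/audit-policy.yaml"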

See Advanced Audit for more information.

Complete Upstreaming of RBAC, Then Downstreaming it Back into OpenShift

The rolebinding and RBAC experience is now the same across all Kubernetes distributions.

Administrators do not have to do anything for this migration to occur. The upgrade process to OpenShift Container Platform 3.7 offers a seamless experience. Now, the user experience is consistent with upstream.

A role can be defined within a namespace with a Role, or cluster-wide with a ClusterRole.

A RoleBinding or ClusterRoleBinding binds a role to subjects. Subjects can be groups, users, or service accounts. A role binding grants the permissions defined in a role.
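
As a minimal sketch, the following RoleBinding grants the built-in view cluster role to a single user within a project; the project and user names are assumptions:

apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:
  name: view-alice
  namespace: myproject
subjects:
- kind: User
  name: alice
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: view
  apiGroup: rbac.authorization.k8s.io

The same result can be achieved with oc adm policy add-role-to-user view alice -n myproject.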

Issue Longer-lived API Tokens to OAuth Clients

Administrators now have the ability to set different token timeouts for the different ways users connect to OpenShift Container Platform (for example, via the oc command line, from a GitHub authentication, or from the web console).

Administrators can edit oauthclients and set the accessTokenMaxAgeSeconds to a time value in seconds that meets their needs.

There are three possible OAuth client types:

  1. openshift-web-console - The client used to request tokens for the OpenShift web console.

  2. openshift-browser-client - The client used to request tokens at /oauth/token/request with a user-agent that can handle interactive logins, such as authenticating through GitHub, Google, and so on.

  3. openshift-challenging-client - The client used to request tokens with a user-agent that can handle WWW-Authenticate challenges, such as the oc command line.

    • When accessTokenMaxAgeSeconds is set to 0, tokens do not expire.

    • When left blank, OpenShift Container Platform uses the definition in master-config.

    • Edit the client of interest via:

      # oc edit oauthclients openshift-browser-client
    • Set accessTokenMaxAgeSeconds to 600.

    • Check the setting via:

      # oc get oauthaccesstoken
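
A minimal sketch of the relevant fields after editing the openshift-browser-client, assuming a 600-second token lifetime (unrelated fields are omitted):

apiVersion: v1
kind: OAuthClient
metadata:
  name: openshift-browser-client
accessTokenMaxAgeSeconds: 600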

See Other API Objects for more information.

Security Context Constraints Now Supports flexVolume

flexVolumes allow users to integrate with new APIs easily by mounting in the items needed for integration, for example, bind mounting certain files without overwriting whole directories in order to integrate with Kerberos.

Administrators are now able to grant users access to specific flexVolume driver names. Previously, administrators could only turn flexVolumes on or off.
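
A minimal sketch of the relevant SCC fields, assuming the field names allowedFlexVolumes and driver, and a hypothetical driver named example/intranet-secrets (only the volume-related fields are shown):

kind: SecurityContextConstraints
apiVersion: v1
metadata:
  name: flex-example
...
volumes:
- flexVolume
allowedFlexVolumes:
- driver: example/intranet-secrets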

Storage

Local Storage Persistent Volumes (Technology Preview)

Local storage persistent volumes are currently in Technology Preview and not for production workloads.

Local persistent volumes (PVs) allow tenants to request storage that is local to a node through the regular persistent volume claim (PVC) process, without needing to know the node. Local storage is commonly used in data store applications.

The administrator needs to create the local storage devices on the nodes, mount them under directories, and then manually create the persistent volumes (PVs). Alternatively, they can use an external provisioner and feed it the node configuration via ConfigMaps.

Example persistent volume named example-local-pv that some tenants can now claim:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: example-local-pv
  annotations:
    "volume.alpha.kubernetes.io/node-affinity": '{
      "requiredDuringSchedulingIgnoredDuringExecution": {
        "nodeSelectorTerms": [
          { "matchExpressions": [
            { "key": "kubernetes.io/hostname",
              "operator": "In",
              "values": ["my-node"]
            }
          ]}
         ]}
        }'
spec:
  capacity:
    storage: 5Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  local:
    path: /mnt/disks/vol1
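
A tenant can then claim the volume with an ordinary PVC; the following is a minimal sketch (the claim name is an assumption):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-local-claim
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
  storageClassName: local-storage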

Tenant-driven Storage Snapshotting (Technology Preview)

Tenant-driven storage snapshotting is currently in Technology Preview and not for production workloads.

Tenants now have the ability to leverage the underlying storage technology backing the persistent volume (PV) assigned to them to make a snapshot of their application data. Tenants can also now restore a given snapshot from the past to their current application.

An external provisioner is used to access the EBS, GCE persistent disk, HostPath, and Cinder snapshotting APIs. EBS and HostPath have been tested for this Technology Preview feature. The tenant must stop the pods and start them manually.

  1. The administrator runs an external provisioner for the cluster. These are images from the Red Hat Container Catalog.

  2. The tenant creates a PVC and owns a PV from one of the supported storage solutions. The administrator must create a new StorageClass in the cluster with:

    kind: StorageClass
    apiVersion: storage.k8s.io/v1
    metadata:
      name: snapshot-promoter
    provisioner: volumesnapshot.external-storage.k8s.io/snapshot-promoter
  3. The tenant can create a snapshot of a PVC named gce-pvc and the resulting snapshot will be called snapshot-demo.

    $ oc create -f snapshot.yaml
    
    apiVersion: volumesnapshot.external-storage.k8s.io/v1
    kind: VolumeSnapshot
    metadata:
      name: snapshot-demo
      namespace: myns
    spec:
      persistentVolumeClaimName: gce-pvc
  4. Now, they can restore that snapshot to a new PVC and use it in their pod.

    $ oc create -f restore.yaml
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: snapshot-pv-provisioning-demo
      annotations:
        snapshot.alpha.kubernetes.io/snapshot: snapshot-demo
    spec:
      storageClassName: snapshot-promoter
      accessModes:
      - ReadWriteOnce
      resources:
        requests:
          storage: 5Gi # the requested size must fit the snapshotted data

Storage Classes Get Zones

Public clouds do not allow storage to cross zones or regions, so tenants sometimes need the ability to specify a particular zone.

In OpenShift Container Platform 3.7, administrators can now leverage a zone’s definition within the StorageClass:

kind: StorageClass
apiVersion: storage.k8s.io/v1beta1
metadata:
  name: slow
provisioner: kubernetes.io/<provisioner>
parameters:
  type: pd-standard
  zones: zone1,zone2

Increased Persistent Volume Density Support by CNS

Container-native storage (CNS) on OpenShift Container Platform 3.7 now supports much higher persistent volume density (three times more) to support a large number of applications at scale. This is due to the introduction of brick-multiplexing support in GlusterFS.

Over 1,000 volumes have been successfully tested in a 3-node cluster with 32 GB of RAM per node available to GlusterFS. Also, 300 block PVs are now supported on 3-node CNS.

CNS Multi-protocol (File, Block, and S3) Support for OpenShift

Container-native storage (CNS) is now extended to support iSCSI and S3 back ends for OpenShift Container Platform. Heketi is enhanced to support persistent volume (PV) expansion, volume options, and HA.

A block device-based RWO implementation is added to CNS to improve the performance of Elasticsearch, PostgreSQL, and so on. With OpenShift Container Platform 3.7, Elasticsearch and Cassandra are fully supported.

CNS Full Support for Infrastructure Services

Container-native storage (CNS) now fully supports all OpenShift Container Platform infrastructure services: registry, logging, and metrics.

OpenShift Container Platform logging (with Elasticsearch) and OpenShift Container Platform metrics (with Cassandra) are fully supported on persistent volumes backed by CNS/CRS iSCSI block storage.

The OpenShift Container Platform registry is hosted on CNS/CRS by RWX persistent volumes, providing high availability and redundancy through Gluster architecture.

Logging and metrics were tested at scale with 1000+ pods.

Automated Container Native Storage Deployment with OpenShift Advanced Installation

OpenShift Container Platform 3.7 now includes an integrated and simplified installation of container-native storage (CNS) through the advanced installer. The advanced installer is enhanced with automated and integrated support for deploying CNS, including the block provisioner, S3 provisioner, and file provisioning, with OpenShift Container Platform and CNS correctly configured out of the box. The CNS storage device details are added to the installer’s inventory file. The installer manages configuration and deployment of CNS, its dynamic provisioners, and other pertinent details.
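
A minimal sketch of the relevant inventory additions, assuming three nodes each with a spare block device at /dev/sdb; variable names may differ slightly between openshift-ansible releases:

[OSEv3:children]
masters
nodes
glusterfs

[OSEv3:vars]
openshift_storage_glusterfs_namespace=glusterfs
openshift_storage_glusterfs_name=storage

[glusterfs]
node1.example.com glusterfs_devices='[ "/dev/sdb" ]'
node2.example.com glusterfs_devices='[ "/dev/sdb" ]'
node3.example.com glusterfs_devices='[ "/dev/sdb" ]'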

Official FlexVolume Support for Non-storage Use Cases

There is now a supported interface to allow you to bind and mount in content from a running pod. FlexVolume is a script interface that runs on the kubelet and offers five main functions to help you mount in content such as device drivers, secrets, and certificates as bind mounts to the container from the host (a sketch of a driver script follows this list):

  • init - Initialize the volume driver.

  • attach - Attach the volume to the host.

  • mount - Mount the volume on the host. This is the step that makes the volume available to the host by mounting it under /var/lib/kubelet.

  • unmount - Unmount the volume.

  • detach - Detach the volume from the host.
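
The following is a minimal sketch of a driver script implementing this interface. Argument handling is simplified; a real driver parses the JSON options passed by the kubelet, and only the operations listed above are shown:

#!/bin/bash
# Skeleton FlexVolume driver: the kubelet invokes the script with one of the
# operations below and expects a JSON status object on stdout.
op=$1
shift
case "$op" in
  init)
    echo '{"status": "Success"}' ;;
  attach|detach)
    echo '{"status": "Success"}' ;;
  mount)
    mntpath=$1
    mkdir -p "$mntpath"
    # A real driver would mount a device or bind mount content here.
    echo '{"status": "Success"}' ;;
  unmount)
    mntpath=$1
    umount "$mntpath" 2>/dev/null
    echo '{"status": "Success"}' ;;
  *)
    echo '{"status": "Not supported"}' ;;
esac
exit 0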

Scale

Cluster Limits

Updated guidance around Cluster Limits for OpenShift Container Platform 3.7 is now available.

Updated Tuned Profile Hierarchy

The Tuned Profile Hierarchy is updated as of 3.7.

Cluster Loader

Guidance regarding use of Cluster Loader is now available with the release of OpenShift Container Platform 3.7. Cluster Loader is a tool that deploys large numbers of various objects to a cluster, which creates user-defined cluster objects. Build, configure, and run Cluster Loader to measure performance metrics of your OpenShift Container Platform deployment at various cluster states.

Guidance on Overlay Graph Driver with SELinux

In OpenShift Container Platform 3.7, guidance about the benefits of using the Overlay Graph Driver with SELinux is now available.

Providing Storage to an etcd Node Using PCI Passthrough with OpenStack

Guidance is now available on providing storage to an etcd node using PCI passthrough with OpenStack.

Networking

Network Policy

Network Policy is now fully supported in OpenShift Container Platform 3.7.

Network Policy is a specification of how groups of pods are allowed to communicate with each other and other network endpoints. It provides fine-grained network namespace isolation using labels and port specifications.

Network Policy is available only if you use the ovs-networkpolicy network plugin.

For example, the allow-to-red policy specifies that all red pods in the same namespace as the NetworkPolicy object allow traffic from any pod in any namespace.

Policy applied to project
kind: NetworkPolicy
apiVersion: extensions/v1beta1
metadata:
  name: allow-to-red
spec:
  podSelector:
    matchLabels:
      type: red
  ingress:
  - {}

See Managing Networking for more information.

Cluster IP Range Now More Flexible

Cluster IP ranges are now more flexible by allowing multiple subnets for hosts. This provides the capability to allocate multiple, smaller IP address ranges for the cluster. This makes it easier to migrate from one allocated IP range to another.

There are multiple comma-delimited CIDRs in the configuration file. Each node is allocated only a single subnet from within any of the available ranges. Within a single CIDR, you cannot allocate different-sized host subnets, nor can you use this feature to change the host subnet size of an existing allocation. The clusterNetworkCIDRs can be different sizes, but must be equal to or larger than the host subnet size. Nodes cannot use subnets that are not part of the clusterNetworkCIDRs. Nodes can be allocated different-sized subnets by using clusterNetworks entries with different hostSubnetLength values.

In regard to migration or edits, networks can be added to the list, CIDRs in the list may be re-ordered, and a CIDR can be removed from the list when there are no nodes that have an SDN allocation from that CIDR.

Example:

networkConfig:
  clusterNetworkCIDR: 10.128.0.0/24
  clusterNetworks:
  - cidr: 11.128.0.0/24
    hostSubnetLength: 6
  - cidr: 12.128.0.0/24
    hostSubnetLength: 6
  - cidr: 13.128.0.0/24
    hostSubnetLength: 4
  externalIPNetworkCIDRs:
  - 0.0.0.0/0
  hostSubnetLength: 6

Configurable Cookie Names for Router Session Persistence

The HAProxy router can look for a cookie in a client request. Based on that cookie name and value, the router always routes requests carrying that cookie to the same pod, instead of relying on the client source IP, which can be obscured by an F5 load balancer.

A cookie with a unique name is used to handle session persistence.

  1. Set a per-route configuration to set the cookie name used for the session.

  2. Add an environment variable to set a router-wide default.

  3. Ensure that the cookie is set and honored by the router to control access.

Example scenario:

  1. Set a default cookie name for the HAProxy router:

    $ oc env dc/router ROUTER_COOKIE_NAME=default-cookie
  2. Log in as a normal user and create the project/pod/svc/route:

    $ oc login user1
    $ oc new-project project1
    $ oc create -f https://example.com/myhttpd.json
    $ oc create -f https://example.com/service_unsecure.json
    $ oc expose service service-unsecure
  3. Access the route:

    $ curl $route -v

    The HTTP response will contain the cookie name. For example:

    Set-Cookie: default-cookie=[a-z0-9]+
  4. Modify the cookie name using route annotation:

    $ oc annotate route service-unsecure router.openshift.io/cookie_name="route-cookie"
  5. Re-access the route:

    $ curl $route -v

    The HTTP response will contain the new cookie name:

    Set-Cookie: route-cookie=[a-z0-9]+

See Route-specific Annotations for more information.

HSTS Policy Support

HTTP Strict Transport Security (HSTS) ensures that all communication between the server and client is encrypted and that responses are delivered to and received from the authenticated server.

An HSTS policy is provided to the client via an HTTPS header (HSTS headers over HTTP are ignored) by adding the haproxy.router.openshift.io/hsts_header annotation to the route. When a client receives the Strict-Transport-Security response header, it observes the policy until it is updated by another response from the host, or it times out (max-age=0).

Example using reencrypt route:

  1. Create the pod/svc/route:

    $ oc create -f https://example.com/test.yaml
  2. Set the Strict-Transport-Security header:

    $ oc annotate route serving-cert haproxy.router.openshift.io/hsts_header="max-age=300;includeSubDomains;preload"
  3. Access the route using https:

    $ curl --head https://$route -k
    
       ...
       Strict-Transport-Security: max-age=300;includeSubDomains;preload
       ...

Enabling Static IPs for External Project Traffic (Technology Preview)

As a cluster administrator, you can assign specific, static IP addresses to projects, so that project traffic is easily recognizable externally. This is different from the default egress router, which is used to send traffic to specific destinations.

Recognizable IP traffic increases cluster security by ensuring the origin is visible. Once enabled, all outgoing external connections from the specified project will share the same, fixed source IP, meaning that any external resources can recognize the traffic.

Unlike the egress router, this is subject to EgressNetworkPolicy firewall rules.
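
For example, a cluster administrator can assign an egress IP to a project and host that IP on a node; the project name, node name, and IP address below are assumptions:

$ oc patch netnamespace project1 -p '{"egressIPs": ["192.168.120.10"]}'
$ oc patch hostsubnet node1.example.com -p '{"egressIPs": ["192.168.120.10"]}'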

See Managing Networking for more information.

Re-encrypt Routes are Now Supported for Exposing the OpenShift Container Platform Registry

You can gain external access to the OpenShift Container Platform registry using a re-encrypt route. This allows you to log in to the registry from outside the cluster using the route address and to tag and push images using the route host.

Master

Public Pull URL Provided for Images

A public pull URL is now provided for images, rather than requiring users to know the internal in-cluster IP address or DNS name of the registry service.

A new API field with the public URL of the image was added to the image stream, and a public URL is configured in the master-config.yaml file. The web console understands this new field and automatically generates the public pull specifications for users (so users can just copy and paste the pull URL).

Example:

  1. Check the internalRegistryHostname setting in the master-config.yaml file:

      ...
      imagePolicyConfig:
        internalRegistryHostname: docker-registry.default.svc:5000
      ...
  2. Delete the OPENSHIFT_DEFAULT_REGISTRY variable in both:

    /etc/sysconfig/atomic-openshift-master-api
    /etc/sysconfig/atomic-openshift-master-controllers
  3. Start a build and check the push URL. It should push the new build image with internalRegistryHostname to the docker-registry.

Custom Resource Definitions

A resource is an endpoint in the Kubernetes API that stores a collection of API objects of a certain kind (for example, pod objects). A custom resource definition is a built-in API that enables the ability to plug in your own custom, managed object and application as if it were native to Kubernetes. Therefore, you can leverage Kubernetes cluster management, RBAC and authentication services, API services, CLI, security, and so on, without having to know Kubernetes internals or modify Kubernetes itself in any way.

Custom Resource Definitions (CRDs) deprecate Third Party Resources in Kubernetes 1.7.

How it works:

  1. Define a CRD class (your custom objects) and register the new resource type. This defines how it fits into the hierarchy and how it will be referenced from the CLI and API. (A minimal sketch follows this list.)

  2. Define a function to create a custom client, which is aware of the new resource schema.

  3. Once completed, it can be accessed from the CLI. However, in order to build controllers or custom functionality, you need API access to the objects, so you need to build a set of CRUD functions (a library) to access the objects and an event-driven listener for controllers.

  4. Create a client that:

    • Connects to the Kubernetes cluster.

    • Creates the new CRD (if it does not exist).

    • Creates a new custom client.

    • Creates a new test object using the client library.

    • Creates a controller that listens to events associated with new resources.
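
As a minimal sketch of step 1, the following registers a hypothetical CronTab resource type; the group and names are assumptions:

apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: crontabs.stable.example.com
spec:
  group: stable.example.com
  version: v1
  scope: Namespaced
  names:
    plural: crontabs
    singular: crontab
    kind: CronTab
    shortNames:
    - ct

Once created, objects of kind CronTab can be managed with the ordinary CLI, for example oc get crontabs.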

API Aggregation

There is now Kubernetes documentation on how API aggregation works in OpenShift Container Platform 3.7 and how other users can add third-party APIs.

Master Prometheus Endpoint Coverage

Prometheus endpoint logic was added to upstream components so that monitoring and health indicators can be added around deployment configurations.

Installation

Migrate etcd Before OpenShift Container Platform 3.7 Upgrade

Starting in OpenShift Container Platform 3.7, use of the etcd v3 data model is required.

OpenShift Container Platform gains performance improvements with the v3 data model. To upgrade the data model, the embedded etcd configuration option is no longer allowed (embedded etcd is not the same as etcd co-located on a master host). Migration scripts convert the data to the v3 data model and allow you to move an embedded etcd to an external etcd, either on the same host as the masters or on a different host. In addition, there is a new scale-up ability for etcd clusters.
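
The migration is driven by an Ansible playbook. The following is a sketch of the invocation, assuming the playbook path used by openshift-ansible for this release:

# ansible-playbook -v -i <YOUR_INVENTORY> \
    playbooks/byo/openshift-etcd/migrate.yml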

See Migrating Embedded etcd to External etcd for more information.

Modular Installer to Allow Playbooks to Run Independently

The installer has been enhanced to allow administrators to install specific components. By breaking up the roles and playbooks, there is better targeting of ad hoc administration tasks.

New Installation Experience Around Phases

When you run the installer, OpenShift Container Platform now reports back at the end what phases you have gone through.

If the installation fails during a phase, you will be notified on the screen along with the errors from the Ansible run. Once you resolve the issue, rather than run the entire installation over again, you can pick up from the failed phase. This results in an increased level of control during installations and results in time savings.

Increased Control Over Image Stream and Templates

With OpenShift Container Platform 3.7, there is added control over whether or not your cluster automatically upgrades all the content provided during cluster upgrades.

Set the openshift_install_examples variable in the Ansible inventory, or pass it as a variable to the installer.

  • RPM installations: /etc/origin/examples and /etc/origin/hosted

  • Containerized installations: /usr/share/openshift/examples and /usr/share/openshift/hosted

openshift_install_examples=false

Setting openshift_install_examples to false prevents the installer from upgrading the image streams and templates. The default is true.

Installation and Configuration of CFME 4.6 from the OpenShift Installer

Red Hat CloudForms Management Engine (CFME) 4.6 is now fully supported running on OpenShift Container Platform 3.7 as a set of containers.

CFME 4.6 is not yet released. Until it is available, this role is limited to installing ManageIQ (MIQ), the open source project that CFME is based on. The following is provided mainly for informational purposes. The OpenShift Container Platform 3.7 documentation will be updated with more complete instructions on deploying CFME 4.6 after it has been released.

CFME is an available API endpoint on all OpenShift Container Platform clusters that choose to use it. More cluster administrators are now able to leverage CFME and begin experiencing the insight and automations available to them in OpenShift Container Platform.

To install CFME 4.6:

# ansible-playbook -v -i <YOUR_INVENTORY> \
    playbooks/byo/openshift-management/config.yml

There is a known issue with this playbook.

To configure CFME 4.6 to consume the OpenShift Container Platform installation it is running on:

# ansible-playbook -v -i <YOUR_INVENTORY> \
    playbooks/byo/openshift-management/add_container_provider.yml

You can also automate the configuration of the provider to point to multiple OpenShift clusters:

# ansible-playbook -v -e container_providers_config=/tmp/cp.yml \
    playbooks/byo/openshift-management/add_many_container_providers.yml

The /tmp/cp.yml file requires some manual configuration to create and use correctly.

See Multiple Container Providers for more information.

Diagnostics

Additional Health Checks

More health checks are now available for administrators to run after installations and upgrades. Administrators need the ability to run tests periodically to help determine the health of the framework components within the cluster. OpenShift Container Platform 3.7 offers this test functionality via Ansible playbooks, and the output can be captured to files.

$ ansible-playbook playbooks/byo/openshift-checks/adhoc.yml
                curator
                diagnostics
                disk_availability
                docker_image_availability
                docker_storage
                elasticsearch
                etcd_imagedata_size
                etcd_traffic
                etcd_volume
                fluentd
                fluentd_config
                kibana
                logging
                logging_index_time
                memory_availability
                ovs_version
                package_availability
                package_update
                package_version

$ ansible-playbook playbooks/byo/openshift-checks/adhoc.yml \
    -e openshift_checks=fluentd_config,logging_index_time,docker_storage

Alternatively, they are included in the health playbook:

$ ansible-playbook playbooks/byo/openshift-checks/health.yml

To capture the output:

$ ansible-playbook playbooks/byo/openshift-checks/health.yml \
    -e openshift_checks_output_dir=/tmp/checks

Metrics and Logging

Docker Events and API Calls Aggregated to EFK as Logs

Fluentd captures standard error and standard out from the running containers on the node. With this change, fluentd collects all the errors and events coming from the docker daemon running on the node and sends them to Elasticsearch (ES).

Enable this via the OpenShift Container Platform installer:

openshift_logging_fluentd_audit_container_engine=true

The collected information is stored in the operations indices of ES, and only cluster administrators have access to view it. The event message includes the action, pod name, image name, user, and time stamp.

Master Events are Aggregated to EFK as Logs

The eventrouter pod scrapes events from the Kubernetes API and outputs them to STDOUT. The fluentd plug-in transforms the log messages and sends them to Elasticsearch (ES).

Enable openshift_logging_install_eventrouter by setting it to true; it is off by default. The eventrouter is deployed to the default namespace. The collected information is stored in the operations indices of ES, and only cluster administrators have access to view it.
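
For example, set the following in the [OSEv3:vars] section of the inventory when deploying logging:

openshift_logging_install_eventrouter=true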

See the design documentation for more information.

Kibana Dashboards for Operations Are Now Shareable

This gives OpenShift Container Platform administrators the ability to share saved Kibana searches, visualizations, and dashboards.

When openshift_logging_elasticsearch_kibana_index_mode is set to shared_ops, one admin user can create queries and visualizations that are visible to other admin users. Non-admin users cannot see those queries and visualizations.

When openshift_logging_elasticsearch_kibana_index_mode is set to unique, users can only see saved queries and visualizations they created. This is the default behavior.

See Aggregating Container Logs for more information.

Removed ES_Copy Method for Sending Logs to External ES

ES_COPY was replaced with the secure_forward plug-in for fluentd, which sends logs from fluentd to an external fluentd (that can then ingest them into ES). ES_COPY is removed from the installer and the documentation.

When the installer is run to upgrade logging to 3.7, it now checks for ES_COPY in the inventory and fails the upgrade with:

msg: The ES_COPY feature is no longer supported. Remove the variable from your inventory

See Aggregating Container Logs for more information.

Expose Elasticsearch as a Route

By default, Elasticsearch (ES) deployed with OpenShift aggregated logging is not accessible from outside the logging cluster. This feature enables a route for external access to ES for tools that want to access its data.

You now have direct access to ES using only your OpenShift token and have the ability to provide the external ES and ES Ops hostnames when creating the server certificate (similar to Kibana). Ansible tasks now simplify route deployment.
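
A minimal sketch of querying ES through such a route with an OpenShift token; the route hostname is an assumption:

$ token=$(oc whoami -t)
$ curl -k -H "Authorization: Bearer $token" \
    https://es.example.com/_cat/indices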

Removed Metrics and Logging Deployers

The metrics and logging deployers are now replaced with playbook2image for oc cluster up, so that openshift-ansible is used to install logging and metrics:

$ oc cluster up --logging --metrics

Check metrics and pod status:

$ oc get pod -n openshift-infra
$ oc get pod -n logging

Prometheus (Technology Preview)

OpenShift Container Platform operators deploy Prometheus (currently in Technology Preview and not for production workloads) on an OpenShift Container Platform cluster, collect Kubernetes and infrastructure metrics, and get alerts. Operators can see and query metrics and alerts on the Prometheus web dashboard, or bring their own Grafana and hook it up to Prometheus.

See Prometheus on OpenShift for more information.

Integrated Approach to Adding Hawkular OpenShift Agent (Technology Preview)

Hawkular OpenShift Agent (HOSA) remains in Technology Preview and not for production workloads. It is packaged and can now be installed with the openshift_metrics_install_hawkular_agent option in the installer by setting it to true.

See Enabling Cluster Metrics for more information.

Developer Experience

AWS Service Broker

Users can seamlessly configure, deploy, and scale AWS services like Amazon RDS, Amazon Aurora, Amazon Athena, Amazon Route 53, and AWS Elastic Load Balancing directly within the OpenShift Container Platform console. For installation instructions, see AWS Service Broker and Getting Started Guide.

Template Instantiation API

Clients can now easily invoke a server API instead of relying on client logic.

Metrics

OpenShift Container Platform now includes:

  • Prometheus metrics that show you the health of builds in the system (number running, failing, failure reasons, and so on).

  • Timing information on build objects themselves to show how long they spent in various steps (not exposed as Prometheus metrics).

CLI Plug-ins (Technology Preview)

CLI plug-ins are currently in Technology Preview and not for production workloads.

Usually called plug-ins or binary extensions, this feature allows you to extend the default set of oc commands available and, therefore, allows you to perform new tasks.

See Extending the CLI for information on how to install and write extensions for the CLI.
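
A minimal sketch of a plug-in descriptor, assuming it is placed at ~/.kube/plugins/hello/plugin.yaml as described in Extending the CLI:

name: "hello"
shortDesc: "Prints a friendly greeting"
command: "echo Hello from an oc plug-in"

The plug-in can then be invoked with oc plugin hello.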

Chaining Builds

In OpenShift Container Platform 3.7, Chaining Builds is a better approach for producing runtime-only application images, and fully replaces the Extended Builds feature. (A minimal example follows the list of benefits below.)

Benefits of Chaining Builds include:

  • Supported by both Docker and Source-to-Image (S2I) build strategies, as well as combinations of the two, compared with the S2I strategy only for Extended Builds.

  • No need to create and manage a new assemble-runtime script.

  • Easy to layer application components into any thin runtime-specific image.

  • Can build the application artifacts image anywhere.

  • Better separation of concerns between the step that produces the application artifacts and the step that puts them into an application image.
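
The following is a minimal sketch of the second build in a chain: a Docker-strategy build that copies an artifact out of the image produced by a previous build. The image stream tags, file names, and paths are assumptions:

apiVersion: v1
kind: BuildConfig
metadata:
  name: runtime-image-build
spec:
  source:
    type: Dockerfile
    dockerfile: |-
      FROM jee-runtime:latest
      COPY ROOT.war /deployments/ROOT.war
    images:
    - from:
        kind: ImageStreamTag
        name: artifact-image:latest
      paths:
      - sourcePath: /wildfly/standalone/deployments/ROOT.war
        destinationDir: "."
  strategy:
    type: Docker
    dockerStrategy:
      from:
        kind: ImageStreamTag
        name: jee-runtime:latest
  output:
    to:
      kind: ImageStreamTag
      name: runtime-image:latest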

Default Hard Eviction Thresholds

OpenShift Container Platform uses the following default configuration for eviction-hard.

...
kubeletArguments:
  eviction-hard:
  - memory.available<100Mi
  - nodefs.available<10%
  - nodefs.inodesFree<5%
  - imagefs.available<15%
...

See Handling Out of Resource Errors for more information.

Web Console

OpenShift Ansible Broker

In OpenShift Container Platform 3.7, Open Service Broker API is implemented, enabling users to leverage Ansible for provisioning and managing services from the Service Catalog. This is a standardized approach for delivering simple to complex multi-container OpenShift services via Ansible. It works in conjunction with Ansible Playbook Bundle (APB) for lightweight application definition. APBs can be used to deliver and orchestrate on-platform services, but could also be used to provision and orchestrate off-platform services (from cloud providers, IaaS, and so on).

OpenShift Ansible Broker supports production workloads and multiple service plans. There is now secure connectivity between Service Catalog and Service Broker.

You can interact with the Service Catalog to provision and manage services while the details of the broker remain largely hidden.

Ansible Playbook Bundles

Ansible Playbook Bundles (APBs) are short-lived, lightweight container images consisting of:

  • a simple directory structure with named action playbooks.

  • metadata (required and optional parameters, as well as dependencies).

  • an Ansible runtime environment.

Developer tooling is included, providing a guided approach to APB creation. There is also support for the test playbook, allowing for functional testing of the service. Two new APBs are introduced for MariaDB (SCL) and MySQL DB (SCL).

When a user provisions an application from the Service Catalog, the Ansible Service Broker will download the associated APB image from the registry and run it.

APBs can be developed in one of two ways: by creating the APB container image manually using standardized container creation tooling, or by using the APB tooling that Red Hat delivers, which provides a guided approach to creation.
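
A minimal sketch of an APB’s metadata file (apb.yml); the name, description, and plan shown are assumptions:

version: 1.0
name: example-apb
description: A short description of the service
bindable: false
async: optional
metadata:
  displayName: Example (APB)
plans:
  - name: default
    description: Default deployment plan
    free: true
    metadata: {}
    parameters: []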

OpenShift Template Broker

The OpenShift Template Broker exposes templates through an Open Service Broker API to the Service Catalog.

The Template Broker matches the lifecycles of provision, deprovision, bind, and unbind with existing templates. No changes are required to templates unless you want to expose bind. Your application is injected with the configuration details.

Initial Experience

OpenShift Container Platform 3.7 provides a better initial user experience with the Service Catalog. This includes:

  • A task-focused interface

  • Key call-outs

  • Unified search

  • Streamlined navigation

The new user interface is designed to streamline the getting-started process, in addition to incorporating the new Service Catalog items. It shows the existing content (for example, builder images and templates) as well as catalog items (if the catalog is enabled).

The new user experience can be enabled as a Technology Preview feature without the Service Catalog being active. A cluster with this user interface (UI) is still supported. Running the catalog UI without the Service Catalog enabled works, but access to templates without the catalog requires a few extra steps.

Search Catalog

OpenShift Container Platform 3.7 provides a simple way to quickly get what you want. The new Search Catalog user interface is designed to make it much easier to find items in a number of ways, making it even faster to find the items you want to deploy.