Red Hat OpenShift Container Platform provides developers and IT organizations with a cloud application platform for deploying new applications on secure, scalable resources with minimal configuration and management overhead. OpenShift Container Platform supports a wide selection of programming languages and frameworks, such as Java, Ruby, and PHP.
Built on Red Hat Enterprise Linux and Kubernetes, OpenShift Container Platform provides a secure and scalable multi-tenant operating system for today’s enterprise-class applications, while providing integrated application runtimes and libraries. OpenShift Container Platform enables organizations to meet security, privacy, compliance, and governance requirements.
Red Hat OpenShift Container Platform version 3.4 (RHBA-2017:0066) is now available. This release is based on OpenShift Origin 1.4. New features, changes, bug fixes, and known issues that pertain to OpenShift Container Platform 3.4 are included in this topic.
For initial installations, see the Installing a Cluster topics in the Installation and Configuration documentation.
To upgrade to this release from a previous version, see the Upgrading a Cluster topics in the Installation and Configuration documentation.
This release adds improvements related to the following components and concepts.
OpenShift Container Platform 3.4 now supports both Kubernetes deployment objects (currently in Technology Preview) and the existing deployment configuration objects.
Like deployment configurations, Kubernetes deployments describe the desired state of a particular component of an application as a pod template. Kubernetes deployments create replica sets (an iteration of replication controllers), which orchestrate pod lifecycles.
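As an illustrative sketch, a deployment can be created from the command line like any other API object; the name, labels, and image below are placeholders rather than part of this release, and the API version shown is the beta extensions group used by this Kubernetes release:

$ oc create -f - <<EOF
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: hello-deployment
spec:
  replicas: 3
  template:
    metadata:
      labels:
        app: hello
    spec:
      containers:
      - name: hello
        image: openshift/hello-openshift
        ports:
        - containerPort: 8080
EOF

$ oc get deployments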
See Kubernetes Deployments Support for more details on usage and support.
Pod disruption budgets (currently in Technology Preview) allow the specification of safety constraints on pods during operations. Users with cluster-admin privileges can use them to limit the number of pods that are down simultaneously for a given project.
PodDisruptionBudget is an API object that specifies the minimum number or percentage of replicas that must be up at a time. Setting these in projects can be helpful during node maintenance (such as scaling a cluster down or a cluster upgrade) and is only honored on voluntary evictions (not on node failures).
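A minimal sketch of creating such a budget; the budget name and label selector are placeholders, and the exact API group and version (policy/v1alpha1 shown) may differ by cluster version:

$ oc create -f - <<EOF
apiVersion: policy/v1alpha1
kind: PodDisruptionBudget
metadata:
  name: frontend-pdb
spec:
  selector:
    matchLabels:
      app: frontend
  minAvailable: 2
EOF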
See Managing Pods for more details.
The default pods per core for a node is now set to 10. Machines with fewer than 10 cores now have a smaller maximum pod capacity than previously configured. Administrators can change this setting by modifying the pods-per-core value.
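A sketch of where this setting lives, assuming the default node configuration path used by OpenShift Container Platform 3.x installations; the node service must be restarted for the change to take effect:

# /etc/origin/node/node-config.yaml (excerpt)
kubeletArguments:
  pods-per-core:
  - "10"

$ systemctl restart atomic-openshift-node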
See the Setting Maximum Pods Per Node section of the Administration Guide for more information.
With dynamic storage provisioning enabled, cluster administrators needed to be able to set quota on the amount of storage a project can request. Cluster administrators can now do so by setting the requests.storage value for a user’s projects:
$ oc create quota my-quota \
    --hard=requests.storage=10Gi,persistentvolumeclaims=50

$ oc describe quota my-quota
Name:                   my-quota
Namespace:              default
Resource                Used  Hard
--------                ----  ----
persistentvolumeclaims  1     50
requests.storage        2Gi   10Gi
See Setting Quotas for more details.
Cluster administrators could previously configure nodes' pod eviction policy based on available memory. With this release, eviction policy can now also be configured based on available disk.
Nodes support two file system partitions when detecting disk pressure:

The nodefs file system that the node uses for local disk volumes, daemon logs, and so on (for example, the file system that provides /).

The imagefs file system that the container runtime uses for storing images and individual container writable layers.
When configured, the node can report disk threshold violations, and the scheduler no longer tries to put pods on those nodes. The node ranks pods and then evicts pods to free up disk space.
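As an illustrative sketch, assuming the same kubeletArguments mechanism and default node configuration path as other node settings, disk-based thresholds can be expressed alongside the existing memory threshold; the specific values below are placeholders:

# /etc/origin/node/node-config.yaml (excerpt)
kubeletArguments:
  eviction-hard:
  - "memory.available<100Mi,nodefs.available<10%,imagefs.available<15%"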
See Handling Out of Resource Errors for more details.
Dynamic provisioning of persistent storage volumes for many storage providers was previously available in OpenShift Container Platform as a Technology Preview feature. This release brings the feature into full support using the new storage classes implementation for the following:
OpenStack Cinder
AWS Elastic Block Store (EBS)
GCE Persistent Disk (gcePD)
GlusterFS
Ceph RBD
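A minimal sketch of defining a storage class for one of these provisioners (AWS EBS shown); the class name and parameters are placeholders, and the API version reflects the beta storage API of this Kubernetes release:

$ oc create -f - <<EOF
apiVersion: storage.k8s.io/v1beta1
kind: StorageClass
metadata:
  name: fast
provisioner: kubernetes.io/aws-ebs
parameters:
  type: gp2
EOF

A persistent volume claim can then request the class by name, which in this release is typically done through the volume.beta.kubernetes.io/storage-class annotation on the claim.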
See Dynamic Provisioning and Creating Storage Classes for more details.
Users can now more easily integrate their own applications, deployed within their projects, with the OpenShift Container Platform-provided OAuth server. Service accounts can now be used as scope-constrained OAuth clients.
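As a sketch, a service account becomes usable as an OAuth client once it carries an OAuth redirect URI annotation; the service account name and redirect URI below are placeholders:

$ oc create sa myclient
$ oc annotate sa/myclient \
    serviceaccounts.openshift.io/oauth-redirecturi.one=https://myapp.example.com/oauth/callback

The application then authenticates against the OAuth server using a client ID of the form system:serviceaccount:<project>:myclient.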
See Service Accounts as OAuth Clients for more details.
Users can now use wildcard routes to determine the destination of all traffic for a domain and its subdomains. For example, *.foo.com can be routed to the same back-end service, which is configured to handle all the subdomains.
You can specify that a route allows wildcard support through an annotation, and the HAProxy router exposes the route to the service per the route’s wildcard policy. The most-specific path wins; for example, bar.foo.com is matched before foo.com.
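A sketch of the two pieces involved, assuming a default router deployment named router and a placeholder service named frontend: the router is configured to admit wildcard routes, and the route itself carries a Subdomain wildcard policy:

$ oc set env dc/router ROUTER_ALLOW_WILDCARD_ROUTES=true

$ oc create -f - <<EOF
apiVersion: v1
kind: Route
metadata:
  name: wildcard-route
spec:
  host: wildcard.foo.com
  wildcardPolicy: Subdomain
  to:
    kind: Service
    name: frontend
EOF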
See Creating Routes Specifying a Wildcard Subdomain Policy and Using Wildcard Routes (for a Subdomain) for more details.
This release includes a number of enhancements to improve the OpenShift Container Platform upgrade process from 3.3 to 3.4, including:
A --tags pre_upgrade Ansible option for running a dry run that performs all pre-upgrade checks without upgrading any hosts and reports any problems found (a sketch of the invocation follows this list).
New playbooks broken up into smaller steps when possible, allowing you to upgrade the control plane and nodes in separate phases.
Customizable node upgrades by specific label or number of hosts.
New atomic-openshift-excluder and atomic-openshift-docker-excluder packages that help ensure your systems stay locked down on the correct versions of OpenShift Container Platform and Docker when you are not trying to upgrade, according to the OpenShift Container Platform version. Usage is documented in relevant installation and upgrade steps.
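A hedged sketch of the dry run and excluder usage mentioned above; the inventory path and playbook location are assumptions that vary by installation:

$ ansible-playbook -i /etc/ansible/hosts \
    /usr/share/ansible/openshift-ansible/playbooks/byo/openshift-cluster/upgrades/v3_4/upgrade.yml \
    --tags pre_upgrade

# Lift the version lock only while performing the upgrade, then reapply it
$ atomic-openshift-excluder unexclude
$ atomic-openshift-excluder exclude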
A new image layout view has been added to the OpenShift Container Platform web console, providing additional information about specific images in the OpenShift Container Platform registry by clicking on their tags from the Builds → Images page.
You can now use external docker distribution servers that support images with more than two path segments. For example:
exampleregistry.net/project/subheading/image:tag
OpenShift Container Platform, however, is still limited to images of the form $namespace/$name, and cannot create multi-segment images.
OpenShift Pipelines, introduced in OpenShift Container Platform 3.3 as a Technology Preview feature, are now fully supported. OpenShift Pipelines are based on the Jenkins Pipeline plug-in. By integrating Jenkins Pipelines, you can now leverage the full power and flexibility of the Jenkins ecosystem while managing your workflow from within OpenShift Container Platform.
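A minimal sketch of a pipeline build configuration using the JenkinsPipeline strategy; the build configuration name and the inline Jenkinsfile are placeholders:

$ oc create -f - <<EOF
apiVersion: v1
kind: BuildConfig
metadata:
  name: sample-pipeline
spec:
  strategy:
    type: JenkinsPipeline
    jenkinsPipelineStrategy:
      jenkinsfile: |-
        node {
          stage 'build'
          echo 'building the application...'
        }
EOF

$ oc start-build sample-pipeline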
See the following for more on pipelines:
OpenShift Container Platform users using integrated Jenkins CI and CD pipelines can now leverage Jenkins 2.0 with improved usability and other enhancements.
Users who deploy an OpenShift Container Platform integrated Jenkins server can now configure it to allow automatic logins from the web console based on an OAuth flow with the master instead of requiring the standard Jenkins authentication credentials.
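As a sketch, assuming the Jenkins server was deployed from the provided template as a deployment configuration named jenkins, the OAuth flow is toggled through an environment variable on the Jenkins container:

$ oc set env dc/jenkins OPENSHIFT_ENABLE_OAUTH=true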
See OpenShift Container Platform OAuth Authentication for configuration details.
Cluster administrators can now designate nodes to be used for builds (that is, Source-to-Image or Docker builds) so that build nodes can be scaled independently from the application container nodes. Build nodes can also be configured differently in terms of security settings, storage back ends, and other options.
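A sketch of the per-build side of this: label the designated build nodes, then point a build configuration at them through its nodeSelector field. The node name, label, and build configuration name are placeholders:

$ oc label node node1.example.com region=build
$ oc patch bc/myapp -p '{"spec":{"nodeSelector":{"region":"build"}}}'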
See Configuring Global Build Defaults and Overrides for details on setting nodeSelector to label build nodes, and Assigning Builds to Specific Nodes for details on configuring a build to target a specific node.