OpenShift Enterprise by Red Hat is a Platform as a Service (PaaS) that provides developers and IT organizations with a cloud application platform for deploying new applications on secure, scalable resources with minimal configuration and management overhead. OpenShift Enterprise supports a wide selection of programming languages and frameworks, such as Java, Ruby, and PHP.
Built on Red Hat Enterprise Linux and Kubernetes, OpenShift Enterprise provides a secure and scalable multi-tenant operating system for today’s enterprise-class applications, while providing integrated application runtimes and libraries. OpenShift Enterprise brings the OpenShift PaaS platform to customer data centers, enabling organizations to implement a private PaaS that meets security, privacy, compliance, and governance requirements.
OpenShift Enterprise version 3.2 is now available. For any release, always review the Installation and Configuration guide for instructions on upgrading your OpenShift cluster properly, including any additional steps that may be required for a specific release.
Kubernetes has been updated to v1.2.0-36.
etcd has been updated to v2.2.5.
OpenShift Enterprise 3.2 requires Docker 1.9.1.
Admission controllers are now included, which intercept requests to the master API prior to persistence of a resource, but after the request is authenticated and authorized. The following admission control plug-ins can be configured:
The number of projects an individual user can create can be limited via the ProjectRequestLimit admission controller. See Limiting Number of Self-Provisioned Projects Per User for details.
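For example, a minimal master-config.yaml sketch that exempts users labeled level=admin while capping everyone else at two self-provisioned projects (the label selector and limits shown are illustrative, not defaults):

```yaml
admissionConfig:
  pluginConfig:
    ProjectRequestLimit:
      configuration:
        apiVersion: v1
        kind: ProjectRequestLimitConfig
        limits:
        # Users labeled level=admin have no limit (no maxProjects set).
        - selector:
            level: admin
        # All other users may self-provision at most two projects.
        - maxProjects: 2
```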
A build defaults admission controller can be used to set default environment variables on all builds created, including global proxy settings. See Configuring Global Build Defaults and Overrides for details.
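As a sketch, global proxy settings for builds could be set in master-config.yaml along these lines (the proxy URLs are placeholders):

```yaml
admissionConfig:
  pluginConfig:
    BuildDefaults:
      configuration:
        apiVersion: v1
        kind: BuildDefaultsConfig
        # Proxies used when cloning build source from Git repositories.
        gitHTTPProxy: http://proxy.example.com:8080
        gitHTTPSProxy: https://proxy.example.com:8443
        # Default environment variables injected into every build.
        env:
        - name: HTTP_PROXY
          value: http://proxy.example.com:8080
```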
The PodNodeConstraints admission control plug-in has been added, which constrains the use of the NodeName field in a pod definition to roles that have the pods/binding permission. It also allows administrators, via NodeSelectorLabelBlacklist, to restrict which node labels may be set in the NodeSelector field of a pod definition. See Controlling Pod Placement for details.
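A minimal master-config.yaml sketch enabling the plug-in (the blacklisted label is illustrative):

```yaml
admissionConfig:
  pluginConfig:
    PodNodeConstraints:
      configuration:
        apiVersion: v1
        kind: PodNodeConstraintsConfig
        # Pods may not set these labels in their NodeSelector field
        # unless the requesting role has the pods/binding permission.
        nodeSelectorLabelBlacklist:
        - kubernetes.io/hostname
```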
Multiple web login providers can now be configured at the same time.
The oc adm diagnostics command can now launch a diagnostic pod that reports on more potential issues with pod networking, DNS configuration, and registry authentication.
Support for security context constraints (SCCs) has been added to the oc describe command.
The NO_PROXY environment variable will now accept a CIDR in a number of places in the code for controlling which IP ranges bypass the default HTTP proxy settings. See Configuring Hosts for Proxies for details.
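For example, an exclusion list mixing a host suffix with CIDR ranges might look like the following (the values are placeholders):

```
NO_PROXY=.example.com,172.30.0.0/16,10.0.0.0/8
```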
Masters are now taken down one at a time during upgrades through rolling restarts. The openshift_rolling_restart_mode parameter can now be used in Ansible inventories to control this behavior: services for service restarts or system for full system restarts. See Configuring Cluster Variables for details.
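In an Ansible inventory, the variable would be set like this:

```
[OSEv3:vars]
# "services" restarts only the master services on each host in turn;
# use "system" instead to fully reboot each master host.
openshift_rolling_restart_mode=services
```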
The new Volumes field in SCCs allows an administrator full control over which volume plug-ins may be specified. In order to maintain backwards compatibility, the AllowHostDirVolumePlugin field takes precedence over the Volumes field for host mounts. You may use * to allow all volumes.
By default, regular users are now forbidden from directly mounting any of the remote volume type; they must use a persistent volume claim (PVC).
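As a sketch, an SCC could restrict pods to a safe set of volume plug-ins like this (the plug-in list is illustrative, and other required SCC fields are omitted for brevity):

```yaml
kind: SecurityContextConstraints
apiVersion: v1
metadata:
  name: restricted
# Only these volume plug-ins may be used by pods matched to this SCC;
# use "*" to allow all volume types.
volumes:
- configMap
- downwardAPI
- emptyDir
- persistentVolumeClaim
- secret
```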
The new ReadOnlyRootFilesystem field in SCCs allows an administrator to force containers to run with a read-only root file system.
If set to true, containers are required to run with a read-only root file system by their SecurityContext. Containers that do not set this value will have it defaulted to true, and containers that explicitly set it to false will be rejected.
If set to false, containers may use a read-only root file system, but they are not forced to run with one.
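A minimal SCC sketch enforcing this behavior (the SCC name is illustrative, and other required SCC fields are omitted):

```yaml
kind: SecurityContextConstraints
apiVersion: v1
metadata:
  name: restricted-readonly
# Every container admitted by this SCC must declare (or will default to)
# a read-only root file system in its SecurityContext.
readOnlyRootFilesystem: true
```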
By default, the restricted and anyuid SCCs drop Linux capabilities that could be used to escalate container privileges. Administrators can change the list of default or enforced capabilities.
A constant-time string comparison is now used on webhooks.
Only users authenticated via OAuth can request projects.
A GitLab server can now be used as an identity provider. See Configuring Authentication for details.
The SETUID and SETGID capabilities have been added back to the anyuid SCC, which ensures that programs that start as root and then drop to a lower permission level will work by default.
Quota support has been added for emptyDir. When the quota is enabled on an XFS system, nodes will limit the amount of space any given project can use on a node to a fixed upper bound. The quota is tied to the FSGroup of the project. Administrators can control this value by editing the project directly or by allowing users to set FSGroup via SCCs.
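A minimal node-config.yaml sketch capping local emptyDir usage per FSGroup (the 512Mi value is illustrative):

```yaml
volumeConfig:
  localQuota:
    # XFS quota applied per FSGroup to local volume (emptyDir) usage on the node.
    perFSGroup: 512Mi
```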
The DaemonSet object is now limited to cluster administrators because pods running under a DaemonSet are considered to have higher priority than regular pods, and for regular users on the cluster this could be a security issue.
Administrators can prevent clients from accessing the API based on their User-Agent header using the new userAgentMatching configuration setting.
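A hedged sketch of the userAgentMatchingConfig stanza in master-config.yaml, denying an old client by its User-Agent header (the regex and message are placeholders):

```yaml
policyConfig:
  userAgentMatchingConfig:
    defaultRejectionMessage: "Your client is unsupported; please upgrade."
    deniedClients:
    # Reject any kubectl 1.0.x client by matching its User-Agent header.
    - regex: 'kubectl/1\.0\..*'
    requiredClients: null
```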
A readiness probe and health check have been added to the integrated registry to ensure new instances do not serve traffic until they are fully initialized.
You can limit the frequency of router reloads using the --interval=DURATION flag or the RELOAD_INTERVAL environment variable on the router. This can minimize the memory and CPU used by the router while reloading, at the cost of delaying when the route is exposed via the router.
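For example, an existing router could be set to reload at most every ten seconds (assuming the default deployment configuration name of router):

```
$ oc env dc/router RELOAD_INTERVAL=10s
```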
Routers now report back status to the master about whether routes are accepted, rejected, or conflict with other users. The CLI will now display that error information, allowing users to know that the route is not being served.
Using router sharding, you can specify a selection criteria for either namespaces (projects) or labels on routes. This enables you to select the routes a router would expose, and you can use this functionality to distribute routes across a set of routers, or shards.
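As a sketch, one way to define a shard with the HAProxy template router is to set a namespace selector on a router via an environment variable (this assumes the default router deployment configuration name, and the label value is illustrative):

```
$ oc env dc/router NAMESPACE_LABELS="router=shard-1"
```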
The NoDiskConflicts scheduling predicate can be added to the scheduler configuration to ensure that pods using the same Ceph RBD device are not placed on the same node. See Scheduler for details.
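A minimal scheduler policy sketch adding the predicate. Note that upstream Kubernetes spells the predicate name NoDiskConflict, and that a predicates list replaces the default set, so in practice you would append it to your existing scheduler configuration rather than use this fragment alone:

```json
{
  "kind": "Policy",
  "apiVersion": "v1",
  "predicates": [
    {"name": "NoDiskConflict"}
  ]
}
```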
The administrative commands are now exposed via oc adm so you have access to them in a client context. The oadm commands will still work, but will be a symlink to the openshift binary.
The help output of the oadm policy command has been improved.
Service accounts are now supported for the router and registry:
The router can now be created without specifying --credentials, and it will use the router service account in the current project.
The registry will also use a service account if --credentials is not provided. Otherwise, it will set the values from the --credentials file as environment variables on the generated deployment configuration.
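For example, both components could now be created without the flag, a sketch that assumes the default router and registry service accounts exist in the current project:

```
$ oc adm router --replicas=1
$ oc adm registry
```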
Administrators can pass the --all-namespaces flag to oc status to see status information across all namespaces and projects.
Users can now be presented with a customized, branded page before continuing on to a login identity provider. This allows users to see your branding up front instead of immediately redirecting to identity providers like GitHub and Google. See Customizing the Login Page for details.
CLI download URLs and documentation URLs are now customizable through web console extensions. See Adding or Changing Links to Download the CLI for details.
The web console uses a brand new theme that changes the look and feel of the navigation, tabs, and other page elements. See Project Overviews for details.
A new About page provides developers with information about the product version, oc CLI download locations, and quick access to their current token for logging in with oc login. See CLI Downloads for details.
You can now add or edit resource constraints for your containers during Add to Project or later from the deployment configuration.
A form-based editor for build configurations has been added for modifying commonly edited fields directly from the web console.