Red Hat OpenShift Container Platform provides developers and IT organizations with a cloud application platform for deploying new applications on secure, scalable resources with minimal configuration and management overhead. OpenShift Container Platform supports a wide selection of programming languages and frameworks, such as Java, Ruby, and PHP.
Built on Red Hat Enterprise Linux and Kubernetes, OpenShift Container Platform provides a secure and scalable multi-tenant operating system for today’s enterprise-class applications, while providing integrated application runtimes and libraries. OpenShift Container Platform enables organizations to meet security, privacy, compliance, and governance requirements.
Red Hat OpenShift Container Platform version 3.6 (RHBA-2017:2847) is now available. This release is based on OpenShift Origin 3.6. New features, changes, bug fixes, and known issues that pertain to OpenShift Container Platform 3.6 are included in this topic.
OpenShift Container Platform 3.6 is supported on RHEL 7.3 and newer with the latest packages from Extras, including Docker 1.12.
TLSv1.2 is the only supported TLS version in OpenShift Container Platform 3.4 and later. You must update if you are using TLSv1.0 or TLSv1.1.
For initial installations, see the Installing a Cluster topics in the Installation and Configuration documentation.
To upgrade to this release from a previous version, see the Upgrading a Cluster topics in the Installation and Configuration documentation.
This release adds improvements related to the following components and concepts.
Many core features announced in March for Kubernetes 1.6 were the result of OpenShift Container Platform engineering. Red Hat continues to influence the product in the areas of storage, networking, resource management, authentication and authorization, multi-tenancy, security, service deployments and templating, and controller functionality.
OpenShift Container Platform now uses the CRI interface for kubelet-to-Docker interaction.
As the container space matures and choices become more available, OpenShift Container Platform needs an agnostic interface in Kubernetes for container runtime interactions. OpenShift Container Platform 3.6 switches the default configuration to use the Kubernetes Docker CRI interface.
There is an enable-cri setting in the node-config.yaml configuration file. A value of true enables the use of the interface. Change it by editing the file and restarting the atomic-openshift-node.service.
$ cat /etc/origin/node/node-config.yaml
enable-cri:
  - 'true'
Although the Docker CRI is stable and the default, the overall CRI interface in Kubernetes is still under development. Red Hat does not support crio, rkt, or frakti in this OpenShift Container Platform 3.6 release.
Just like a disk drive, a cluster can become fragmented over time. When you ask the cluster how much space is left, the addition of all the free space does not indicate how many actual workloads can run. For example, it might say there is 10 GB left, but it could be that no single node can take more than 512 MB.
OpenShift Container Platform 3.6 introduces a new container that you can launch from the command line or as a job. The container allows you to supply a popular workload (image) with a commonly requested CPU and memory limit and request. The logs from the container tell you how many instances of that workload can be deployed.
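As a rough sketch (the kubeconfig path and pod specification file are placeholders, and the flags reflect the upstream cluster-capacity tool; see the linked topic for the supported invocation), the analysis runs against a pod specification that describes the workload:

$ cluster-capacity --kubeconfig /etc/origin/master/admin.kubeconfig \
    --podspec ./workload-pod.yaml --verbose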
See Analyzing Cluster Capacity for more information.
You can now control which classes of storage a project is allowed to access, how much (total size) of each class, as well as how many claims.
This feature leverages the ResourceQuota object and allows you to call out storage classes by name for size and claim settings.
$ oc create quota my-quota-1 --hard=slow.storageclass.storage.k8s.io/requests.storage=20Gi,slow.storageclass.storage.k8s.io/persistentvolumeclaims=15

$ oc describe quota my-quota-1
Name:       my-quota-1
Namespace:  default
Resource                                                  Used  Hard
--------                                                  ----  ----
slow.storageclass.storage.k8s.io/persistentvolumeclaims   0     15
slow.storageclass.storage.k8s.io/requests.storage         0     20Gi
See Require Explicit Quota to Consume a Resource for more information.
In OpenShift Container Platform 3.6, administrators now have the ability to specify a separate quota for persistent volume claims (PVCs) and requests.storage per storage class.
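For illustration, a minimal ResourceQuota object expressing the same idea (the gold storage class name and the limits are only examples):

apiVersion: v1
kind: ResourceQuota
metadata:
  name: storage-quota-gold
spec:
  hard:
    gold.storageclass.storage.k8s.io/requests.storage: 50Gi
    gold.storageclass.storage.k8s.io/persistentvolumeclaims: "10"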
See Setting Quotas for more information.
When you mount a memory-backed volume into a container, it leverages a tmpfs directory. Now, you can place all sources of the configuration for your application (configMaps, secrets, and the downward API) into the same directory path.
The new projected line in the volume definition allows you to tell multiple volumes to leverage the same mount point while guarding against path collisions.
volumes:
- name: all-in-one
  projected:
    sources:
    - secret:
        name: test-secret
        items:
        - key: data-1
          path: mysecret/my-username
        - key: data-2
          path: mysecret/my-passwd
    - downwardAPI:
        items:
        - path: mydapi/labels
          fieldRef:
            fieldPath: metadata.labels
        - path: mydapi/name
          fieldRef:
            fieldPath: metadata.name
        - path: mydapi/cpu_limit
          resourceFieldRef:
            containerName: allinone-normal
            resource: limits.cpu
            divisor: "1m"
    - configMap:
        name: special-config
        items:
        - key: special.how
          path: myconfigmap/shared-config
        - key: special.type
          path: myconfigmap/private-config
You run init containers in the same pod as your application container to create the environment your application requires or to satisfy any preconditions the application might have. You can run utilities that you would otherwise need to place into your application image. You can run them in different file system namespaces (view of the same file system) and offer them different secrets than your application container.
Init containers run to completion and each container must finish before the next one starts. The init containers will honor the restart policy. Leverage initContainers in the podspec.
$ cat init-containers.yaml
apiVersion: v1
kind: Pod
metadata:
  name: init-loop
spec:
  containers:
  - name: hello-openshift
    image: openshift/hello-openshift
    ports:
    - containerPort: 80
    volumeMounts:
    - name: workdir
      mountPath: /usr/share/nginx/html
  initContainers:
  - name: init
    image: centos:centos7
    command:
    - /bin/bash
    - "-c"
    - "while :; do sleep 2; echo hello init container; done"
  volumes:
  - name: workdir
    emptyDir: {}
$ oc get -f init-containers.yaml
NAME              READY     STATUS     RESTARTS   AGE
hello-openshift   0/1       Init:0/1   0          6m
Kubernetes now supports extending the default scheduler implementation with custom schedulers.
After configuring and deploying your new scheduler, you can call it by name from the podspec via schedulerName. These new schedulers are packaged into container images and run as pods inside the cluster.
$ cat pod-custom-scheduler.yaml
apiVersion: v1
kind: Pod
metadata:
  name: custom-scheduler
spec:
  schedulerName: custom-scheduler
  containers:
  - name: hello
    image: docker.io/ocpqe/hello-pod
See Scheduling for more information.
Instead of individually declaring environment variables in a pod definition, a configMap can be imported and all of its content can be dynamically turned into environment variables.
In the pod specification, leverage the envFrom object and reference the desired configMap:
env:
- name: duplicate_key
  value: FROM_ENV
- name: expansion
  value: $(REPLACE_ME)
envFrom:
- configMapRef:
    name: env-config
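For illustration, a configMap such as the following (the env-config name matches the reference above; the keys are examples) has each of its keys injected as an environment variable:

apiVersion: v1
kind: ConfigMap
metadata:
  name: env-config
data:
  REPLACE_ME: replaced-value
  LOG_LEVEL: debug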
See ConfigMaps for more information.
Control which nodes your workload lands on in a more generic and powerful way than nodeSelector.
NodeSelectors provide a powerful way for a user to specify which node a workload should land on. However, if the selectors are not available or conflict, the workload is not scheduled at all. They also require a user to have specific knowledge of node label keys and values. Operators provide a more flexible way to select nodes during scheduling.
Now, you can select the operator used to compare against the label value (for example, In, NotIn, Exists, DoesNotExist, Gt, and Lt). You can choose to make satisfying the operator required or preferred. Preferred means search for the match but, if one cannot be found, ignore it.
affinity:
  nodeAffinity:
    preferredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
      - matchExpressions:
        - key: "failure-domain.beta.kubernetes.io/zone"
          operator: In
          values: ["us-central1-a"]
affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
      - matchExpressions:
        - key: "failure-domain.beta.kubernetes.io/zone"
          operator: NotIn
          values: ["us-central1-a"]
See Advanced Scheduling and Node Affinity for more information.
Pod affinity and anti-affinity are helpful if you want to allow Kubernetes the freedom to select which zone an application lands in, but ensure that, whichever zone it chooses, another component of that application lands in the same zone.
Another use case is if you have two application components that, due to security reasons, cannot be on the same physical box. However, you do not want to lock them into labels on nodes. You want them to land anywhere, but still honor anti-affinity.
Many of the same high-level concepts mentioned for node affinity and anti-affinity hold true here. For pods, you declare a topologyKey, which is used as the boundary object for the placement logic.
affinity:
  podAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
    - labelSelector:
        matchExpressions:
        - key: service
          operator: In
          values: ["S1"]
      topologyKey: failure-domain.beta.kubernetes.io/zone

affinity:
  podAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
    - labelSelector:
        matchExpressions:
        - key: service
          operator: In
          values: ["S1"]
      topologyKey: kubernetes.io/hostname
See Advanced Scheduling and Pod Affinity and Anti-affinity for more information.
Taints and tolerations allow nodes to control which pods should (or should not) be scheduled on them. A taint allows a node to refuse a pod to be scheduled unless that pod has a matching toleration.
You apply taints to a node through the node specification (NodeSpec) and apply tolerations to a pod through the pod specification (PodSpec). A taint on a node instructs the node to repel all pods that do not tolerate the taint.
Taints and tolerations consist of a key, value, and effect. An operator allows you to leave one of these parameters empty.
In OpenShift Container Platform 3.6, daemon pods do respect taints and tolerations, but they are created with NoExecute tolerations for the node.alpha.kubernetes.io/notReady and node.alpha.kubernetes.io/unreachable taints with no tolerationSeconds. This ensures that when the TaintBasedEvictions alpha feature is enabled, they will not be evicted when there are node problems such as a network partition. (When the TaintBasedEvictions feature is not enabled, they are also not evicted in these scenarios, but due to hard-coded behavior of the NodeController rather than due to tolerations.)
Set the taint from the command line:
$ oc taint nodes node1 key=value:NoSchedule
Set the toleration in the PodSpec:
tolerations:
- key: "key"
  operator: "Equal"
  value: "value"
  effect: "NoSchedule"
This feature is currently in Technology Preview and not for production workloads.
OpenShift Container Platform has long offered easy integration between continuous integration pipelines that create deployable Docker images and automatic redeployment and rollout with DeploymentConfigs. This makes it easy to define a standard process for continuous deployment that keeps your application always running. However, as new, higher-level constructs like Deployments and StatefulSets reached maturity in Kubernetes, there was no easy way to leverage them and still preserve automatic CI/CD.
In addition, the image stream concept in OpenShift Container Platform makes it easy to centralize and manage images that may come from many different locations. But to leverage those images in Kubernetes resources, you had to provide the full registry (an internal service IP), the namespace, and the tag of the image, which meant that you did not get the ease of use that BuildConfigs and DeploymentConfigs offer by allowing direct reference of an image stream tag.
Starting in OpenShift Container Platform 3.6, we aim to close that gap, both by making it just as easy to trigger redeployment of Kubernetes Deployments and StatefulSets, and by allowing Kubernetes resources to easily reference OpenShift Container Platform image stream tags directly.
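A hedged sketch of wiring this up with oc set triggers (the deployment name example, container name web, and image stream tag example:latest are placeholders):

$ oc set triggers deployment/example --from-image=example:latest -c web

This records an image.openshift.io/triggers annotation on the Deployment so that pushing a new example:latest image rolls out the change.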
See Using Image Streams with Kubernetes Resources for more information.
When working with image signatures as the image-admin role, you can now see the status of the images in terms of their signatures.
You can now use the oc adm verify-image-signature command to save or remove signatures. The resulting oc describe istag output displays additional metadata about the signature's status.
$ oc describe istag origin-pod:latest
Image Signatures:
  Name:   sha256:c13060b74c0348577cbe07dedcdb698f7d893ea6f74847154e5ef3c8c9369b2c@f66d720cfaced1b33e8141a844e793be
  Type:   atomic
  Status: Unverified

# Verify the image and save the result back to the image stream
$ oc adm verify-image-signature sha256:c13060b74c0348577cbe07dedcdb698f7d893ea6f74847154e5ef3c8c9369b2c \
    --expected-identity=172.30.204.70:5000/test/origin-pod:latest --save --as=system:admin
sha256:c13060b74c0348577cbe07dedcdb698f7d893ea6f74847154e5ef3c8c9369b2c signature 0 is verified (signed by key: "172B61E538AAC0EE")

# Check the image status
$ oc describe istag origin-pod:latest
Image Signatures:
  Name:   sha256:c13060b74c0348577cbe07dedcdb698f7d893ea6f74847154e5ef3c8c9369b2c@f66d720cfaced1b33e8141a844e793be
  Type:   atomic
  Status: Verified
  Issued By: 172B61E538AAC0EE
  Signature is Trusted (verified by user "system:admin" on 2017-04-28 12:32:25 +0200 CEST)
  Signature is ForImage ( on 2017-04-28 12:32:25 +0200 CEST)
See Image Signatures and Enabling Image Signature Support for more information.
There is now a programmable way to read and write signatures using only the docker registry API.
To read, you must be authenticated to the registry.
GET /extensions/v2/{namespace}/{name}/signatures/{digest}

$ curl http://<user>:<token>@<registry-endpoint>:5000/extensions/v2/<namespace>/<name>/signatures/sha256:<digest>

{
  "signatures": [
    {
      "version": 2,
      "type": "atomic",
      "name": "sha256:4028782c08eae4a8c9a28bf661c0a8d1c2fc8e19dbaae2b018b21011197e1484@cddeb7006d914716e2728000746a0b23",
      "content": "<base64 encoded signature>"
    }
  ]
}
To write, you must have the image-signer role.
PUT /extensions/v2/{namespace}/{name}/signatures/{digest}

$ curl http://<user>:<token>@<registry-endpoint>:5000/extensions/v2/<namespace>/<name>/signatures/sha256:<digest>

JSON:
{
  "version": 2,
  "type": "atomic",
  "name": "sha256:4028782c08eae4a8c9a28bf661c0a8d1c2fc8e19dbaae2b018b21011197e1484@cddeb7006d914716e2728000746a0b23",
  "content": "<base64 encoded signature>"
}
This feature is currently in Technology Preview and not for production workloads.
If a resource is not managed by quota, a user has no restriction on the amount of resource that can be consumed. For example, if there is no quota on storage related to the gold storage class, the amount of gold storage a project can create is unbounded.
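A hedged sketch of the corresponding master configuration (the admission plug-in configuration follows the upstream Kubernetes format, and the gold storage class is only an example) that refuses gold storage unless it is covered by quota:

admissionConfig:
  pluginConfig:
    ResourceQuota:
      configuration:
        apiVersion: resourcequota.admission.k8s.io/v1alpha1
        kind: Configuration
        limitedResources:
        - resource: persistentvolumeclaims
          matchContains:
          - gold.storageclass.storage.k8s.io/requests.storage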
See Setting Quotas for more information.
The AWS EFS provisioner allows you to dynamically use the AWS EFS endpoint to get NFS remote persistent volumes on AWS.
It leverages the external dynamic provisioner interface. It is provided as a docker image that you configure with a configMap and deploy on OpenShift Container Platform. Then, you can use a storage class with the appropriate configuration.
apiVersion: storage.k8s.io/v1beta1
kind: StorageClass
metadata:
  name: slow
provisioner: foobar.io/aws-efs
parameters:
  gidMin: "40000"
  gidMax: "50000"
gidMin and gidMax are the minimum and maximum values, respectively, of the GID range for the storage class. A unique value (GID) in this range (gidMin to gidMax) is used for dynamically provisioned volumes.
VMware vSphere storage allows you to dynamically use VMware vSphere storage options such as VSANDatastore, ext3, vmdk, and VSAN, while honoring vSphere Storage Policy (SPBM) mappings.
VMware vSphere storage leverages the cloud provider interface in Kubernetes to trigger this in-tree dynamic storage provisioner. Once the cloud provider has the correct credential information, tenants can leverage storage class to select the desired storage.
kind: StorageClass
apiVersion: storage.k8s.io/v1beta1
metadata:
  name: fast
provisioner: kubernetes.io/vsphere-volume
parameters:
  diskformat: zeroedthick
See Configuring for VMWare vSphere and Persistent Storage Using VMWare vSphere Volume for more information.
You can now use CHAP authentication for your iSCSI remote persistent volumes (PVs). Also, you can annotate your PVs to leverage any mount options that are supported by that underlying storage technology.
The tenant supplies the correct user name and password for the CHAP authentication as a secret in their podspec. For mount options, you supply the annotation in the PV.
volumes:
- name: iscsivol
  iscsi:
    targetPortal: 127.0.0.1
    iqn: iqn.2015-02.example.com:test
    lun: 0
    fsType: ext4
    readOnly: true
    chapAuthDiscovery: true
    chapAuthSession: true
    secretRef:
      name: chap-secret
Set the volume.beta.kubernetes.io/mount-options annotation, for example: volume.beta.kubernetes.io/mount-options: rw,nfsvers=4,noexec.
See Mount Options for more information.
Mount Options are currently in Technology Preview and not for production workloads.
You can now specify mount options while mounting a persistent volume by using the annotation volume.beta.kubernetes.io/mount-options.
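For example, a minimal sketch of an NFS persistent volume carrying the annotation (the server, path, and size are placeholders):

apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv0001
  annotations:
    volume.beta.kubernetes.io/mount-options: rw,nfsvers=4,noexec
spec:
  capacity:
    storage: 1Gi
  accessModes:
  - ReadWriteOnce
  nfs:
    server: nfs.example.com
    path: /exports/pv0001
  persistentVolumeReclaimPolicy: Recycle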
See Persistent Storage for more information.
Previously, only a few supported storage options existed for a scaled, highly-available integrated OpenShift Container Platform (OCP) registry. Automated container native storage (CNS) 3.6 and the OpenShift Container Platform installer now include an option to automatically deploy a scale-out registry based on highly available storage, out of the box. When enabled in the installer’s inventory file, CNS will be deployed on a desired set of nodes (for instance, infrastructure nodes). Then, the required underlying storage constructs will automatically be created and configured for use with the deployed registry. Moving an existing registry deployment from NFS to CNS is also supported, and requires additional steps for data migration.
Backing the OpenShift Container Platform registry with CNS enables users to take advantage of the globally available storage capacity, strong read/write consistency, three-way replica, and RHGS data management features.
The feature is provided through integrations in the OpenShift Container Platform advanced installation process. A few dedicated storage devices and a simple change to the inventory file is all that is required.
The OpenShift Commercial Evaluation subscription includes container native storage (CNS) and container ready storage (CRS) solutions.
The OpenShift Commercial Evaluation subscription SKU bundles the CNS and CRS features, with additional entitlements to evaluate OpenShift Container Platform with CNS/CRS.
Evaluation SKUs are not bundled with OpenShift Container Platform’s SKUs or entitlements. Consult your Red Hat account representative for subscription guidance.
See Recommended Host Practices for updated etcd performance guidance.
In OpenShift Container Platform 3.6, the maximum number of nodes per cluster is 2000.
OpenShift Container Platform 3.6 introduces the ability to connect to multiple destinations from a project without needing to reserve a separate source IP for each of them. Also, there is now an optional fallback IP. The old syntax continues to behave the same, and there is no change to the EGRESS_SOURCE and EGRESS_GATEWAY definitions.
Old way:
- name: EGRESS_DESTINATION
  value: 203.0.113.25
New way:
- name: EGRESS_DESTINATION
  value: |
    80 tcp 1.2.3.4
    8080 tcp 5.6.7.8 80
    8443 tcp 9.10.11.12 443
    13.14.15.16
localport udp|tcp dest-ip [dest-port]
See Managing Networking for more information.
TLS connections (certificate validations) do not easily work because the client needs to connect to the egress router’s IP (or name) rather than to the destination server’s IP/name. Now, the egress router can be run as a proxy rather than just redirecting packets.
How it works:
Create a new project and pod.
Create the egress-router-http-proxy pod.
Create the service for egress-router-http-proxy.
Set up http_proxy in the pod:
# export http_proxy=http://my-egress-router-service-name:8080
# export https_proxy=http://my-egress-router-service-name:8080
Test and check squid headers in response:
$ curl -ILs http://www.redhat.com

$ curl -ILs https://rover.redhat.com
HTTP/1.1 403 Forbidden
Via: 1.1 egress-http-proxy (squid/x.x.x)

$ curl -ILs http://www.google.com
HTTP/1.1 200 OK
Via: 1.1 egress-http-proxy (squid/x.x.x)

$ curl -ILs https://www.google.com
HTTP/1.1 200 Connection established
HTTP/1.1 200 OK
See Managing Networking for more information.
There are several benefits of using DNS names versus IP addresses:
It tracks DNS mapping changes.
Human-readable, easily remembered naming.
Potentially backed by multiple IP addresses.
How it works:
Create the project and pod.
Deploy egress network policy with DNS names.
Validate the firewall.
{ "kind": "EgressNetworkPolicy", "apiVersion": "v1", "metadata": { "name": "policy-test" }, "spec": { "egress": [ { "type": "Allow", "to": { "dnsName": "stopdisablingselinux.com" } }, { "type": "Deny", "to": { "cidrSelector": "0.0.0.0/0" } } ] } }
Exposing services by creating routes will ignore the Egress Network Policy.
Egress network policy service endpoint filtering is performed on the node.
See Managing Pods for more information.
Network Policy (currently in Technology Preview and not for production workloads) is an optional plug-in specification of how selections of pods are allowed to communicate with each other and other network endpoints. It provides fine-grained network namespace isolation using labels and port specifications.
After installing the Network Policy plug-in, an annotation that flips the namespace from allow all traffic to deny all traffic must first be set on the namespace. At that point, NetworkPolicies can be created that define what traffic to allow. The annotation is as follows:
$ oc annotate namespace ${ns} 'net.beta.kubernetes.io/network-policy={"ingress":{"isolation":"DefaultDeny"}}'
The allow-to-red policy specifies "all red pods in namespace project-a allow traffic from any pods in any namespace." This does not apply to the red pod in namespace project-b because podSelector only applies to the namespace in which it was applied.
kind: NetworkPolicy
apiVersion: extensions/v1beta1
metadata:
  name: allow-to-red
spec:
  podSelector:
    matchLabels:
      type: red
  ingress:
  - {}
See Managing Networking for more information.
OpenShift Container Platform 3.6 introduces improved router customization documentation. Many RFEs could be solved with better documentation of the HAProxy features and functions that are now added, and of their fields customizable via annotations and environment variables, such as router annotations for per-route operations.
For example, to change the HAProxy behavior to round-robin load balancing by annotating a route:
$ oc annotate route/ab haproxy.router.openshift.io/balance=roundrobin
For more information, see Deploying a Customized HAProxy Router.
With OpenShift Container Platform 3.6, there is now the added ability to use custom F5 partitions for properly securing and isolating OpenShift Container Platform route synchronization and configuration.
The default is still the /Common or global partition if not specified, and behavior is unchanged if the partition path is not specified. This new feature ensures that all the referenced objects are in the same partition, including virtual servers (http or https).
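A hedged sketch of deploying the F5 router with a partition path (the host, credentials, and partition are placeholders; the flags follow the oc adm router F5 options):

$ oc adm router f5-router --type=f5-router \
    --external-host=10.0.0.2 \
    --external-host-username=admin --external-host-password=<password> \
    --external-host-partition-path=/OpenShift \
    --service-account=router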
The router container is able to terminate IPv6 traffic and pass HTTP[S] through to the back-end pod.
The IPv6 interfaces on the router must be enabled, with the IPv6 addresses listening (::80, ::443). The client needs to reach the router node using IPv6.
IPv4 should be unaffected and continue to work, even if IPv6 is disabled.
HAProxy can only terminate IPv6 traffic when the router uses the network stack of the host (the default). When using the container network stack, HAProxy cannot terminate IPv6 traffic.
The Ansible service broker is currently in Technology Preview and not for production workloads. This feature includes:
Implementation of the open service broker API that enables users to leverage Ansible for provisioning and managing of services via the service catalog on OpenShift Container Platform.
Standardized approach for delivering simple to complex multi-container OpenShift Container Platform services.
Works in conjunction with Ansible playbook bundles (APB), a lightweight meta container comprising a few named playbooks for each open service broker API operation.
The service catalog and Ansible service broker must be configured during OpenShift Container Platform installation. Once enabled, APB services can be deployed right from the Service Catalog UI.
In OpenShift Container Platform 3.6.0, the Ansible service broker exposes an unprotected route, which allows unauthenticated users to provision resources in the cluster, namely the MediaWiki and PostgreSQL Ansible playbook bundles.
See Configuring the Ansible Service Broker for more information.
Ansible playbook bundles (APB) (currently in Technology Preview and not for production workloads) is a short-lived, lightweight container image consisting of:
Simple directory structure with named action playbooks
Metadata consisting of:
required/optional parameters
dependencies (provision versus bind)
Ansible runtime environment
Leverages existing investment in Ansible playbooks and roles
Developer tooling available for guided approach
Easily modified or extended
Example APB services included with OpenShift Container Platform 3.6:
MediaWiki, PostgreSQL
When a user orders an application from the service catalog, the Ansible service broker will download the associated APB image from the registry and run it. Once the named operation has been performed on the service, the APB image will then terminate.
The installation of containerized CloudForms inside OpenShift Container Platform is now part of the main installer (currently in Technology Preview and not for production workloads). It is now treated like other common components (metrics, logging, and so on).
After the OpenShift Container Platform cluster is provisioned, there is an additional playbook you can run to deploy CloudForms into the environment (using the openshift_cfme_install_app flag in the hosts file).
$ ansible-playbook -v -i <INVENTORY_FILE> playbooks/byo/openshift-cfme/config.yml
Requirements:
Type       | Size  | CPUs | Memory
-----------|-------|------|-------
Masters    | 1+    | 8    | 12 GB
Nodes      | 2+    | 4    | 8 GB
PV Storage | 25 GB | N/A  | N/A
NFS is the only storage option for the Postgres database at this time. The NFS server should be on the first master host. The persistent volume backing the NFS storage volume is mounted on /exports.
OpenShift Container Platform (OCP) 3.6 now includes an integrated and simplified installation of container native storage (CNS) through the advanced installer. The installer’s inventory file is simply configured. The end result is an automated, supportable, best practice installation of CNS, providing ready-to-use persistent storage with a pre-created storage class. The advanced installer now includes automated and integrated support for deployment of CNS, correctly configured and highly available out-of-the-box.
CNS storage device details are added to the installer’s inventory file. Examples are provided in the OpenShift Container Platform advanced installation documentation. The installer manages configuration and deployment of CNS, its dynamic provisioner, and other pertinent details.
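A hedged sketch of the relevant inventory additions (host names and device paths are placeholders; see the advanced installation documentation for the supported variables):

[OSEv3:children]
masters
nodes
glusterfs

[glusterfs]
node1.example.com glusterfs_devices='[ "/dev/sdb" ]'
node2.example.com glusterfs_devices='[ "/dev/sdb" ]'
node3.example.com glusterfs_devices='[ "/dev/sdb" ]'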
This feature is currently in Technology Preview and not for production workloads.
RHEL System Containers offer more control over the life cycle of the services that do not run inside OpenShift Container Platform or Kubernetes. Additional system containers will be offered over time.
System containers leverage OSTree on RHEL or Atomic Host. They are controlled by the host init system (systemd) and therefore can be leveraged earlier in the boot sequence. This feature is enabled in the installer configuration.
For more information, see Configuring System Containers.
This feature is currently in Technology Preview and not for production workloads.
To run the OpenShift Container Platform installer as a system container:
$ atomic install --system --set INVENTORY_FILE=$(pwd)/inventory registry:port/openshift3/ose-ansible:v3.6
$ systemctl start ose-ansible-v3.6
Starting with new installations of OpenShift Container Platform 3.6, the etcd3 v3 data model is the default. By moving to the etcd3 v3 data model, there is now:
Larger memory space to enable larger cluster sizes.
Increased stability in adding and removing nodes in general life cycle actions.
A significant performance boost.
A migration playbook will be provided in the near future allowing upgraded environments to migrate to the v3 data model.
You now have the ability to change the certificate expiration date en masse across the cluster for the various framework components that use TLS.
We offer new cluster variables per framework area so that you can use different time frames for different framework components. Once set, issue the new redeploy-openshift-ca playbook. This playbook only works for redeploying the root CA certificate of OpenShift Container Platform. Once you set the following options, they will be effective in a new installation, or they can be used when redeploying certificates against an existing cluster.
# CA, node and master certificate expiry
openshift_ca_cert_expire_days=1825
openshift_node_cert_expire_days=730
openshift_master_cert_expire_days=730

# Registry certificate expiry
openshift_hosted_registry_cert_expire_days=730

# Etcd CA, peer, server and client certificate expiry
etcd_ca_default_days=1825
OpenShift Container Platform engineering and the OpenShift Online operations teams have been working closely together to refactor and enhance the installer. The OpenShift Container Platform 3.6 release includes the culmination of those efforts, including:
Upgrading from OpenShift Container Platform 3.5 to 3.6
Idempotency refactoring of the configuration role
Swap handling during installation
All BYO playbooks pull from a normalized group source
A final port of operation’s Ansible modules
A refactoring of excluder roles
The metrics and logging deployers were replaced with playbook2image for oc cluster up so that openshift-ansible is used to install logging and metrics:
$ oc cluster up --logging --metrics
Check metrics and logging pod status:
$ oc get pod -n openshift-infra
$ oc get pod -n logging
By default, the Elasticsearch instance deployed with OpenShift Container Platform aggregated logging is not accessible from outside the deployed OpenShift Container Platform cluster. You can now enable an external route for accessing the Elasticsearch instance via its native APIs to enable external access to data via various supported tools.
Direct access to the Elasticsearch instance is enabled using your OpenShift token. You have the ability to provide the external Elasticsearch and Elasticsearch Operations host names when creating the server certificate (similar to Kibana). The provided Ansible tasks simplify route deployment.
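A hedged sketch of the Ansible inventory variables involved (the variable names follow the openshift_logging role; the host names are placeholders):

openshift_logging_es_allow_external=True
openshift_logging_es_hostname=elasticsearch.example.com
openshift_logging_es_ops_allow_external=True
openshift_logging_es_ops_hostname=elasticsearch-ops.example.com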
mux is a new Technology Preview feature for OpenShift Container Platform 3.6.0 designed to facilitate better scaling of aggregated logging. It uses a smaller set of Fluentd instances (called muxes), kept near the Elasticsearch pod, to improve the efficiency of indexing log records into Elasticsearch.
See Aggregating Container Logs for more information.
This feature (currently in Technology Preview and not for production workloads) brings the Service Catalog experience to the CLI.
You can run oc cluster up --version=latest --service-catalog=true to get the Service Catalog experience in OpenShift Container Platform 3.6.
The template service broker (currently in Technology Preview) exposes OpenShift templates through the open service broker API to the Service Catalog.
The template service broker (TSB) matches the provision, deprovision, bind, and unbind lifecycle operations with existing templates. No changes are required to templates unless you expose bind. Your application gets injected with configuration details (bind).
The TSB is currently a Technology Preview feature and should not be used in production clusters. Enabling the TSB currently requires opening unauthenticated access to the cluster; this security issue will be resolved before exiting the Technology Preview phase. |
See Configuring the Template Service Broker for more information.
Previously, only oc adm prune could be used. Now, you can define how much build history you want to keep per build configuration. Also, you can set successful and failed history limits separately.
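A minimal sketch of the corresponding BuildConfig spec fields (the limits shown are arbitrary):

spec:
  successfulBuildsHistoryLimit: 2
  failedBuildsHistoryLimit: 2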
See Advanced Build Operations for more information.
In OpenShift Container Platform 3.6, it is now easier to make images available as slave pod templates.
Slaves are defined as image streams or image stream tags with the appropriate label. Slaves can also be specified via a ConfigMap with the appropriate label.
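For example, assuming the synchronization looks for the role=jenkins-slave label and maven-slave is an existing image stream in the project:

$ oc label imagestream maven-slave role=jenkins-slave -n myproject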
See Using the Jenkins Kubernetes Plug-in to Run Jobs for more information.
Builds now record timing information based on more granular steps.
Information such as how long it took to pull the base image, clone the source, build the source, and push the image are provided. For example:
$ oc describe build nodejs-ex-1
Name:           nodejs-ex-1
Namespace:      myproject
Created:        2 minutes ago
Status:         Complete
Started:        Fri, 07 Jul 2017 17:49:37 EDT
Duration:       2m23s
  FetchInputs:      2s
  CommitContainer:  6s
  Assemble:         36s
  PostCommit:       0s
  PushImage:        1m0s
OpenShift Container Platform uses the following default configuration for eviction-hard:
...
kubeletArguments:
eviction-hard:
- memory.available<100Mi
- nodefs.available<10%
- nodefs.inodesFree<5%
- imagefs.available<15%
...
See Handling Out of Resource Errors for more information.
Webhook triggers for GitHub and Bitbucket.
HTTPD 2.4 s2i support.
Separate build events for start, canceled, success, and fail.
Support for arguments in Dockerfiles.
Credential support for the Jenkins Sync plug-in, for ease of working with an external Jenkins instance.
ValueFrom support in build environment variables.
Deprecated Jenkins v1 image.
oc cluster up: support for launching the service catalog.
Switch to nip.io from xip.io, with improved stability.
You can now opt into the service catalog (currently in Technology Preview and not for production workloads) during installation or upgrade.
When developing microservices-based applications to run on cloud native platforms, there are many ways to provision different resources and share their coordinates, credentials, and configuration, depending on the service provider and the platform.
To give developers a more seamless experience, OpenShift Container Platform includes a Service Catalog, an implementation of the open service broker API (OSB API) for Kubernetes. This allows users to connect any of their applications deployed in OpenShift Container Platform to a wide variety of service brokers.
The service catalog allows cluster administrators to integrate multiple platforms using a single API specification. The OpenShift Container Platform web console displays the service classes offered by brokers in the service catalog, allowing users to discover and instantiate those services for use with their applications.
As a result, service users benefit from ease and consistency of use across different types of services from different providers, while service providers benefit from having one integration point that gives them access to multiple platforms.
This feature consists of:
The Service Consumer: The individual, application, or service that uses a service enabled by the broker and catalog.
The Catalog: Where services are published for consumption.
Service Broker: Publishes services and intermediates service creation and credential configuration with a provider.
Service Provider: The technology delivering the service.
Open Service Broker API: Lists services, provisions and deprovisions, binds, and unbinds.
See Enabling the Service Catalog for more information.
In OpenShift Container Platform 3.6, a better initial user experience (currently in Technology Preview and not for production workloads) is introduced, motivated by the service catalog. This includes:
A task-focused interface.
Key call-outs.
Unified search.
Streamlined navigation.
The search catalog feature (currently in Technology Preview and not for production workloads) provides a single, simple way to quickly get what you want.