After installing OpenShift Container Platform, you can further expand and customize your cluster to your requirements, including taking steps to prepare for users.
The OpenShift Container Platform control plane includes a built-in OAuth server. Developers and administrators obtain OAuth access tokens to authenticate themselves to the API.
As an administrator, you can configure OAuth to specify an identity provider after you install your cluster.
By default, only a kubeadmin
user exists on your cluster. To specify an
identity provider, you must create a custom resource (CR) that describes
that identity provider and add it to the cluster.
OpenShift Container Platform user names containing /, :, or % are not supported.
You can configure the following types of identity providers:
Identity provider | Description |
---|---|
htpasswd | Configure the htpasswd identity provider to validate user names and passwords against a flat file generated using htpasswd. |
Keystone | Configure the keystone identity provider to integrate your cluster with an OpenStack Keystone v3 server for shared authentication. |
LDAP | Configure the ldap identity provider to validate user names and passwords against an LDAPv3 server, using simple bind authentication. |
Basic authentication | Configure a basic-authentication identity provider so that users can log in with credentials validated against a remote identity provider. Basic authentication is a generic back-end integration mechanism. |
Request header | Configure a request-header identity provider to identify users from request header values, such as X-Remote-User. It is typically used in combination with an authenticating proxy, which sets the request header value. |
GitHub or GitHub Enterprise | Configure a github identity provider to validate user names and passwords against GitHub or GitHub Enterprise's OAuth authentication server. |
GitLab | Configure a gitlab identity provider to use GitLab.com or any other GitLab instance as an identity provider. |
Google | Configure a google identity provider using Google's OpenID Connect integration. |
OpenID Connect | Configure an oidc identity provider to integrate with an OpenID Connect identity provider using an Authorization Code Flow. |
After you define an identity provider, you can use RBAC to define and apply permissions.
The following parameters are common to all identity providers:
Parameter | Description |
---|---|
name | The provider name is prefixed to provider user names to form an identity name. |
mappingMethod | Defines how new identities are mapped to users when they log in. Enter one of the following values: claim, lookup, or add. |
When adding or changing identity providers, you can map identities from the new provider to existing users by setting the mappingMethod parameter to add.
The following custom resource (CR) shows the parameters and default values that you use to configure an identity provider. This example uses the htpasswd identity provider.
apiVersion: config.openshift.io/v1
kind: OAuth
metadata:
  name: cluster
spec:
  identityProviders:
  - name: my_identity_provider (1)
    mappingMethod: claim (2)
    type: HTPasswd
    htpasswd:
      fileData:
        name: htpass-secret (3)
1 | This provider name is prefixed to provider user names to form an identity name. |
2 | Controls how mappings are established between this provider's identities and User objects. |
3 | An existing secret containing a file generated using htpasswd. |
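The CR above references an htpass-secret secret, which for the htpasswd identity provider must exist in the openshift-config namespace. As a minimal sketch, assuming a local users.htpasswd file and an illustrative user1 account, you could create that secret and then apply the OAuth configuration saved in a file named, for example, oauth-config.yaml:
$ htpasswd -c -B -b users.htpasswd user1 MyPassword
$ oc create secret generic htpass-secret --from-file=htpasswd=users.htpasswd -n openshift-config
$ oc apply -f oauth-config.yaml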
Understand and apply role-based access control.
Role-based access control (RBAC) objects determine whether a user is allowed to perform a given action within a project.
Cluster administrators can use the cluster roles and bindings to control who has various access levels to OpenShift Container Platform itself and to all projects.
Developers can use local roles and bindings to control who has access to their projects. Note that authorization is a separate step from authentication, which is more about determining the identity of who is taking the action.
Authorization is managed using:
Authorization object | Description |
---|---|
Rules | Sets of permitted verbs on a set of objects. For example, whether a user or service account can create pods. |
Roles | Collections of rules. You can associate, or bind, users and groups to multiple roles. |
Bindings | Associations between users and/or groups with a role. |
There are two levels of RBAC roles and bindings that control authorization:
RBAC level | Description |
---|---|
Cluster RBAC | Roles and bindings that are applicable across all projects. Cluster roles exist cluster-wide, and cluster role bindings can reference only cluster roles. |
Local RBAC | Roles and bindings that are scoped to a given project. While local roles exist only in a single project, local role bindings can reference both cluster and local roles. |
A cluster role binding is a binding that exists at the cluster level. A role binding exists at the project level. The cluster role view must be bound to a user using a local role binding for that user to view the project. Create local roles only if a cluster role does not provide the set of permissions needed for a particular situation.
This two-level hierarchy allows reuse across multiple projects through the cluster roles while allowing customization inside of individual projects through local roles.
During evaluation, both the cluster role bindings and the local role bindings are used. For example:
Cluster-wide "allow" rules are checked.
Locally-bound "allow" rules are checked.
Deny by default.
OpenShift Container Platform includes a set of default cluster roles that you can bind to users and groups cluster-wide or locally.
It is not recommended to manually modify the default cluster roles. Modifications to these system roles can prevent a cluster from functioning properly. |
Default cluster role | Description |
---|---|
admin | A project manager. If used in a local binding, an admin has rights to view any resource in the project and modify any resource in the project except for quota. |
basic-user | A user that can get basic information about projects and users. |
cluster-admin | A super-user that can perform any action in any project. When bound to a user with a local binding, they have full control over quota and every action on every resource in the project. |
cluster-status | A user that can get basic cluster status information. |
cluster-reader | A user that can get or view most of the objects but cannot modify them. |
edit | A user that can modify most objects in a project but does not have the power to view or modify roles or bindings. |
self-provisioner | A user that can create their own projects. |
view | A user who cannot make any modifications, but can see most objects in a project. They cannot view or modify roles or bindings. |
Be mindful of the difference between local and cluster bindings. For example,
if you bind the cluster-admin
role to a user by using a local role binding,
it might appear that this user has the privileges of a cluster administrator.
This is not the case. Binding the cluster-admin role to a user in a project
grants super administrator privileges for only that project to the user. That user has the permissions of the cluster role admin
, plus a few additional permissions like the ability to edit rate limits, for that project. This binding can be confusing via the web console UI, which does not list cluster role bindings that are bound to true cluster administrators. However, it does list local role bindings that you can use to locally bind cluster-admin
.
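To make the distinction concrete, the following sketch contrasts the two bindings; the susan user and my-project names are illustrative:
$ oc adm policy add-role-to-user cluster-admin susan -n my-project
$ oc adm policy add-cluster-role-to-user cluster-admin susan
The first command creates a local role binding, so susan is a super administrator only within my-project. The second command creates a cluster role binding, making susan a true cluster administrator.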
The relationships between cluster roles, local roles, cluster role bindings, local role bindings, users, groups and service accounts are illustrated below.
OpenShift Container Platform evaluates authorization by using:
The user name and list of groups that the user belongs to.
The action you perform. In most cases, this consists of:
Project: The project you access. A project is a Kubernetes namespace with additional annotations that allows a community of users to organize and manage their content in isolation from other communities.
Verb: The action itself: get, list, create, update, delete, deletecollection, or watch.
Resource name: The API endpoint that you access.
The full list of bindings, the associations between users or groups with a role.
OpenShift Container Platform evaluates authorization by using the following steps:
The identity and the project-scoped action are used to find all bindings that apply to the user or their groups.
Bindings are used to locate all the roles that apply.
Roles are used to find all the rules that apply.
The action is checked against each rule to find a match.
If no matching rule is found, the action is then denied by default.
Remember that users and groups can be associated with, or bound to, multiple roles at the same time. |
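A quick way to see the result of this evaluation for a given action is the who-can subcommand, which lists the users and groups allowed to perform a verb on a resource. A sketch, with an illustrative project name:
$ oc adm policy who-can get pods -n my-project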
Project administrators can use the CLI to view local roles and bindings, including a matrix of the verbs and resources each are associated with.
The cluster role bound to the project administrator is limited in a project through a local binding. It is not bound cluster-wide like the cluster roles granted to the cluster-admin or system:admin. Cluster roles are roles defined at the cluster level but can be bound either at the cluster level or at the project level. |
The default admin, edit, view, and cluster-reader cluster roles support cluster role aggregation, where the cluster rules for each role are dynamically updated as new rules are created. This feature is relevant only if you extend the Kubernetes API by creating custom resources.
A Kubernetes namespace provides a mechanism to scope resources in a cluster. The Kubernetes documentation has more information on namespaces.
Namespaces provide a unique scope for:
Named resources to avoid basic naming collisions.
Delegated management authority to trusted users.
The ability to limit community resource consumption.
Most objects in the system are scoped by namespace, but some are excepted and have no namespace, including nodes and users.
A project is a Kubernetes namespace with additional annotations and is the central vehicle by which access to resources for regular users is managed. A project allows a community of users to organize and manage their content in isolation from other communities. Users must be given access to projects by administrators, or if allowed to create projects, automatically have access to their own projects.
Projects can have a separate name, displayName, and description.
The mandatory name is a unique identifier for the project and is most visible when using the CLI tools or API. The maximum name length is 63 characters.
The optional displayName is how the project is displayed in the web console (defaults to name).
The optional description can be a more detailed description of the project and is also visible in the web console.
Each project scopes its own set of:
Object | Description |
---|---|
Objects | Pods, services, replication controllers, etc. |
Policies | Rules for which users can or cannot perform actions on objects. |
Constraints | Quotas for each kind of object that can be limited. |
Service accounts | Service accounts act automatically with designated access to objects in the project. |
Cluster administrators can create projects and delegate administrative rights for the project to any member of the user community. Cluster administrators can also allow developers to create their own projects.
Developers and administrators can interact with projects by using the CLI or the web console.
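For example, a user who is allowed to create projects could create one from the CLI with an optional display name and description; the project name and descriptive text below are illustrative:
$ oc new-project demo --display-name="Demo project" --description="Scratch space for the demo team"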
OpenShift Container Platform comes with a number of default projects, and projects
starting with openshift-
are the most essential to users.
These projects host master components that run as pods and other infrastructure
components. The pods created in these namespaces that have a
critical pod annotation
are considered critical, and they have guaranteed admission by kubelet.
Pods created for master components in these namespaces are already marked as
critical.
Do not run workloads in or share access to default projects. Default projects are reserved for running core cluster components. The following default projects are considered highly privileged: default, kube-public, kube-system, openshift, openshift-infra, openshift-node, and other system-created namespaces that have the openshift.io/run-level label set to 0 or 1. |
You can use the oc
CLI to view cluster roles and bindings by using the
oc describe
command.
Install the oc
CLI.
Obtain permission to view the cluster roles and bindings.
Users with the cluster-admin
default cluster role bound cluster-wide can
perform any action on any resource, including viewing cluster roles and bindings.
To view the cluster roles and their associated rule sets:
$ oc describe clusterrole.rbac
Name: admin
Labels: kubernetes.io/bootstrapping=rbac-defaults
Annotations: rbac.authorization.kubernetes.io/autoupdate: true
PolicyRule:
Resources Non-Resource URLs Resource Names Verbs
--------- ----------------- -------------- -----
.packages.apps.redhat.com [] [] [* create update patch delete get list watch]
imagestreams [] [] [create delete deletecollection get list patch update watch create get list watch]
imagestreams.image.openshift.io [] [] [create delete deletecollection get list patch update watch create get list watch]
secrets [] [] [create delete deletecollection get list patch update watch get list watch create delete deletecollection patch update]
buildconfigs/webhooks [] [] [create delete deletecollection get list patch update watch get list watch]
buildconfigs [] [] [create delete deletecollection get list patch update watch get list watch]
buildlogs [] [] [create delete deletecollection get list patch update watch get list watch]
deploymentconfigs/scale [] [] [create delete deletecollection get list patch update watch get list watch]
deploymentconfigs [] [] [create delete deletecollection get list patch update watch get list watch]
imagestreamimages [] [] [create delete deletecollection get list patch update watch get list watch]
imagestreammappings [] [] [create delete deletecollection get list patch update watch get list watch]
imagestreamtags [] [] [create delete deletecollection get list patch update watch get list watch]
processedtemplates [] [] [create delete deletecollection get list patch update watch get list watch]
routes [] [] [create delete deletecollection get list patch update watch get list watch]
templateconfigs [] [] [create delete deletecollection get list patch update watch get list watch]
templateinstances [] [] [create delete deletecollection get list patch update watch get list watch]
templates [] [] [create delete deletecollection get list patch update watch get list watch]
deploymentconfigs.apps.openshift.io/scale [] [] [create delete deletecollection get list patch update watch get list watch]
deploymentconfigs.apps.openshift.io [] [] [create delete deletecollection get list patch update watch get list watch]
buildconfigs.build.openshift.io/webhooks [] [] [create delete deletecollection get list patch update watch get list watch]
buildconfigs.build.openshift.io [] [] [create delete deletecollection get list patch update watch get list watch]
buildlogs.build.openshift.io [] [] [create delete deletecollection get list patch update watch get list watch]
imagestreamimages.image.openshift.io [] [] [create delete deletecollection get list patch update watch get list watch]
imagestreammappings.image.openshift.io [] [] [create delete deletecollection get list patch update watch get list watch]
imagestreamtags.image.openshift.io [] [] [create delete deletecollection get list patch update watch get list watch]
routes.route.openshift.io [] [] [create delete deletecollection get list patch update watch get list watch]
processedtemplates.template.openshift.io [] [] [create delete deletecollection get list patch update watch get list watch]
templateconfigs.template.openshift.io [] [] [create delete deletecollection get list patch update watch get list watch]
templateinstances.template.openshift.io [] [] [create delete deletecollection get list patch update watch get list watch]
templates.template.openshift.io [] [] [create delete deletecollection get list patch update watch get list watch]
serviceaccounts [] [] [create delete deletecollection get list patch update watch impersonate create delete deletecollection patch update get list watch]
imagestreams/secrets [] [] [create delete deletecollection get list patch update watch]
rolebindings [] [] [create delete deletecollection get list patch update watch]
roles [] [] [create delete deletecollection get list patch update watch]
rolebindings.authorization.openshift.io [] [] [create delete deletecollection get list patch update watch]
roles.authorization.openshift.io [] [] [create delete deletecollection get list patch update watch]
imagestreams.image.openshift.io/secrets [] [] [create delete deletecollection get list patch update watch]
rolebindings.rbac.authorization.k8s.io [] [] [create delete deletecollection get list patch update watch]
roles.rbac.authorization.k8s.io [] [] [create delete deletecollection get list patch update watch]
networkpolicies.extensions [] [] [create delete deletecollection patch update create delete deletecollection get list patch update watch get list watch]
networkpolicies.networking.k8s.io [] [] [create delete deletecollection patch update create delete deletecollection get list patch update watch get list watch]
configmaps [] [] [create delete deletecollection patch update get list watch]
endpoints [] [] [create delete deletecollection patch update get list watch]
persistentvolumeclaims [] [] [create delete deletecollection patch update get list watch]
pods [] [] [create delete deletecollection patch update get list watch]
replicationcontrollers/scale [] [] [create delete deletecollection patch update get list watch]
replicationcontrollers [] [] [create delete deletecollection patch update get list watch]
services [] [] [create delete deletecollection patch update get list watch]
daemonsets.apps [] [] [create delete deletecollection patch update get list watch]
deployments.apps/scale [] [] [create delete deletecollection patch update get list watch]
deployments.apps [] [] [create delete deletecollection patch update get list watch]
replicasets.apps/scale [] [] [create delete deletecollection patch update get list watch]
replicasets.apps [] [] [create delete deletecollection patch update get list watch]
statefulsets.apps/scale [] [] [create delete deletecollection patch update get list watch]
statefulsets.apps [] [] [create delete deletecollection patch update get list watch]
horizontalpodautoscalers.autoscaling [] [] [create delete deletecollection patch update get list watch]
cronjobs.batch [] [] [create delete deletecollection patch update get list watch]
jobs.batch [] [] [create delete deletecollection patch update get list watch]
daemonsets.extensions [] [] [create delete deletecollection patch update get list watch]
deployments.extensions/scale [] [] [create delete deletecollection patch update get list watch]
deployments.extensions [] [] [create delete deletecollection patch update get list watch]
ingresses.extensions [] [] [create delete deletecollection patch update get list watch]
replicasets.extensions/scale [] [] [create delete deletecollection patch update get list watch]
replicasets.extensions [] [] [create delete deletecollection patch update get list watch]
replicationcontrollers.extensions/scale [] [] [create delete deletecollection patch update get list watch]
poddisruptionbudgets.policy [] [] [create delete deletecollection patch update get list watch]
deployments.apps/rollback [] [] [create delete deletecollection patch update]
deployments.extensions/rollback [] [] [create delete deletecollection patch update]
catalogsources.operators.coreos.com [] [] [create update patch delete get list watch]
clusterserviceversions.operators.coreos.com [] [] [create update patch delete get list watch]
installplans.operators.coreos.com [] [] [create update patch delete get list watch]
packagemanifests.operators.coreos.com [] [] [create update patch delete get list watch]
subscriptions.operators.coreos.com [] [] [create update patch delete get list watch]
buildconfigs/instantiate [] [] [create]
buildconfigs/instantiatebinary [] [] [create]
builds/clone [] [] [create]
deploymentconfigrollbacks [] [] [create]
deploymentconfigs/instantiate [] [] [create]
deploymentconfigs/rollback [] [] [create]
imagestreamimports [] [] [create]
localresourceaccessreviews [] [] [create]
localsubjectaccessreviews [] [] [create]
podsecuritypolicyreviews [] [] [create]
podsecuritypolicyselfsubjectreviews [] [] [create]
podsecuritypolicysubjectreviews [] [] [create]
resourceaccessreviews [] [] [create]
routes/custom-host [] [] [create]
subjectaccessreviews [] [] [create]
subjectrulesreviews [] [] [create]
deploymentconfigrollbacks.apps.openshift.io [] [] [create]
deploymentconfigs.apps.openshift.io/instantiate [] [] [create]
deploymentconfigs.apps.openshift.io/rollback [] [] [create]
localsubjectaccessreviews.authorization.k8s.io [] [] [create]
localresourceaccessreviews.authorization.openshift.io [] [] [create]
localsubjectaccessreviews.authorization.openshift.io [] [] [create]
resourceaccessreviews.authorization.openshift.io [] [] [create]
subjectaccessreviews.authorization.openshift.io [] [] [create]
subjectrulesreviews.authorization.openshift.io [] [] [create]
buildconfigs.build.openshift.io/instantiate [] [] [create]
buildconfigs.build.openshift.io/instantiatebinary [] [] [create]
builds.build.openshift.io/clone [] [] [create]
imagestreamimports.image.openshift.io [] [] [create]
routes.route.openshift.io/custom-host [] [] [create]
podsecuritypolicyreviews.security.openshift.io [] [] [create]
podsecuritypolicyselfsubjectreviews.security.openshift.io [] [] [create]
podsecuritypolicysubjectreviews.security.openshift.io [] [] [create]
jenkins.build.openshift.io [] [] [edit view view admin edit view]
builds [] [] [get create delete deletecollection get list patch update watch get list watch]
builds.build.openshift.io [] [] [get create delete deletecollection get list patch update watch get list watch]
projects [] [] [get delete get delete get patch update]
projects.project.openshift.io [] [] [get delete get delete get patch update]
namespaces [] [] [get get list watch]
pods/attach [] [] [get list watch create delete deletecollection patch update]
pods/exec [] [] [get list watch create delete deletecollection patch update]
pods/portforward [] [] [get list watch create delete deletecollection patch update]
pods/proxy [] [] [get list watch create delete deletecollection patch update]
services/proxy [] [] [get list watch create delete deletecollection patch update]
routes/status [] [] [get list watch update]
routes.route.openshift.io/status [] [] [get list watch update]
appliedclusterresourcequotas [] [] [get list watch]
bindings [] [] [get list watch]
builds/log [] [] [get list watch]
deploymentconfigs/log [] [] [get list watch]
deploymentconfigs/status [] [] [get list watch]
events [] [] [get list watch]
imagestreams/status [] [] [get list watch]
limitranges [] [] [get list watch]
namespaces/status [] [] [get list watch]
pods/log [] [] [get list watch]
pods/status [] [] [get list watch]
replicationcontrollers/status [] [] [get list watch]
resourcequotas/status [] [] [get list watch]
resourcequotas [] [] [get list watch]
resourcequotausages [] [] [get list watch]
rolebindingrestrictions [] [] [get list watch]
deploymentconfigs.apps.openshift.io/log [] [] [get list watch]
deploymentconfigs.apps.openshift.io/status [] [] [get list watch]
controllerrevisions.apps [] [] [get list watch]
rolebindingrestrictions.authorization.openshift.io [] [] [get list watch]
builds.build.openshift.io/log [] [] [get list watch]
imagestreams.image.openshift.io/status [] [] [get list watch]
appliedclusterresourcequotas.quota.openshift.io [] [] [get list watch]
imagestreams/layers [] [] [get update get]
imagestreams.image.openshift.io/layers [] [] [get update get]
builds/details [] [] [update]
builds.build.openshift.io/details [] [] [update]
Name: basic-user
Labels: <none>
Annotations: openshift.io/description: A user that can get basic information about projects.
rbac.authorization.kubernetes.io/autoupdate: true
PolicyRule:
Resources Non-Resource URLs Resource Names Verbs
--------- ----------------- -------------- -----
selfsubjectrulesreviews [] [] [create]
selfsubjectaccessreviews.authorization.k8s.io [] [] [create]
selfsubjectrulesreviews.authorization.openshift.io [] [] [create]
clusterroles.rbac.authorization.k8s.io [] [] [get list watch]
clusterroles [] [] [get list]
clusterroles.authorization.openshift.io [] [] [get list]
storageclasses.storage.k8s.io [] [] [get list]
users [] [~] [get]
users.user.openshift.io [] [~] [get]
projects [] [] [list watch]
projects.project.openshift.io [] [] [list watch]
projectrequests [] [] [list]
projectrequests.project.openshift.io [] [] [list]
Name: cluster-admin
Labels: kubernetes.io/bootstrapping=rbac-defaults
Annotations: rbac.authorization.kubernetes.io/autoupdate: true
PolicyRule:
Resources Non-Resource URLs Resource Names Verbs
--------- ----------------- -------------- -----
*.* [] [] [*]
[*] [] [*]
...
To view the current set of cluster role bindings, which shows the users and groups that are bound to various roles:
$ oc describe clusterrolebinding.rbac
Name: alertmanager-main
Labels: <none>
Annotations: <none>
Role:
Kind: ClusterRole
Name: alertmanager-main
Subjects:
Kind Name Namespace
---- ---- ---------
ServiceAccount alertmanager-main openshift-monitoring
Name: basic-users
Labels: <none>
Annotations: rbac.authorization.kubernetes.io/autoupdate: true
Role:
Kind: ClusterRole
Name: basic-user
Subjects:
Kind Name Namespace
---- ---- ---------
Group system:authenticated
Name: cloud-credential-operator-rolebinding
Labels: <none>
Annotations: <none>
Role:
Kind: ClusterRole
Name: cloud-credential-operator-role
Subjects:
Kind Name Namespace
---- ---- ---------
ServiceAccount default openshift-cloud-credential-operator
Name: cluster-admin
Labels: kubernetes.io/bootstrapping=rbac-defaults
Annotations: rbac.authorization.kubernetes.io/autoupdate: true
Role:
Kind: ClusterRole
Name: cluster-admin
Subjects:
Kind Name Namespace
---- ---- ---------
Group system:masters
Name: cluster-admins
Labels: <none>
Annotations: rbac.authorization.kubernetes.io/autoupdate: true
Role:
Kind: ClusterRole
Name: cluster-admin
Subjects:
Kind Name Namespace
---- ---- ---------
Group system:cluster-admins
User system:admin
Name: cluster-api-manager-rolebinding
Labels: <none>
Annotations: <none>
Role:
Kind: ClusterRole
Name: cluster-api-manager-role
Subjects:
Kind Name Namespace
---- ---- ---------
ServiceAccount default openshift-machine-api
...
You can use the oc
CLI to view local roles and bindings by using the
oc describe
command.
Install the oc
CLI.
Obtain permission to view the local roles and bindings:
Users with the cluster-admin
default cluster role bound cluster-wide can
perform any action on any resource, including viewing local roles and bindings.
Users with the admin
default cluster role bound locally can view and manage
roles and bindings in that project.
To view the current set of local role bindings, which show the users and groups that are bound to various roles for the current project:
$ oc describe rolebinding.rbac
To view the local role bindings for a different project, add the -n
flag
to the command:
$ oc describe rolebinding.rbac -n joe-project
Name: admin
Labels: <none>
Annotations: <none>
Role:
Kind: ClusterRole
Name: admin
Subjects:
Kind Name Namespace
---- ---- ---------
User kube:admin
Name: system:deployers
Labels: <none>
Annotations: openshift.io/description:
Allows deploymentconfigs in this namespace to rollout pods in
this namespace. It is auto-managed by a controller; remove
subjects to disa...
Role:
Kind: ClusterRole
Name: system:deployer
Subjects:
Kind Name Namespace
---- ---- ---------
ServiceAccount deployer joe-project
Name: system:image-builders
Labels: <none>
Annotations: openshift.io/description:
Allows builds in this namespace to push images to this
namespace. It is auto-managed by a controller; remove subjects
to disable.
Role:
Kind: ClusterRole
Name: system:image-builder
Subjects:
Kind Name Namespace
---- ---- ---------
ServiceAccount builder joe-project
Name: system:image-pullers
Labels: <none>
Annotations: openshift.io/description:
Allows all pods in this namespace to pull images from this
namespace. It is auto-managed by a controller; remove subjects
to disable.
Role:
Kind: ClusterRole
Name: system:image-puller
Subjects:
Kind Name Namespace
---- ---- ---------
Group system:serviceaccounts:joe-project
You can use the oc adm
administrator CLI to manage the roles and bindings.
Binding, or adding, a role to users or groups gives the user or group the access
that is granted by the role. You can add and remove roles to and from users and
groups using oc adm policy
commands.
You can bind any of the default cluster roles to local users or groups in your project.
Add a role to a user in a specific project:
$ oc adm policy add-role-to-user <role> <user> -n <project>
For example, you can add the admin
role to the alice
user in the joe project by running:
$ oc adm policy add-role-to-user admin alice -n joe
You can alternatively apply a RoleBinding object directly instead of using the oc adm policy command.
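A minimal RoleBinding sketch equivalent to the command above, assuming the admin cluster role, the alice user, and the joe project from that example:
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: admin
  namespace: joe
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: admin
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: alice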
View the local role bindings and verify the addition in the output:
$ oc describe rolebinding.rbac -n <project>
For example, to view the local role bindings for the joe
project:
$ oc describe rolebinding.rbac -n joe
Name: admin
Labels: <none>
Annotations: <none>
Role:
Kind: ClusterRole
Name: admin
Subjects:
Kind Name Namespace
---- ---- ---------
User kube:admin
Name: admin-0
Labels: <none>
Annotations: <none>
Role:
Kind: ClusterRole
Name: admin
Subjects:
Kind Name Namespace
---- ---- ---------
User alice (1)
Name: system:deployers
Labels: <none>
Annotations: openshift.io/description:
Allows deploymentconfigs in this namespace to rollout pods in
this namespace. It is auto-managed by a controller; remove
subjects to disa...
Role:
Kind: ClusterRole
Name: system:deployer
Subjects:
Kind Name Namespace
---- ---- ---------
ServiceAccount deployer joe
Name: system:image-builders
Labels: <none>
Annotations: openshift.io/description:
Allows builds in this namespace to push images to this
namespace. It is auto-managed by a controller; remove subjects
to disable.
Role:
Kind: ClusterRole
Name: system:image-builder
Subjects:
Kind Name Namespace
---- ---- ---------
ServiceAccount builder joe
Name: system:image-pullers
Labels: <none>
Annotations: openshift.io/description:
Allows all pods in this namespace to pull images from this
namespace. It is auto-managed by a controller; remove subjects
to disable.
Role:
Kind: ClusterRole
Name: system:image-puller
Subjects:
Kind Name Namespace
---- ---- ---------
Group system:serviceaccounts:joe
1 | The alice user has been added to the admin-0 RoleBinding. |
You can create a local role for a project and then bind it to a user.
To create a local role for a project, run the following command:
$ oc create role <name> --verb=<verb> --resource=<resource> -n <project>
In this command, specify:
<name>, the local role's name
<verb>, a comma-separated list of the verbs to apply to the role
<resource>, the resources that the role applies to
<project>, the project name
For example, to create a local role that allows a user to view pods in the
blue
project, run the following command:
$ oc create role podview --verb=get --resource=pod -n blue
To bind the new role to a user, run the following command:
$ oc adm policy add-role-to-user podview user2 --role-namespace=blue -n blue
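To confirm the binding behaves as intended, you can check the user's access with impersonation; this is a sketch and assumes you have permission to impersonate user2:
$ oc auth can-i get pods --as=user2 -n blue
The command prints yes when the podview role grants the access.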
You can create a cluster role.
To create a cluster role, run the following command:
$ oc create clusterrole <name> --verb=<verb> --resource=<resource>
In this command, specify:
<name>, the cluster role's name
<verb>, a comma-separated list of the verbs to apply to the role
<resource>, the resources that the role applies to
For example, to create a cluster role that allows a user to view pods, run the following command:
$ oc create clusterrole podviewonly --verb=get --resource=pod
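The new cluster role grants nothing until it is bound. As a sketch, you could bind it cluster-wide to an illustrative user2 account:
$ oc adm policy add-cluster-role-to-user podviewonly user2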
When you manage a user or group’s associated roles for local role bindings using the
following operations, a project may be specified with the -n
flag. If it is
not specified, then the current project is used.
You can use the following commands for local RBAC management.
Command | Description |
---|---|
oc adm policy who-can <verb> <resource> | Indicates which users can perform an action on a resource. |
oc adm policy add-role-to-user <role> <username> | Binds a specified role to specified users in the current project. |
oc adm policy remove-role-from-user <role> <username> | Removes a given role from specified users in the current project. |
oc adm policy remove-user <username> | Removes specified users and all of their roles in the current project. |
oc adm policy add-role-to-group <role> <groupname> | Binds a given role to specified groups in the current project. |
oc adm policy remove-role-from-group <role> <groupname> | Removes a given role from specified groups in the current project. |
oc adm policy remove-group <groupname> | Removes specified groups and all of their roles in the current project. |
You can also manage cluster role bindings using the following
operations. The -n
flag is not used for these operations because
cluster role bindings use non-namespaced resources.
Command | Description |
---|---|
oc adm policy add-cluster-role-to-user <role> <username> | Binds a given role to specified users for all projects in the cluster. |
oc adm policy remove-cluster-role-from-user <role> <username> | Removes a given role from specified users for all projects in the cluster. |
oc adm policy add-cluster-role-to-group <role> <groupname> | Binds a given role to specified groups for all projects in the cluster. |
oc adm policy remove-cluster-role-from-group <role> <groupname> | Removes a given role from specified groups for all projects in the cluster. |
The cluster-admin
role is required to perform administrator
level tasks on the OpenShift Container Platform cluster, such as modifying
cluster resources.
You must have created a user to define as the cluster admin.
Define the user as a cluster admin:
$ oc adm policy add-cluster-role-to-user cluster-admin <user>
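To sanity-check the result, you can test the user's access with impersonation; this sketch assumes you have permission to impersonate the user:
$ oc auth can-i '*' '*' --as=<user>
The command prints yes for a user that holds the cluster-admin role.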
OpenShift Container Platform creates a cluster administrator, kubeadmin
, after the
installation process completes.
This user has the cluster-admin
role automatically applied and is treated
as the root user for the cluster. The password is dynamically generated
and unique to your OpenShift Container Platform environment. After installation
completes, the password is provided in the installation program’s output.
For example:
INFO Install complete!
INFO Run 'export KUBECONFIG=<your working directory>/auth/kubeconfig' to manage the cluster with 'oc', the OpenShift CLI.
INFO The cluster is ready when 'oc login -u kubeadmin -p <provided>' succeeds (wait a few minutes).
INFO Access the OpenShift web-console here: https://console-openshift-console.apps.demo1.openshift4-beta-abcorp.com
INFO Login to the console with user: kubeadmin, password: <provided>
After you define an identity provider and create a new cluster-admin
user, you can remove the kubeadmin
to improve cluster security.
If you follow this procedure before another user is a cluster-admin, then OpenShift Container Platform must be reinstalled. It is not possible to undo this command. |
You must have configured at least one identity provider.
You must have added the cluster-admin
role to a user.
You must be logged in as an administrator.
Remove the kubeadmin
secrets:
$ oc delete secrets kubeadmin -n kube-system
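You can verify the removal afterward; once the secret is gone, the kubeadmin user can no longer log in and the following command reports that the secret is not found:
$ oc get secrets kubeadmin -n kube-system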
If you mirrored Operator catalogs for use with disconnected clusters, you can populate OperatorHub with the Operators from your mirrored catalogs. You can use the generated manifests from the mirroring process to create the required ImageContentSourcePolicy
and CatalogSource
objects.
After mirroring Operator catalog content to your mirror registry, create the required ImageContentSourcePolicy
(ICSP) object. The ICSP object configures nodes to translate between the image references stored in Operator manifests and the mirrored registry.
On a host with access to the disconnected cluster, create the ICSP by running the following command to specify the imageContentSourcePolicy.yaml
file in your manifests directory:
$ oc create -f <path/to/manifests/dir>/imageContentSourcePolicy.yaml
where <path/to/manifests/dir>
is the path to the manifests directory for your mirrored content.
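The generated file maps each source repository to its mirror. The exact entries depend on what you mirrored; a sketch of the general shape, with illustrative registry host names and object name, looks like this:
apiVersion: operator.openshift.io/v1alpha1
kind: ImageContentSourcePolicy
metadata:
  name: redhat-operator-index
spec:
  repositoryDigestMirrors:
  - mirrors:
    - mirror.registry.example.com:5000/olm
    source: registry.redhat.io/redhat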
You can now create a CatalogSource
object to reference your mirrored index image and Operator content.
Adding a catalog source to an OpenShift Container Platform cluster enables the discovery and installation of Operators for users.
Cluster administrators
can create a CatalogSource
object that references an index image. OperatorHub uses catalog sources to populate the user interface.
Alternatively, you can use the web console to manage catalog sources. From the Administration → Cluster Settings → Configuration → OperatorHub page, click the Sources tab, where you can create, update, delete, disable, and enable individual sources. |
You built and pushed an index image to a registry.
You have access to the cluster as a user with the cluster-admin
role.
Create a CatalogSource
object that references your index image.
If you used the oc adm catalog mirror
command to mirror your catalog to a target registry, you can use the generated catalogSource.yaml
file in your manifests directory as a starting point.
Modify the following to your specifications and save it as a catalogSource.yaml
file:
apiVersion: operators.coreos.com/v1alpha1
kind: CatalogSource
metadata:
  name: my-operator-catalog (1)
  namespace: openshift-marketplace (2)
spec:
  sourceType: grpc
  grpcPodConfig:
    securityContextConfig: <security_mode> (3)
  image: <registry>/<namespace>/redhat-operator-index:v4.15 (4)
  displayName: My Operator Catalog
  publisher: <publisher_name> (5)
  updateStrategy:
    registryPoll: (6)
      interval: 30m
1 | If you mirrored content to local files before uploading to a registry, remove any slash (/) characters from the metadata.name field to avoid an "invalid resource name" error when you create the object. |
2 | If you want the catalog source to be available globally to users in all namespaces, specify the openshift-marketplace namespace. Otherwise, you can specify a different namespace for the catalog to be scoped and available only for that namespace. |
3 | Specify the value of legacy or restricted . If the field is not set, the default value is legacy . In a future OpenShift Container Platform release, it is planned that the default value will be restricted . If your catalog cannot run with restricted permissions, it is recommended that you manually set this field to legacy . |
4 | Specify your index image. If you specify a tag after the image name, for example :v4.15 , the catalog source pod uses an image pull policy of Always , meaning the pod always pulls the image prior to starting the container. If you specify a digest, for example @sha256:<id> , the image pull policy is IfNotPresent , meaning the pod pulls the image only if it does not already exist on the node. |
5 | Specify your name or an organization name publishing the catalog. |
6 | Catalog sources can automatically check for new versions to keep up to date. |
Use the file to create the CatalogSource
object:
$ oc apply -f catalogSource.yaml
Verify the following resources are created successfully.
Check the pods:
$ oc get pods -n openshift-marketplace
NAME READY STATUS RESTARTS AGE
my-operator-catalog-6njx6 1/1 Running 0 28s
marketplace-operator-d9f549946-96sgr 1/1 Running 0 26h
Check the catalog source:
$ oc get catalogsource -n openshift-marketplace
NAME DISPLAY TYPE PUBLISHER AGE
my-operator-catalog My Operator Catalog grpc 5s
Check the package manifest:
$ oc get packagemanifest -n openshift-marketplace
NAME CATALOG AGE
jaeger-product My Operator Catalog 93s
You can now install the Operators from the OperatorHub page on your OpenShift Container Platform web console.
OperatorHub is a user interface for discovering Operators; it works in conjunction with Operator Lifecycle Manager (OLM), which installs and manages Operators on a cluster.
As a cluster administrator, you can install an Operator from OperatorHub by using the OpenShift Container Platform web console or CLI. Subscribing an Operator to one or more namespaces makes the Operator available to developers on your cluster.
During installation, you must determine the following initial settings for the Operator:
Choose All namespaces on the cluster (default) to have the Operator installed on all namespaces or choose individual namespaces, if available, to only install the Operator on selected namespaces. This example chooses All namespaces… to make the Operator available to all users and projects.
If an Operator is available through multiple channels, you can choose which channel you want to subscribe to. For example, to deploy from the stable channel, if available, select it from the list.
You can choose automatic or manual updates.
If you choose automatic updates for an installed Operator, when a new version of that Operator is available in the selected channel, Operator Lifecycle Manager (OLM) automatically upgrades the running instance of your Operator without human intervention.
If you select manual updates, when a newer version of an Operator is available, OLM creates an update request. As a cluster administrator, you must then manually approve that update request to have the Operator updated to the new version.
You can install and subscribe to an Operator from OperatorHub by using the OpenShift Container Platform web console.
Access to an OpenShift Container Platform cluster using an account with
cluster-admin
permissions.
Navigate in the web console to the Operators → OperatorHub page.
Scroll or type a keyword into the Filter by keyword box to find the Operator you want. For example, type jaeger
to find the Jaeger Operator.
You can also filter options by Infrastructure Features. For example, select Disconnected if you want to see Operators that work in disconnected environments, also known as restricted network environments.
Select the Operator to display additional information.
Choosing a Community Operator warns that Red Hat does not certify Community Operators; you must acknowledge the warning before continuing. |
Read the information about the Operator and click Install.
On the Install Operator page, configure your Operator installation:
If you want to install a specific version of an Operator, select an Update channel and Version from the lists. You can browse the various versions of an Operator across any channels it might have, view the metadata for that channel and version, and select the exact version you want to install.
The version selection defaults to the latest version for the channel selected. If the latest version for the channel is selected, the Automatic approval strategy is enabled by default. Otherwise, Manual approval is required when not installing the latest version for the selected channel. Installing an Operator with Manual approval causes all Operators installed within the namespace to function with the Manual approval strategy and all Operators are updated together. If you want to update Operators independently, install Operators into separate namespaces. |
Confirm the installation mode for the Operator:
All namespaces on the cluster (default) installs the Operator in the default openshift-operators
namespace to watch and be made available to all namespaces in the cluster. This option is not always available.
A specific namespace on the cluster allows you to choose a specific, single namespace in which to install the Operator. The Operator will only watch and be made available for use in this single namespace.
For clusters on cloud providers with token authentication enabled:
If the cluster uses AWS STS (STS Mode in the web console), enter the Amazon Resource Name (ARN) of the AWS IAM role of your service account in the role ARN field. To create the role’s ARN, follow the procedure described in Preparing AWS account.
If the cluster uses Microsoft Entra Workload ID (Workload Identity / Federated Identity Mode in the web console), add the client ID, tenant ID, and subscription ID in the appropriate fields.
For Update approval, select either the Automatic or Manual approval strategy.
If the web console shows that the cluster uses AWS STS or Microsoft Entra Workload ID, you must set Update approval to Manual. Subscriptions with automatic update approvals are not recommended because there might be permission changes to make prior to updating. Subscriptions with manual update approvals ensure that administrators have the opportunity to verify the permissions of the later version and take any necessary steps prior to update. |
Click Install to make the Operator available to the selected namespaces on this OpenShift Container Platform cluster:
If you selected a Manual approval strategy, the upgrade status of the subscription remains Upgrading until you review and approve the install plan.
After approving on the Install Plan page, the subscription upgrade status moves to Up to date.
If you selected an Automatic approval strategy, the upgrade status should resolve to Up to date without intervention.
After the upgrade status of the subscription is Up to date, select Operators → Installed Operators to verify that the cluster service version (CSV) of the installed Operator eventually shows up. The Status should eventually resolve to Succeeded in the relevant namespace.
For the All namespaces… installation mode, the status resolves to Succeeded in the openshift-operators namespace, but the status is Copied if you check in other namespaces. |
If it does not:
Check the logs in any pods in the openshift-operators
project (or other relevant namespace if A specific namespace… installation mode was selected) on the Workloads → Pods page that are reporting issues to troubleshoot further.
When the Operator is installed, the metadata indicates which channel and version are installed.
The Channel and Version dropdown menus are still available for viewing other version metadata in this catalog context. |
Instead of using the OpenShift Container Platform web console, you can install an Operator from OperatorHub by using the CLI. Use the oc
command to create or update a Subscription
object.
For SingleNamespace
install mode, you must also ensure an appropriate Operator group exists in the related namespace. An Operator group, defined by an OperatorGroup
object, selects target namespaces in which to generate required RBAC access for all Operators in the same namespace as the Operator group.
In most cases, the web console method of this procedure is preferred because it automates tasks in the background, such as handling the creation of OperatorGroup and Subscription objects automatically when installing the Operator. |
Access to an OpenShift Container Platform cluster using an account with
cluster-admin
permissions.
You have installed the OpenShift CLI (oc
).
View the list of Operators available to the cluster from OperatorHub:
$ oc get packagemanifests -n openshift-marketplace
NAME CATALOG AGE
3scale-operator Red Hat Operators 91m
advanced-cluster-management Red Hat Operators 91m
amq7-cert-manager Red Hat Operators 91m
# ...
couchbase-enterprise-certified Certified Operators 91m
crunchy-postgres-operator Certified Operators 91m
mongodb-enterprise Certified Operators 91m
# ...
etcd Community Operators 91m
jaeger Community Operators 91m
kubefed Community Operators 91m
# ...
Note the catalog for your desired Operator.
Inspect your desired Operator to verify its supported install modes and available channels:
$ oc describe packagemanifests <operator_name> -n openshift-marketplace
# ...
Kind: PackageManifest
# ...
Install Modes: (1)
Supported: true
Type: OwnNamespace
Supported: true
Type: SingleNamespace
Supported: false
Type: MultiNamespace
Supported: true
Type: AllNamespaces
# ...
Entries:
Name: example-operator.v3.7.11
Version: 3.7.11
Name: example-operator.v3.7.10
Version: 3.7.10
Name: stable-3.7 (2)
# ...
Entries:
Name: example-operator.v3.8.5
Version: 3.8.5
Name: example-operator.v3.8.4
Version: 3.8.4
Name: stable-3.8 (2)
Default Channel: stable-3.8 (3)
1 | Indicates which install modes are supported. |
2 | Example channel names. |
3 | The channel selected by default if one is not specified. |
You can print an Operator’s version and channel information in YAML format by running the following command:
$ oc get packagemanifests <operator_name> -n openshift-marketplace -o yaml
If more than one catalog is installed in a namespace, run the following command to look up the available versions and channels of an Operator from a specific catalog:
$ oc get packagemanifest \
--selector=catalog=<catalogsource_name> \
--field-selector metadata.name=<operator_name> \
-n <catalog_namespace> -o yaml
If you do not specify the Operator’s catalog, running the oc get packagemanifest and oc describe packagemanifest commands might return a package from an unexpected catalog if both catalogs provide an Operator with the same name. |
If the Operator you intend to install supports the AllNamespaces
install mode, and you choose to use this mode, skip this step, because the openshift-operators
namespace already has an appropriate Operator group in place by default, called global-operators
.
If the Operator you intend to install supports the SingleNamespace
install mode, and you choose to use this mode, you must ensure an appropriate Operator group exists in the related namespace. If one does not exist, you can create one by following these steps:
You can only have one Operator group per namespace. For more information, see "Operator groups". |
Create an OperatorGroup
object YAML file, for example operatorgroup.yaml
, for SingleNamespace
install mode:
Example OperatorGroup object for SingleNamespace install mode:
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: <operatorgroup_name>
  namespace: <namespace> (1)
spec:
  targetNamespaces:
  - <namespace> (1)
1 | For SingleNamespace install mode, use the same <namespace> value for both the metadata.namespace and spec.targetNamespaces fields. |
Create the OperatorGroup
object:
$ oc apply -f operatorgroup.yaml
Create a Subscription
object to subscribe a namespace to an Operator:
Create a YAML file for the Subscription
object, for example subscription.yaml
:
If you want to subscribe to a specific version of an Operator, set the startingCSV field to the desired version, as shown in the second example that follows. |
Example Subscription object:
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: <subscription_name>
  namespace: <namespace_per_install_mode> (1)
spec:
  channel: <channel_name> (2)
  name: <operator_name> (3)
  source: <catalog_name> (4)
  sourceNamespace: <catalog_source_namespace> (5)
  config:
    env: (6)
    - name: ARGS
      value: "-v=10"
    envFrom: (7)
    - secretRef:
        name: license-secret
    volumes: (8)
    - name: <volume_name>
      configMap:
        name: <configmap_name>
    volumeMounts: (9)
    - mountPath: <directory_name>
      name: <volume_name>
    tolerations: (10)
    - operator: "Exists"
    resources: (11)
      requests:
        memory: "64Mi"
        cpu: "250m"
      limits:
        memory: "128Mi"
        cpu: "500m"
    nodeSelector: (12)
      foo: bar
1 | For default AllNamespaces install mode usage, specify the openshift-operators namespace. Alternatively, you can specify a custom global namespace, if you have created one. For SingleNamespace install mode usage, specify the relevant single namespace. |
2 | Name of the channel to subscribe to. |
3 | Name of the Operator to subscribe to. |
4 | Name of the catalog source that provides the Operator. |
5 | Namespace of the catalog source. Use openshift-marketplace for the default OperatorHub catalog sources. |
6 | The env parameter defines a list of environment variables that must exist in all containers in the pod created by OLM. |
7 | The envFrom parameter defines a list of sources to populate environment variables in the container. |
8 | The volumes parameter defines a list of volumes that must exist on the pod created by OLM. |
9 | The volumeMounts parameter defines a list of volume mounts that must exist in all containers in the pod created by OLM. If a volumeMount references a volume that does not exist, OLM fails to deploy the Operator. |
10 | The tolerations parameter defines a list of tolerations for the pod created by OLM. |
11 | The resources parameter defines resource constraints for all the containers in the pod created by OLM. |
12 | The nodeSelector parameter defines a NodeSelector for the pod created by OLM. |
Example Subscription object with a specific starting Operator version:
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: example-operator
  namespace: example-operator
spec:
  channel: stable-3.7
  installPlanApproval: Manual (1)
  name: example-operator
  source: custom-operators
  sourceNamespace: openshift-marketplace
  startingCSV: example-operator.v3.7.10 (2)
1 | Set the approval strategy to Manual in case your specified version is superseded by a later version in the catalog. This plan prevents an automatic upgrade to a later version and requires manual approval before the starting CSV can complete the installation. |
2 | Set a specific version of an Operator CSV. |
For clusters on cloud providers with token authentication enabled, configure your Subscription
object by following these steps:
Ensure the Subscription
object is set to manual update approvals:
kind: Subscription
# ...
spec:
  installPlanApproval: Manual (1)
1 | Subscriptions with automatic update approvals are not recommended because there might be permission changes to make prior to updating. Subscriptions with manual update approvals ensure that administrators have the opportunity to verify the permissions of the later version and take any necessary steps prior to update. |
Include the relevant cloud provider-specific fields in the Subscription
object’s config
section:
If the cluster is in AWS STS mode, include the following fields:
kind: Subscription
# ...
spec:
  config:
    env:
    - name: ROLEARN
      value: "<role_arn>" (1)
1 | Include the role ARN details. |
If the cluster is in Microsoft Entra Workload ID mode, include the following fields:
kind: Subscription
# ...
spec:
  config:
    env:
    - name: CLIENTID
      value: "<client_id>" (1)
    - name: TENANTID
      value: "<tenant_id>" (2)
    - name: SUBSCRIPTIONID
      value: "<subscription_id>" (3)
1 | Include the client ID. |
2 | Include the tenant ID. |
3 | Include the subscription ID. |
Create the Subscription
object by running the following command:
$ oc apply -f subscription.yaml
If you set the installPlanApproval
field to Manual
, manually approve the pending install plan to complete the Operator installation. For more information, see "Manually approving a pending Operator update".
At this point, OLM is now aware of the selected Operator. A cluster service version (CSV) for the Operator should appear in the target namespace, and APIs provided by the Operator should be available for creation.
Check the status of the Subscription
object for your installed Operator by running the following command:
$ oc describe subscription <subscription_name> -n <namespace>
If you created an Operator group for SingleNamespace
install mode, check the status of the OperatorGroup
object by running the following command:
$ oc describe operatorgroup <operatorgroup_name> -n <namespace>
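As a further check, you can list the cluster service versions and install plans that OLM created in the target namespace; the namespace value below matches the one used in your Subscription object:
$ oc get csv -n <namespace>
$ oc get installplan -n <namespace>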