How can I create limits for users and projects?
You can place limits within your OpenShift cluster using ResourceQuotas and LimitRanges. These quotas and limits allow you to control pod and container limits, object counts, and compute resources. Currently, they apply only to projects, not to users. However, you can set a quota-like limit on how many project requests a user can make.
Creating a quota in a project to limit the number of pods
To create a quota in the "awesomeproject" project that limits the number of pods that can be created to a maximum of 10:
Create a resource-quota.yaml file with the following contents:
apiVersion: v1
kind: ResourceQuota
metadata:
  name: resource-quota
spec:
  hard:
    pods: "10"
Create the quota using the file you just wrote to apply it to the "awesomeproject":
$ oc create -f resource-quota.yaml -n awesomeproject
After the quota has been in effect for a little while, you can view the usage statistics for the hard limit set on pods.
If required, list the quotas defined in the project to see the names of all defined quotas:
$ oc get quota -n awesomeproject
NAME             AGE
resource-quota   39m
Describe the resource quota for which you want statistics:
$ oc describe quota resource-quota -n awesomeproject
Name:       resource-quota
Namespace:  awesomeproject
Resource    Used    Hard
--------    ----    ----
pods        3       10
Optionally, you can configure the quota synchronization period, which controls how long to wait before restoring quota usage after resources are deleted.
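For example, a minimal master-config.yaml sketch, assuming the period is passed through the Kubernetes controller arguments (the 10s value is illustrative):

kubernetesMasterConfig:
  controllerArguments:
    resource-quota-sync-period:
    - "10s"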
If you want to remove an active quota so that its limits are no longer enforced in the project:
$ oc delete quota <quota_name>
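Using the example from this procedure, that would be:

$ oc delete quota resource-quota -n awesomeproject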
The procedure above is just a basic example. The following are references to all the available options for limits and quotas:
This LimitRange example explains all the container limits and pod limits that you can place within your project:
apiVersion: "v1"
kind: "LimitRange"
metadata:
name: "core-resource-limits" (1)
spec:
limits:
- type: "Pod"
max:
cpu: "2" (2)
memory: "1Gi" (3)
min:
cpu: "200m" (4)
memory: "6Mi" (5)
- type: "Container"
max:
cpu: "2" (6)
memory: "1Gi" (7)
min:
cpu: "100m" (8)
memory: "4Mi" (9)
default:
cpu: "300m" (10)
memory: "200Mi" (11)
defaultRequest:
cpu: "200m" (12)
memory: "100Mi" (13)
maxLimitRequestRatio:
cpu: "10" (14)
1 | The name of the limit range object. |
2 | The maximum amount of CPU that a pod can request on a node across all containers. |
3 | The maximum amount of memory that a pod can request on a node across all containers. |
4 | The minimum amount of CPU that a pod can request on a node across all containers. |
5 | The minimum amount of memory that a pod can request on a node across all containers. |
6 | The maximum amount of CPU that a single container in a pod can request. |
7 | The maximum amount of memory that a single container in a pod can request. |
8 | The minimum amount of CPU that a single container in a pod can request. |
9 | The minimum amount of memory that a single container in a pod can request. |
10 | The default amount of CPU that a container will be limited to use if not specified. |
11 | The default amount of memory that a container will be limited to use if not specified. |
12 | The default amount of CPU that a container will request to use if not specified. |
13 | The default amount of memory that a container will request to use if not specified. |
14 | The maximum ratio of a container's CPU limit to its CPU request. This caps how far a container can burst above the CPU it requested. |
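To apply this LimitRange, create it in the project the same way as the quota earlier (the file name is illustrative):

$ oc create -f core-resource-limits.yaml -n awesomeproject

This next LimitRange example explains the image and image stream limits that you can place within your project: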
apiVersion: "v1"
kind: "LimitRange"
metadata:
name: "openshift-resource-limits"
spec:
limits:
- type: openshift.io/Image
max:
storage: 1Gi (1)
- type: openshift.io/ImageStream
max:
openshift.io/image-tags: 20 (2)
openshift.io/images: 30 (3)
1 | The maximum size of an image that can be pushed to an internal registry. |
2 | The maximum number of unique image tags per image stream’s spec. |
3 | The maximum number of unique image references per image stream’s status. |
These ResourceQuota examples explain all the object counts and compute resources that you can limit within your project:
apiVersion: v1
kind: ResourceQuota
metadata:
  name: core-object-counts
spec:
  hard:
    configmaps: "10" (1)
    persistentvolumeclaims: "4" (2)
    replicationcontrollers: "20" (3)
    secrets: "10" (4)
    services: "10" (5)
1 | The total number of ConfigMap objects that can exist in the project. |
2 | The total number of persistent volume claims (PVCs) that can exist in the project. |
3 | The total number of replication controllers that can exist in the project. |
4 | The total number of secrets that can exist in the project. |
5 | The total number of services that can exist in the project. |
apiVersion: v1
kind: ResourceQuota
metadata:
  name: openshift-object-counts
spec:
  hard:
    openshift.io/imagestreams: "10" (1)
1 | The total number of image streams that can exist in the project. |
apiVersion: v1
kind: ResourceQuota
metadata:
  name: compute-resources
spec:
  hard:
    pods: "4" (1)
    requests.cpu: "1" (2)
    requests.memory: 1Gi (3)
    limits.cpu: "2" (4)
    limits.memory: 2Gi (5)
1 | The total number of pods in a non-terminal state that can exist in the project. |
2 | Across all pods in a non-terminal state, the sum of CPU requests cannot exceed 1 core. |
3 | Across all pods in a non-terminal state, the sum of memory requests cannot exceed 1Gi. |
4 | Across all pods in a non-terminal state, the sum of CPU limits cannot exceed 2 cores. |
5 | Across all pods in a non-terminal state, the sum of memory limits cannot exceed 2Gi. |
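To illustrate how pods are counted against such a quota, here is a minimal pod sketch (the pod name and image are hypothetical). Its requests and limits are summed with those of all other non-terminal pods in the project when the quota is evaluated:

apiVersion: v1
kind: Pod
metadata:
  name: quota-example    # hypothetical name
spec:
  containers:
  - name: app
    image: nginx          # hypothetical image
    resources:
      requests:
        cpu: 100m         # counts toward requests.cpu
        memory: 128Mi     # counts toward requests.memory
      limits:
        cpu: 200m         # counts toward limits.cpu
        memory: 256Mi     # counts toward limits.memory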
You can limit the number of projects that a user may request by categorizing users with label selectors with the oc label command. A label selector consists of the label name and the label value:

label=value

Once users are labeled, you must configure the ProjectRequestLimit admission control plug-in in the master-config.yaml file. This allows some users to create more projects than others, and you can define different values (or levels) for each label.
Limiting how many projects a user can request by defining three different privilege levels
The label is named level, and the possible values are bronze, silver, gold, and platinum. Platinum users do not have a maximum number of project requests, gold users can request up to 10 projects, silver users up to 7 projects, bronze users up to 5 projects, and any users without a label are by default only allowed 2 projects.

Each user can only have one value per label. For example, a user cannot be both gold and silver for the level label. However, when configuring the master-config.yaml file, you could select users that have any value for a label with a wildcard; for example, level=*.
To define privilege levels for project requests:
Apply label selectors to users. For example, to apply the level label selector with a value of bronze:
$ oc label user <user_name> level=bronze
Repeat this step for all bronze users, and then for the other levels.
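For example, a small shell sketch for labeling several users in one pass (the usernames are hypothetical):

$ for user in alice bob carol; do oc label user $user level=bronze; done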
Optionally, verify the previous step by viewing the list of labeled users for each value:
$ oc get users -l level=bronze
$ oc get users -l level=silver
$ oc get users -l level=gold
$ oc get users -l level=platinum
If you need to remove a label from a user to make a correction:
$ oc label user <user_name> level-
Modify the master-config.yaml file to define project limits for this label with the numbers stated in this use case. Find the admissionConfig line and create the configuration below it:
admissionConfig:
  pluginConfig:
    ProjectRequestLimit:
      configuration:
        apiVersion: v1
        kind: ProjectRequestLimitConfig
        limits:
        - selector:
            level: platinum
        - selector:
            level: gold
          maxProjects: 10
        - selector:
            level: silver
          maxProjects: 7
        - selector:
            level: bronze
          maxProjects: 5
        - maxProjects: 2
Restart the master host for the changes to take effect.
$ systemctl restart atomic-openshift-master
If you use a custom project template together with this plug-in, ensure that the template keeps the openshift.io/requester annotation (ProjectRequester = "openshift.io/requester"), because project ownership is established using that annotation. |
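For reference, a minimal sketch of the relevant portion of a custom project request template (trimmed; parameter names follow the default bootstrap template):

apiVersion: v1
kind: Template
metadata:
  name: project-request
objects:
- apiVersion: v1
  kind: Project
  metadata:
    annotations:
      openshift.io/requester: ${PROJECT_REQUESTING_USER}  # must be kept for the limit to apply
    name: ${PROJECT_NAME}
parameters:
- name: PROJECT_NAME
- name: PROJECT_REQUESTING_USER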
If you configure a project to have ResourceQuota restrictions, the amount of each defined quota currently in use is stored on the ResourceQuota object itself. In that case, you can list the quotas defined in the project:

$ oc get quota

However, this does not tell you how much of each quota is actually being consumed.
To determine what is actually being consumed, use the oc describe command:
$ oc describe quota <quota-name>
Alternatively, you can set up cluster metrics for more detailed statistics.
When a user first logs in, there is a default set of permissions that is applied to that user. The scope of permissions that a user can have is controlled by the various types of roles within OpenShift:
ClusterRoles
ClusterRoleBindings
Roles (project-scoped)
RoleBindings (project-scoped)
You may want to modify the default set of permissions. In order to do this, it’s important to understand the default groups and roles assigned, and to be aware of the roles and users bound to each project or the entire cluster.
There are special groups that are assigned to users. You can target users with these groups, but you cannot modify them. These special groups are as follows:
Group | Description |
---|---|
system:authenticated | This is assigned to all users who are identifiable to the API. Everyone who is not in system:unauthenticated is in this group. |
system:authenticated:oauth | This is assigned to all users who have identified using an oauth token issued by the embedded oauth server. This is not applied to service accounts (they use service account tokens), or certificate users. |
system:unauthenticated | This is assigned to users who have not presented credentials. Invalid credentials are rejected with a 401 error, so this is specifically users who did not try to authenticate at all. |
You may find it helpful to target users with the special groups listed above. For example, you could share a template with all users by granting system:authenticated access to the template.
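A minimal sketch, assuming the template lives in a project named my-templates and that the built-in view role provides sufficient read access:

$ oadm policy add-role-to-group view system:authenticated -n my-templates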
The "default" permissions of users are defined by which roles are bound to the
system:authenticated
and sytem:authenticated:oauth
groups. As mentioned
above, you are not able to modify membership to these groups, but you can
change the roles bound to these groups.
For example, to bind a role to the system:authenticated group for all projects in the cluster:
$ oadm policy add-cluster-role-to-group <role> system:authenticated
Currently, by default the system:authenticated and system:authenticated:oauth groups receive the following roles:
Role | Description |
---|---|
shared-resource-viewer | For the openshift project. Allows users to see the templates and pull the images in that project. |
basic-user | For the entire cluster. Allows users to see their own account, check for information about requesting projects, see which projects they can view, and check their own permissions. |
self-provisioner | Allows users to request projects. |
system:oauth-token-deleter | Allows users to delete any oauth token for which they know the details. |
cluster-status | Allows users to see which APIs are enabled, and basic API server information such as versions. |
system:webhook | Allows users to hit the webhooks for a build if they have enough additional information. |
To view a list of all users that are bound to the project and their roles:
$ oc get rolebindings
NAME                    ROLE                    USERS     GROUPS                                 SERVICE ACCOUNTS   SUBJECTS
system:image-pullers    /system:image-puller              system:serviceaccounts:asdfasdf4asdf
admin                   /admin                  jsmith
system:deployers        /system:deployer                                                         deployer
system:image-builders   /system:image-builder                                                    builder
To view a list of users and what they have access to across the entire cluster:
$ oc get clusterrolebindings
NAME                                            ROLE                                        USERS           GROUPS                                          SERVICE ACCOUNTS                                   SUBJECTS
system:job-controller                           /system:job-controller                                                                                      openshift-infra/job-controller
system:build-controller                         /system:build-controller                                                                                    openshift-infra/build-controller
system:node-admins                              /system:node-admin                          system:master   system:node-admins
registry-registry-role                          /system:registry                                                                                            default/registry
system:pv-provisioner-controller                /system:pv-provisioner-controller                                                                           openshift-infra/pv-provisioner-controller
basic-users                                     /basic-user                                                 system:authenticated
system:namespace-controller                     /system:namespace-controller                                                                                openshift-infra/namespace-controller
system:discovery-binding                        /system:discovery                                           system:authenticated, system:unauthenticated
system:build-strategy-custom-binding            /system:build-strategy-custom                               system:authenticated
cluster-status-binding                          /cluster-status                                             system:authenticated, system:unauthenticated
system:webhooks                                 /system:webhook                                             system:authenticated, system:unauthenticated
system:gc-controller                            /system:gc-controller                                                                                       openshift-infra/gc-controller
cluster-readers                                 /cluster-reader                                             system:cluster-readers
system:pv-recycler-controller                   /system:pv-recycler-controller                                                                              openshift-infra/pv-recycler-controller
system:daemonset-controller                     /system:daemonset-controller                                                                                openshift-infra/daemonset-controller
cluster-admins                                  /cluster-admin                              system:admin    system:cluster-admins
system:hpa-controller                           /system:hpa-controller                                                                                      openshift-infra/hpa-controller
system:build-strategy-source-binding            /system:build-strategy-source                               system:authenticated
system:replication-controller                   /system:replication-controller                                                                              openshift-infra/replication-controller
system:sdn-readers                              /system:sdn-reader                                          system:nodes
system:build-strategy-docker-binding            /system:build-strategy-docker                               system:authenticated
system:routers                                  /system:router                                              system:routers
system:oauth-token-deleters                     /system:oauth-token-deleter                                 system:authenticated, system:unauthenticated
system:node-proxiers                            /system:node-proxier                                        system:nodes
system:nodes                                    /system:node                                                system:nodes
self-provisioners                               /self-provisioner                                           system:authenticated:oauth
system:service-serving-cert-controller          /system:service-serving-cert-controller                                                                     openshift-infra/service-serving-cert-controller
system:registrys                                /system:registry                                            system:registries
system:pv-binder-controller                     /system:pv-binder-controller                                                                                openshift-infra/pv-binder-controller
system:build-strategy-jenkinspipeline-binding   /system:build-strategy-jenkinspipeline                      system:authenticated
system:deployment-controller                    /system:deployment-controller                                                                               openshift-infra/deployment-controller
system:masters                                  /system:master                                              system:masters
system:service-load-balancer-controller         /system:service-load-balancer-controller                                                                    openshift-infra/service-load-balancer-controller
These commands can generate huge lists, so you may want to pipe the output into a text file that you can search through more easily.
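For example, to save the cluster-wide list and search it:

$ oc get clusterrolebindings > clusterrolebindings.txt
$ grep system:authenticated clusterrolebindings.txt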
You can define roles (or permissions) for a user before their initial log in so they can start working immediately. You can assign many different types of roles to users such as admin, basic-user, self-provisioner, and cluster-reader.
For a complete list of all available roles:
$ oadm policy
The following section includes examples of some common operations related to adding (binding) and removing roles from users and groups. For a complete list of available local policy operations, see Managing Role Bindings.
To bind a role to a user for the current project:
$ oadm policy add-role-to-user <role> <user_name>
You can specify a project with the -n flag.
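For instance, using names that appear elsewhere in this section (the combination is illustrative):

$ oadm policy add-role-to-user edit jsmith -n awesomeproject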
To remove a role from a user for the current project:
$ oadm policy remove-role-from-user <role> <user_name>
You can specify a project with the -n flag.
To bind a cluster role to a user for all projects:
$ oadm policy add-cluster-role-to-user <role> <user_name>
To remove a cluster role from a user for all projects:
$ oadm policy remove-cluster-role-from-user <role> <user_name>
To bind a role to a specified group in the current project:
$ oadm policy add-role-to-group <role> <groupname>
You can specify a project with the -n flag.
To remove a role from a specified group in the current project:
$ oadm policy remove-role-from-group <role> <groupname>
You can specify a project with the -n flag.
Templates are project-scoped resources, so you cannot create them to be readily available at a cluster level. The easiest way to share templates across the entire cluster is with the openshift project, which by default is already set up to share templates. The templates can be annotated, and are displayed in the web console where users can access them. Users have get access only to the templates and images in this project, via the shared-resource-viewer role.

The shared-resource-viewer role exists to allow templates to be shared across project boundaries. Users with this role have the ability to see all existing templates and pull images from that project. However, the user still needs to know which project to look in, because they will not be able to view the project in their oc get projects list.
By default, this role is granted to the system:authenticated group in the openshift project. This allows users to process the specified template from the openshift project and create the items in the current project:
$ oc process openshift//<template-name> | oc create -f -
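For example, assuming the default mysql-ephemeral template is present in the openshift project:

$ oc process openshift//mysql-ephemeral | oc create -f -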
You can also add the registry-viewer role to a user, allowing them to view and pull images from a project:
$ oc policy add-role-to-user registry-viewer <user-name>
Cluster administrator is a very powerful role, which has ultimate control within the cluster, including the power to destroy that cluster. You can grant this role to other users if they absolutely need to have ultimate control. However, you may first want to examine the other available roles if you do not want to create such a powerful user. For example, admin is a constrained role whose holders can do many things inside of their project, but cannot affect (or destroy) the entire cluster.
To give a user the basic administrator role within a project:
$ oadm policy add-role-to-user admin <user_name> -n <project_name>
To give a user the cluster administrator role, with ultimate control over the cluster:
Be very careful when granting the cluster administrator role to a user. Ensure that the user truly needs that level of power within the cluster. When OpenShift is first installed, a certificate-based user is created and the credentials are saved in admin.kubeconfig. This cluster administrator user can do absolutely anything to any resource on the entire cluster, which can result in destruction if not used carefully. |
$ oadm policy add-cluster-role-to-user cluster-admin <user_name>