OpenShift Container Platform provides a container image for running Jenkins. This image provides a Jenkins server instance, which can be used to set up a basic flow for continuous testing, integration, and delivery.
This image also includes a sample Jenkins job, which triggers a new build of a BuildConfig defined in OpenShift Container Platform, tests the output of that build, and then on successful build, retags the output to indicate the build is ready for production.
OpenShift Container Platform follows the LTS releases of Jenkins. Currently, OpenShift Container Platform provides versions 1.x and 2.x.
These images come in two flavors, depending on your needs:
RHEL 7
CentOS 7
RHEL 7 Based Images
The RHEL 7 images are available through the Red Hat Registry:
$ docker pull registry.access.redhat.com/openshift3/jenkins-1-rhel7
$ docker pull registry.access.redhat.com/openshift3/jenkins-2-rhel7
CentOS 7 Based Images
These images are available on Docker Hub:
$ docker pull openshift/jenkins-1-centos7
$ docker pull openshift/jenkins-2-centos7
To use these images, you can either access them directly from these registries or push them into your OpenShift Container Platform Docker registry. Additionally, you can create an ImageStream that points to the image, either in your Docker registry or at the external location. Your OpenShift Container Platform resources can then reference the ImageStream. You can find example ImageStream definitions for all the provided OpenShift Container Platform images.
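For example, an ImageStream that points at the external CentOS image might look like the following sketch (the ImageStream name is an assumption for illustration):

```yaml
apiVersion: v1
kind: ImageStream
metadata:
  name: jenkins-2-centos7   # assumed name
spec:
  tags:
  - name: latest
    from:
      kind: DockerImage
      name: docker.io/openshift/jenkins-2-centos7:latest
```

Resources in your project can then reference jenkins-2-centos7:latest instead of the external image location.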
You can manage Jenkins authentication in two ways:
OpenShift Container Platform OAuth authentication provided by the OpenShift Login plug-in.
Standard authentication provided by Jenkins.
OAuth authentication is activated by configuring the Configure Global Security panel in the Jenkins UI, or by setting the OPENSHIFT_ENABLE_OAUTH environment variable on the Jenkins Deployment Config to anything other than false. This activates the OpenShift Login plug-in, which retrieves the configuration information from pod data or by interacting with the OpenShift Container Platform API server.
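As a minimal sketch, assuming the deployment configuration is named jenkins (as in the provided templates), the variable can be set directly in the container spec:

```yaml
# Fragment of a Jenkins DeploymentConfig; the "jenkins" names are assumptions.
spec:
  template:
    spec:
      containers:
      - name: jenkins
        env:
        - name: OPENSHIFT_ENABLE_OAUTH
          value: "true"
```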
Valid credentials are controlled by the OpenShift Container Platform identity provider.
For example, if Allow All is the default identity provider, you can provide any non-empty string for both the user name and password.
Jenkins supports both browser and non-browser access.
Valid users are automatically added to the Jenkins authorization matrix at log in, where OpenShift Container Platform Roles dictate the specific Jenkins permissions the user will have. Users with the admin role will have the traditional Jenkins administrative user permissions. Users with the edit or view role will have progressively fewer permissions. See the Jenkins image source repository README for the specifics on the OpenShift roles to Jenkins permissions mappings.
A Jenkins user's permissions can be changed after the user is initially established. The OpenShift Login plug-in polls the OpenShift Container Platform API server for permissions and updates the permissions stored in Jenkins for each user with the permissions retrieved from OpenShift Container Platform. If the Jenkins UI is used to update permissions for a Jenkins user, the permission changes are overwritten the next time the plug-in polls OpenShift Container Platform.
You can control how often the polling occurs with the OPENSHIFT_PERMISSIONS_POLL_INTERVAL environment variable. The default polling interval is five minutes.
The easiest way to create a new Jenkins service using OAuth authentication is to use a template as described below.
Jenkins authentication is used by default if the image is run directly, without using a template.
The first time Jenkins starts, the configuration is created along with the administrator user and password. The default user credentials are admin and password. Configure the default password by setting the JENKINS_PASSWORD environment variable when using (and only when using) standard Jenkins authentication.
To create a new Jenkins application using standard Jenkins authentication:
$ oc new-app -e \
    JENKINS_PASSWORD=<password> \
    openshift/jenkins-1-centos7
The Jenkins server can be configured with the following environment variables:
Variable name | Description |
---|---|
JENKINS_PASSWORD | The password for the admin user when using standard Jenkins authentication. |
OPENSHIFT_ENABLE_OAUTH | Determines whether the OpenShift Login plug-in manages authentication when logging in to Jenkins. Enabled when set to any non-empty value other than "false". |
OPENSHIFT_PERMISSIONS_POLL_INTERVAL | Specifies in seconds how often the OpenShift Login plug-in polls OpenShift Container Platform for the permissions associated with each user defined in Jenkins. |
If you run Jenkins somewhere other than as a deployment within the same project, you must provide Jenkins with an access token so it can access that project.
Identify the secret for the service account that has appropriate permissions to access the project Jenkins needs to access:
$ oc describe serviceaccount default
Name:           default
Labels:         <none>
Secrets:        {  default-token-uyswp    }
                {  default-dockercfg-xcr3d    }
Tokens:         default-token-izv1u
                default-token-uyswp
In this case, the secret is named default-token-uyswp.
Retrieve the token from the secret:
$ oc describe secret <secret name from above>    # e.g. default-token-izv1u
Name:           default-token-izv1u
Labels:         <none>
Annotations:    kubernetes.io/service-account.name=default,kubernetes.io/service-account.uid=32f5b661-2a8f-11e5-9528-3c970e3bf0b7
Type:           kubernetes.io/service-account-token
Data
====
ca.crt: 1066 bytes
token:  eyJhbGc..<content cut>....wRA
The token field contains the token value Jenkins needs to access the project.
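To script this retrieval, one option is a jsonpath query against the secret identified in the previous step (the secret name below is the one from the example output and will differ in your cluster):

```shell
# Print the decoded token for the default service account's token secret.
$ oc get secret default-token-uyswp \
    -o jsonpath='{.data.token}' | base64 -d
```

The decoded value can then be supplied to Jenkins as the credential for accessing the project.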
OpenShift Container Platform provides templates to make creating a new Jenkins service easy. The templates provide parameter fields to define all the environment variables (such as the password) with predefined defaults. The Jenkins templates should have been registered in the default openshift project by your cluster administrator during the initial cluster setup. See Loading the Default Image Streams and Templates for more details, if required.
The two available templates both define a deployment configuration and a service. The templates differ in their storage strategy, which affects whether or not the Jenkins content persists across a pod restart.
A pod may be restarted when it is moved to another node, or when an update of the deployment configuration triggers a redeployment.
jenkins-ephemeral uses ephemeral storage. On pod restart, all data is lost. This template is useful for development or testing only.

jenkins-persistent uses a persistent volume store. Data survives a pod restart. To use a persistent volume store, the cluster administrator must define a persistent volume pool in the OpenShift Container Platform deployment.
Once you have selected which template you want, you must instantiate the template to be able to use Jenkins:
Ensure that the default image streams and templates are already installed.
Create a new Jenkins application using:
A persistent volume:
$ oc new-app jenkins-persistent
Or an emptyDir type volume (where configuration does not persist across pod restarts):
$ oc new-app jenkins-ephemeral
If you instantiate the template against releases prior to v3.4 of OpenShift Container Platform, standard Jenkins authentication is used, and the default admin account with the password password applies.
To customize the official OpenShift Container Platform Jenkins image, you have two options:
Use Docker layering.
Use the image as a Source-To-Image builder, described here.
You can use S2I to copy your custom Jenkins job definitions, add additional plug-ins, or replace the provided config.xml file with your own custom configuration.
In order to include your modifications in the Jenkins image, you need to have a Git repository with the following directory structure:
plugins
This directory contains those binary Jenkins plug-ins you want to copy into Jenkins.

plugins.txt
This file lists the plug-ins you want to install:

pluginId:pluginVersion

configuration/jobs
This directory contains the Jenkins job definitions.

configuration/config.xml
This file contains your custom Jenkins configuration.

The contents of the configuration/ directory will be copied into the /var/lib/jenkins/ directory, so you can also include additional files, such as credentials.xml, there.
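Putting these together, a customization repository might be laid out as follows (the plug-in and job names are purely illustrative):

```
.
├── plugins
│   └── some-plugin.jpi          (hypothetical binary plug-in)
├── plugins.txt
└── configuration
    ├── config.xml
    └── jobs
        └── sample-job
            └── config.xml
```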
The following is an example build configuration that customizes the Jenkins image in OpenShift Container Platform:
apiVersion: v1
kind: BuildConfig
metadata:
  name: custom-jenkins-build
spec:
  source: (1)
    git:
      uri: https://github.com/custom/repository
    type: Git
  strategy: (2)
    sourceStrategy:
      from:
        kind: ImageStreamTag
        name: jenkins:latest
        namespace: openshift
    type: Source
  output: (3)
    to:
      kind: ImageStreamTag
      name: custom-jenkins:latest
1 The source field defines the source Git repository with the layout described above.
2 The strategy field defines the original Jenkins image to use as a source image for the build.
3 The output field defines the resulting, customized Jenkins image that you can use in a deployment configuration instead of the official Jenkins image.
The official OpenShift Container Platform Jenkins image includes the pre-installed Kubernetes plug-in that allows Jenkins slaves to be dynamically provisioned on multiple container hosts using Kubernetes and OpenShift Container Platform.
To use the Kubernetes plug-in, OpenShift Container Platform provides three images suitable for use as Jenkins slaves: the Base, Maven, and Node.js images.
The first is a base image for Jenkins slaves:
It pulls in both the required tools (headless Java, the Jenkins JNLP client) and the useful ones (including git, tar, zip, and nss among others).
It establishes the JNLP slave agent as the entrypoint.
It includes the oc client tooling for invoking command-line operations from within Jenkins jobs.
It provides Dockerfiles for both CentOS and RHEL images.
Two additional images that extend the base image are also provided:
Both the Maven and Node.js slave images are configured as Kubernetes Pod Template images within the OpenShift Container Platform Jenkins image’s configuration for the Kubernetes plug-in. That configuration includes labels for each of the images that can be applied to any of your Jenkins jobs under their "Restrict where this project can be run" setting. If the label is applied, execution of the given job will be done under an OpenShift Container Platform pod running the respective slave image.
The Maven and Node.js Jenkins slave images provide Dockerfiles for both CentOS and RHEL that you can reference when building new slave images. Also note the contrib and contrib/bin subdirectories, which allow for the insertion of configuration files and executable scripts for your image.
The Jenkins image also provides auto-discovery and auto-configuration of slave images for the Kubernetes plug-in. With the OpenShift Sync plug-in, on start-up the Jenkins image searches the project in which it is running, or the projects specifically listed in the plug-in's configuration, for the following:
Image streams that have the label role set to jenkins-slave.

Image stream tags that have the annotation role set to jenkins-slave.

ConfigMaps that have the label role set to jenkins-slave.
When it finds an image stream with the appropriate label, or image stream tag with the appropriate annotation, it generates the corresponding Kubernetes plug-in configuration so you can assign your Jenkins jobs to run in a pod running the container image provided by the image stream.
The name and image references of the image stream or image stream tag are mapped to the name and image fields in the Kubernetes plug-in pod template. You can control the label field of the Kubernetes plug-in pod template by setting an annotation on the image stream or image stream tag object with the key slave-label. Otherwise, the name is used as the label.
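For example, to mark a hypothetical image stream named my-slave for discovery and give its pod template a custom label:

```shell
# Label the image stream so the sync plug-in generates a pod template for it.
$ oc label imagestream my-slave role=jenkins-slave

# Optionally override the pod template's label (defaults to the name).
$ oc annotate imagestream my-slave slave-label=my-custom-label
```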
When it finds a ConfigMap with the appropriate label, it assumes that the values in the key-value data payload of the ConfigMap contain XML consistent with the configuration format for Jenkins and the Kubernetes plug-in pod templates. A key differentiator to note when using ConfigMaps, instead of image streams or image stream tags, is that you can control all the various fields of the Kubernetes plug-in pod template.
The following is an example ConfigMap:
apiVersion: v1
items:
- apiVersion: v1
  data:
    template1: |-
      <org.csanchez.jenkins.plugins.kubernetes.PodTemplate>
        <inheritFrom></inheritFrom>
        <name>template1</name>
        <instanceCap>2147483647</instanceCap>
        <idleMinutes>0</idleMinutes>
        <label>template1</label>
        <serviceAccount>jenkins</serviceAccount>
        <nodeSelector></nodeSelector>
        <volumes/>
        <containers>
          <org.csanchez.jenkins.plugins.kubernetes.ContainerTemplate>
            <name>jnlp</name>
            <image>openshift/jenkins-slave-maven-centos7</image>
            <privileged>false</privileged>
            <alwaysPullImage>false</alwaysPullImage>
            <workingDir>/tmp</workingDir>
            <command></command>
            <args>${computer.jnlpmac} ${computer.name}</args>
            <ttyEnabled>false</ttyEnabled>
            <resourceRequestCpu></resourceRequestCpu>
            <resourceRequestMemory></resourceRequestMemory>
            <resourceLimitCpu></resourceLimitCpu>
            <resourceLimitMemory></resourceLimitMemory>
            <envVars/>
          </org.csanchez.jenkins.plugins.kubernetes.ContainerTemplate>
        </containers>
        <envVars/>
        <annotations/>
        <imagePullSecrets/>
        <nodeProperties/>
      </org.csanchez.jenkins.plugins.kubernetes.PodTemplate>
    template2: |-
      <org.csanchez.jenkins.plugins.kubernetes.PodTemplate>
        <inheritFrom></inheritFrom>
        <name>template2</name>
        <instanceCap>2147483647</instanceCap>
        <idleMinutes>0</idleMinutes>
        <label>template2</label>
        <serviceAccount>jenkins</serviceAccount>
        <nodeSelector></nodeSelector>
        <volumes/>
        <containers>
          <org.csanchez.jenkins.plugins.kubernetes.ContainerTemplate>
            <name>jnlp</name>
            <image>openshift/jenkins-slave-maven-centos7</image>
            <privileged>false</privileged>
            <alwaysPullImage>false</alwaysPullImage>
            <workingDir>/tmp</workingDir>
            <command></command>
            <args>${computer.jnlpmac} ${computer.name}</args>
            <ttyEnabled>false</ttyEnabled>
            <resourceRequestCpu></resourceRequestCpu>
            <resourceRequestMemory></resourceRequestMemory>
            <resourceLimitCpu></resourceLimitCpu>
            <resourceLimitMemory></resourceLimitMemory>
            <envVars/>
          </org.csanchez.jenkins.plugins.kubernetes.ContainerTemplate>
        </containers>
        <envVars/>
        <annotations/>
        <imagePullSecrets/>
        <nodeProperties/>
      </org.csanchez.jenkins.plugins.kubernetes.PodTemplate>
  kind: ConfigMap
  metadata:
    labels:
      role: jenkins-slave
    name: jenkins-slave
    namespace: myproject
kind: List
metadata: {}
resourceVersion: ""
selfLink: ""
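A ConfigMap like the one above can be created from a saved definition (jenkins-slave-templates.yaml is a hypothetical file name):

```shell
# Create the labeled ConfigMap in the project Jenkins monitors.
$ oc create -f jenkins-slave-templates.yaml -n myproject
```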
After startup, the OpenShift Sync plug-in monitors the API server of OpenShift Container Platform for updates to ImageStreams, ImageStreamTags, and ConfigMaps and adjusts the configuration of the Kubernetes plug-in.
In particular, the following rules apply:

Removal of the label or annotation from the ConfigMap, ImageStream, or ImageStreamTag will result in the deletion of any existing PodTemplate from the configuration of the Kubernetes plug-in.

Similarly, if those objects are removed, the corresponding configuration is removed from the Kubernetes plug-in.
Conversely, either the creation of appropriately labeled or annotated ConfigMap, ImageStream, or ImageStreamTag objects, or the adding of labels after their initial creation, leads to the creation of a PodTemplate in the Kubernetes plug-in configuration.
In the case of the PodTemplate via ConfigMap form, changes to the ConfigMap data for the PodTemplate will be applied to the PodTemplate settings in the Kubernetes plug-in configuration, and will override any changes made to the PodTemplate via the Jenkins UI in the interim between changes to the ConfigMap.
To use a container image as a Jenkins slave, the image must run the slave agent as an entrypoint. For more details about this, refer to the official Jenkins documentation.
For more details on the sample job included in this image, see this tutorial.
The Jenkins image’s list of pre-installed plug-ins includes the OpenShift Pipeline plug-in, which assists in the creation of CI/CD workflows in Jenkins that run against an OpenShift Container Platform server. A series of build steps, post-build actions, and SCM-style polling are provided, which equate to administrative and operational actions on the OpenShift Container Platform server and the API artifacts hosted there.
In addition to being accessible from the classic "freestyle" form of Jenkins job, the build steps as of version 1.0.14 of the OpenShift Container Platform Pipeline Plug-in are also available to Jenkins Pipeline jobs via the DSL extension points provided by the Jenkins Pipeline Plug-in. The OpenShift Jenkins Pipeline build strategy sample illustrates how to use the OpenShift Pipeline plugin DSL versions of its steps.
The sample Jenkins job that is pre-configured in the Jenkins image utilizes the OpenShift Container Platform pipeline plug-in and serves as an example of how to leverage the plug-in for creating CI/CD flows for OpenShift Container Platform in Jenkins.
See the plug-in's README for a detailed description of what is available.
The experiences gained working with users of the OpenShift Pipeline plug-in, coupled with the rapid evolution of both Jenkins and OpenShift, have provided valuable insight into how to integrate OpenShift Container Platform from Jenkins jobs.
As such, the new experimental OpenShift Client Plug-in for Jenkins is now offered as a technical preview and is included in the OpenShift Jenkins images on CentOS (docker.io/openshift/jenkins-1-centos7:latest and docker.io/openshift/jenkins-2-centos7:latest). The plug-in is also available from the Jenkins Update Center. The OpenShift Client plug-in will eventually replace the OpenShift Pipeline plug-in as the tool for OpenShift integration from Jenkins jobs. The OpenShift Client Plug-in provides:
A Fluent-style syntax for use in Jenkins Pipelines.
Use of and exposure to any option available with oc.
Integration with Jenkins credentials and clusters.
Continued support for classic Jenkins Freestyle jobs.
To facilitate the OpenShift Container Platform Pipeline build strategy for integration between Jenkins and OpenShift Container Platform, the OpenShift Sync plug-in monitors the API server of OpenShift Container Platform for updates to BuildConfigs and Builds that employ the Pipeline strategy and either creates Jenkins Pipeline projects (when a BuildConfig is created) or starts jobs in the resulting projects (when a Build is started).
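A minimal sketch of a BuildConfig that uses the Pipeline strategy, and would therefore be synced into Jenkins as a Pipeline project, might look like the following (the name and the inline Jenkinsfile content are illustrative assumptions):

```yaml
apiVersion: v1
kind: BuildConfig
metadata:
  name: sample-pipeline    # hypothetical name
spec:
  strategy:
    jenkinsPipelineStrategy:
      jenkinsfile: |-
        node('maven') {
          stage('build') {
            sh 'echo build steps go here'
          }
        }
    type: JenkinsPipeline
```

Starting a build from this BuildConfig (for example, with oc start-build) triggers the corresponding job in the synced Jenkins Pipeline project.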
The Kubernetes plug-in is used to run Jenkins slaves as pods on your cluster. The auto-configuration of the Kubernetes plug-in is described in Using the Jenkins Kubernetes Plug-in to Run Jobs.