The local docker-registry deployment takes on additional load. By default, it caches content from registry.redhat.io: the images that STI builds pull from registry.redhat.io are stored in the local registry, and subsequent pulls for them are served by the local docker-registry. As a result, extreme numbers of concurrent builds can cause those pulls to time out, and the build can then fail. Check for timeouts in the builder pod's logs. To alleviate the issue, scale the docker-registry deployment to more than one replica.
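For example, assuming the registry runs as the default docker-registry deployment configuration in the default project (adjust the object name, project, and replica count for your cluster):
$ oc scale dc/docker-registry --replicas=3 -n default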
When using a scaled registry with a shared NFS volume, you may see one of the following errors during the push of an image:
digest invalid: provided digest did not match uploaded content
blob upload unknown
blob upload invalid
These errors are returned by an internal registry service when Docker attempts to push the image. The cause is rooted in the synchronization of file attributes across the nodes behind the scaled registry. Factors such as NFS client-side caching, network latency, and layer size can all contribute to errors when pushing an image using the default round-robin load balancing configuration.
You can perform the following steps to minimize the probability of such a failure:
Ensure that the sessionAffinity of your docker-registry service is set to ClientIP:
$ oc get svc/docker-registry --template='{{.spec.sessionAffinity}}'
This should return ClientIP, which is the default in recent OpenShift Container Platform versions. If not, change it:
$ oc patch svc/docker-registry -p '{"spec":{"sessionAffinity": "ClientIP"}}'
Ensure that the NFS export line of your registry volume on your NFS server has the no_wdelay option listed. The no_wdelay option prevents the server from delaying writes, which greatly improves read-after-write consistency, a requirement of the registry.
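For illustration, an /etc/exports entry might look like the following; the export path and the other options shown are placeholder assumptions, with no_wdelay being the relevant addition:
/exports/registry *(rw,sync,no_wdelay,root_squash)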
Testing shows issues with using the RHEL NFS server as a storage backend for the container image registry. This includes the OpenShift Container Registry and Quay. Therefore, using the RHEL NFS server to back PVs used by core services is not recommended. Other NFS implementations on the market might not have these issues. Contact the individual NFS implementation vendor for more information on any testing that might have been completed against these OpenShift core components.
A pull fails with a "not found" error when the pulled image is pushed to an image stream different from the one it is being pulled from. This is caused by re-tagging a built image into an arbitrary image stream:
$ oc tag srcimagestream:latest anyproject/pullimagestream:latest
And subsequently pulling from it, using an image reference such as:
internal.registry.url:5000/anyproject/pullimagestream:latest
During a manual Docker pull, this produces a similar error:
Error: image anyproject/pullimagestream:latest not found
To prevent this, avoid tagging internally managed images entirely, or re-push the built image to the desired namespace manually, as in the sketch below.
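A sketch of the manual re-push using plain Docker commands; the registry host matches the example above, and srcproject is a hypothetical project holding the originally built image:
$ docker pull internal.registry.url:5000/srcproject/srcimagestream:latest
$ docker tag internal.registry.url:5000/srcproject/srcimagestream:latest internal.registry.url:5000/anyproject/pullimagestream:latest
$ docker push internal.registry.url:5000/anyproject/pullimagestream:latest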
Problems have been reported when the registry runs on an S3 storage back-end. Pushing to a container image registry occasionally fails with the following error:
Received unexpected HTTP status: 500 Internal Server Error
To debug this, you need to view the registry logs.
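One way to retrieve them, assuming the registry runs as the docker-registry deployment configuration in the default project (adjust for your cluster):
$ oc logs dc/docker-registry -n default
In the output, look for error messages similar to the following occurring at the time of the failed push: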
time="2016-03-30T15:01:21.22287816-04:00" level=error msg="unknown error completing upload: driver.Error{DriverName:\"s3\", Enclosed:(*url.Error)(0xc20901cea0)}" http.request.method=PUT ... time="2016-03-30T15:01:21.493067808-04:00" level=error msg="response completed with error" err.code=UNKNOWN err.detail="s3: Put https://s3.amazonaws.com/oso-tsi-docker/registry/docker/registry/v2/blobs/sha256/ab/abe5af443833d60cf672e2ac57589410dddec060ed725d3e676f1865af63d2e2/data: EOF" err.message="unknown error" http.request.method=PUT ... time="2016-04-02T07:01:46.056520049-04:00" level=error msg="error putting into main store: s3: The request signature we calculated does not match the signature you provided. Check your key and signing method." http.request.method=PUT atest
If you see such errors, contact your Amazon S3 support. There may be a problem in your region or with your particular bucket.
If you encounter the following error when pruning images:
BLOB sha256:49638d540b2b62f3b01c388e9d8134c55493b1fa659ed84e97cb59b87a6b8e6c error deleting blob
And your registry log contains the following information:
error deleting blob \"sha256:49638d540b2b62f3b01c388e9d8134c55493b1fa659ed84e97cb59b87a6b8e6c\": operation unsupported
This means that your custom configuration file lacks a mandatory entry in the storage section, namely storage:delete:enabled set to true. Add it, re-deploy the registry, and repeat your image pruning operation.
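A minimal sketch of the relevant fragment of the registry configuration file; the rest of the storage section (the storage driver and its parameters) is omitted here and depends on your deployment:
storage:
  delete:
    enabled: true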