Welcome to the official Red Hat OpenShift Service on AWS documentation, where you can find information to help you learn about Red Hat OpenShift Service on AWS and start exploring its features.

To navigate the Red Hat OpenShift Service on AWS (ROSA) documentation, use the left navigation bar.

For documentation that is not ROSA-specific, see the OpenShift Container Platform documentation.

Developer activities

Ultimately, Red Hat OpenShift Service on AWS is a platform for developing and deploying containerized applications. As an application developer, the Red Hat OpenShift Service on AWS and OpenShift Container Platform documentation helps you:

  • Understand Red Hat OpenShift Service on AWS development: Learn the different types of containerized applications, from simple containers to advanced Kubernetes deployments and Operators.

  • Work with projects: Create projects from the web console or CLI to organize and share the software you develop.

  • Work with applications: Use the Developer perspective in the Red Hat OpenShift Service on AWS web console to easily create and deploy applications. Use the Topology view to visually interact with your applications, monitor status, connect and group components, and modify your code base.

  • Use the developer CLI tool (odo): The odo CLI tool lets developers easily create single-component or multi-component applications and automates deployment, build, and service route configurations. It abstracts complex Kubernetes and Red Hat OpenShift Service on AWS concepts so that developers can focus on developing their applications.

  • Create CI/CD Pipelines: Pipelines are serverless, cloud-native, continuous integration and continuous deployment systems that run in isolated containers. They use standard Tekton custom resources to automate deployments and are designed for decentralized teams that work on microservices-based architectures.

  • Understand Operators: Operators are the preferred method for creating on-cluster applications for Red Hat OpenShift Service on AWS. Learn about the Operator Framework and how to deploy applications into your projects by using installed Operators.

  • Understand image builds: Choose from different build strategies (Docker, S2I, custom, and pipeline) that can include different kinds of source materials (from places like Git repositories, local binary inputs, and external artifacts). Then, follow examples of build types from basic builds to advanced builds.

  • Create container images: A container image is the most basic building block in Red Hat OpenShift Service on AWS (and Kubernetes) applications. Defining image streams lets you gather multiple versions of an image in one place as you continue its development. Source-to-Image (S2I) containers let you insert your source code into a base container that is set up to run code of a particular type, such as Ruby, Node.js, or Python.

  • Create deployments: Use Deployment and DeploymentConfig objects to exert fine-grained management over applications. Use the Workloads page or the oc CLI to manage deployments. Learn about rolling, recreate, and custom deployment strategies.

  • Create templates: Use existing templates or create your own templates that describe how an application is built or deployed. A template can combine images with descriptions, parameters, replicas, exposed ports, and other content that defines how an application can be run or built.
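To make the template concept above concrete, the following is a minimal sketch of a Template object that parameterizes a Deployment and a Service. The template name, the APP_NAME parameter, and the image reference are all hypothetical; a real template would reference your own images and expose the parameters your application needs.

```yaml
# Minimal OpenShift Template sketch (hypothetical names and image).
# Parameters are substituted when the template is processed, for example:
#   oc process -f hello-template.yaml -p APP_NAME=hello | oc apply -f -
apiVersion: template.openshift.io/v1
kind: Template
metadata:
  name: hello-app-template
parameters:
- name: APP_NAME
  description: Name applied to all objects created from this template
  value: hello
objects:
- apiVersion: apps/v1
  kind: Deployment
  metadata:
    name: ${APP_NAME}
  spec:
    replicas: 2
    selector:
      matchLabels:
        app: ${APP_NAME}
    template:
      metadata:
        labels:
          app: ${APP_NAME}
      spec:
        containers:
        - name: ${APP_NAME}
          # Hypothetical image reference; replace with your own.
          image: registry.example.com/team/hello:latest
          ports:
          - containerPort: 8080
- apiVersion: v1
  kind: Service
  metadata:
    name: ${APP_NAME}
  spec:
    selector:
      app: ${APP_NAME}
    ports:
    - port: 8080
      targetPort: 8080
```

Processing the template with different parameter values produces distinct, ready-to-apply sets of objects, which is what lets one template describe how many copies of an application are built or deployed.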

Cluster administrator activities

While cluster maintenance and host configuration are performed by the Red Hat Site Reliability Engineering (SRE) team, other ongoing tasks on your Red Hat OpenShift Service on AWS cluster can be performed by cluster administrators. As a Red Hat OpenShift Service on AWS cluster administrator, you can use the documentation to:

  • Manage Dedicated Administrators: Grant dedicated-admin permissions to users, or revoke them.

  • Work with Logging: Learn about Red Hat OpenShift Service on AWS Logging and configure the logging add-on services.

  • Monitor clusters: Learn to use the web console to access monitoring dashboards.

  • Manage nodes: Learn to manage nodes, including configuring machine pools and autoscaling.
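As an illustration of the kind of configuration that cluster monitoring involves, the following is a minimal sketch of the ConfigMap that OpenShift uses to tune monitoring for user-defined projects. This assumes user workload monitoring is enabled on the cluster, and the retention value shown is only an example:

```yaml
# Sketch of a user workload monitoring configuration.
# The ConfigMap name and namespace are fixed by the monitoring stack;
# the retention value below is an illustrative example.
apiVersion: v1
kind: ConfigMap
metadata:
  name: user-workload-monitoring-config
  namespace: openshift-user-workload-monitoring
data:
  config.yaml: |
    prometheus:
      retention: 24h
```

Applying a ConfigMap like this lets administrators adjust how long metrics from user-defined projects are retained without touching the core platform monitoring that the SRE team manages.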