The installation process for Red Hat OpenShift Lightspeed consists of two main tasks: configuring the Large Language Model (LLM) provider and installing the Lightspeed Operator.
You should configure the Large Language Model (LLM) provider you will use prior to installing the OpenShift Lightspeed Operator.
To configure OpenAI as the LLM provider for OpenShift Lightspeed, you must have either your OpenAI API key or your OpenAI project name available during the configuration process.
The OpenAI service supports projects and service accounts. Consider using a service account in a dedicated project so that you can track OpenShift Lightspeed usage precisely.
For more information, see the official OpenAI product documentation.
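Operators on OpenShift typically read provider credentials from a secret in the cluster rather than from plain configuration. As an illustrative sketch only (the secret name, namespace, and data key below are assumptions, not canonical values from this document), an OpenAI API key could be stored like this:

```yaml
# Hypothetical secret holding an OpenAI API key for OpenShift Lightspeed.
# The name, namespace, and key are illustrative assumptions; align them
# with whatever your Lightspeed configuration actually references.
apiVersion: v1
kind: Secret
metadata:
  name: openai-api-keys            # assumed name
  namespace: openshift-lightspeed  # assumed namespace
type: Opaque
stringData:
  apitoken: <your-OpenAI-API-key>  # substitute your real key; never commit it
```

Storing the key in a secret, rather than inline, keeps the credential out of version-controlled configuration and lets you rotate it independently.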
You need a Microsoft Azure OpenAI service instance. There must be at least one model deployment in Microsoft Azure OpenAI Studio for that instance.
For more information, see the official Microsoft Azure OpenAI product documentation.
You need an IBM Cloud project with access to IBM WatsonX. You will also need your IBM WatsonX API key.
For more information, see the official IBM WatsonX product documentation.
You have deployed OpenShift Container Platform 4.15 or later. The cluster must be connected to the Internet and have telemetry enabled.
You are logged in to the OpenShift Container Platform web console as a user with the cluster-admin role.
You have access to the OpenShift CLI (oc).
You have successfully configured your Large Language Model (LLM) provider so that OpenShift Lightspeed can communicate with it.
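The prerequisites above can be spot-checked from the CLI before you begin. This is a convenience sketch, not a required step, and it assumes you are already logged in with `oc`:

```shell
# Confirm you are logged in and identify the current user
oc whoami

# "yes" indicates cluster-admin-level access
oc auth can-i '*' '*' --all-namespaces

# Confirm the cluster version is 4.15 or later
oc get clusterversion
```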
In the OpenShift Container Platform web console, navigate to the Operators → OperatorHub page.
Search for Lightspeed.
Locate the Lightspeed Operator, and click to select it.
If a prompt about community operators appears, click Continue.
Click Install.
Accept the default installation settings and click Install to continue.
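The same installation can also be expressed declaratively through Operator Lifecycle Manager objects. The manifest below is a sketch under stated assumptions: the channel, catalog source, and namespace values are illustrative and should be verified against your cluster's catalog (for example with `oc get packagemanifests`):

```yaml
# Hypothetical declarative install of the Lightspeed Operator via OLM.
# Channel, source, and namespace values are assumptions to verify.
apiVersion: v1
kind: Namespace
metadata:
  name: openshift-lightspeed
---
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: openshift-lightspeed
  namespace: openshift-lightspeed
---
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: lightspeed-operator
  namespace: openshift-lightspeed
spec:
  channel: alpha                # assumed channel name
  name: lightspeed-operator
  source: redhat-operators      # assumed catalog source
  sourceNamespace: openshift-marketplace
```

Applying these objects with `oc apply -f` produces the same result as the web-console flow, which is useful for scripted or GitOps-managed clusters.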
Click Operators → Installed Operators to verify that the Lightspeed Operator is installed. Succeeded should appear in the Status column.
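The same verification can be done from the CLI. The namespace below is an assumption; adjust it to wherever the Operator was installed:

```shell
# The ClusterServiceVersion PHASE column should read "Succeeded"
oc get csv -n openshift-lightspeed

# The operator pod should be Running
oc get pods -n openshift-lightspeed
```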