Based on the OpenStack K8S operators from the "main" branch of the OpenStack Operator repo as of Dec 19th, 2023.
This is a collection of CR templates that represent a validated Red Hat OpenStack Services on OpenShift deployment that has the following characteristics:
- 3 master/worker combo-node OpenShift cluster
- 3-replica Galera database
- RabbitMQ
- OVN networking
- Network isolation over a single NIC
- 3 compute nodes
- CephHCI installed on compute nodes and used by various OSP services
- Cinder Volume using RBD for backend
- Cinder Backup using RBD for backend
- Glance using RBD for backend
- Nova using RBD for ephemeral storage
- Manila using CephFS for backend
These CRs are validated for the overall functionality of the OSP cloud deployed, but they nonetheless require customization for the particular environment in which they are utilized. In this sense they are templates meant to be consumed and tweaked to fit the specific constraints of the hardware available.
The CRs are applied against an OpenShift cluster in stages. That is, there is an ordering in which each grouping of CRs is fed to the cluster. It is not a case of simply taking all CRs from all stages and applying them all at once.
In stages 1 and 2, kustomize is used to generate the control plane CRs dynamically. The control-plane/nncp/values.yaml file(s) must be updated to fit your environment. kustomize version 5 or newer is required.
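Because the version 5 requirement is easy to miss, a small pre-flight check can help. The sketch below shows a plausible stage 1/2 workflow; the directory path comes from this document, but the version-string parsing assumes `kustomize version` prints something like "v5.3.0", which may differ for older releases.

```shell
#!/bin/sh
# Sketch only, not the repo's official tooling.

# Extract the major version from a string like "v5.3.0".
# Assumption: modern kustomize prints its version in this form.
kustomize_major() {
    echo "$1" | sed -n 's/^v\([0-9][0-9]*\).*/\1/p'
}

# Fail early if the installed kustomize is older than version 5.
check_kustomize() {
    major=$(kustomize_major "$(kustomize version)")
    [ "${major:-0}" -ge 5 ] || { echo "kustomize >= 5 required" >&2; return 1; }
}

# After editing control-plane/nncp/values.yaml for your environment:
#   check_kustomize && kustomize build control-plane/nncp | oc apply -f -
```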
In stages 3 and 4, kustomize is used to generate the dataplane CRs dynamically. The edpm-pre-ceph/values.yaml, values.yaml, and service-values.yaml files must be updated to fit your environment. kustomize version 5 or newer is required.
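To illustrate the kind of environment-specific data these values files carry, the fragment below sketches a typical kustomize values file for a dataplane node. Every key shown here is hypothetical; the real schema is defined by the files in the repo, and those are what must be edited.

```yaml
# Hypothetical shape only -- the actual keys live in the repo's
# edpm-pre-ceph/values.yaml, values.yaml, and service-values.yaml.
# This just shows the sort of per-environment data they hold.
data:
  nodes:
    edpm-compute-0:
      ansible_host: 192.168.122.100   # SSH-reachable address of the node
      ctlplane_ip: 192.168.122.100    # control plane network IP
  ssh_key_secret: dataplane-ansible-ssh-private-key  # pre-created Secret name
```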
Between stages 3 and 4, it is assumed that the user installs Ceph on the 3 OSP compute nodes. The OpenStack K8S CRDs do not provide a way to install Ceph through any combination of CRs, so this step must be performed outside of them.
All stages must be executed in the order listed below. Everything is required unless otherwise indicated.
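The staged flow above can be sketched as a simple ordered loop. The first and third stage directory names (control-plane/nncp, edpm-pre-ceph) come from this document; the other two are assumptions, and the manual Ceph install still has to happen between the third and fourth stages, so do not run this unattended.

```shell
#!/bin/sh
# Illustrative only: stage names partly assumed, and the manual Ceph
# install between stages 3 and 4 is not automated here.

# Build one stage's CRs with kustomize and apply them with oc.
apply_stage() {
    kustomize build "$1" | oc apply -f -
}

# Stages must run in this order; never apply everything at once.
STAGES="control-plane/nncp control-plane edpm-pre-ceph edpm"
for stage in $STAGES; do
    echo "applying stage: $stage"
    # apply_stage "$stage"   # uncomment once kustomize/oc are set up
done
```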