Architecture brainstorming #439

Open · aureamunoz opened this issue Dec 5, 2023 · 1 comment
Labels: enhancement (New feature or request)


aureamunoz (Contributor) commented Dec 5, 2023

This ticket collects the ideas and questions we have exchanged during a few calls.
We have been discussing the new architecture: we would like to turn Primaza into an operator able to perform the binding. This raises a few questions:

- What is the target architecture?

Scenario A: Primaza creates the Claim + an operator watches Claims and performs the binding.
An application creates a Claim CR + an operator reacts to that Claim and directly creates the needed resources in the cluster? (In this scenario the operator wouldn't use the Primaza API but would create resources directly.) Primaza would keep the logic to manage the Claim lifecycle (see the illustrative sketch below).

How could Primaza discover available services?
One of the advantages of this scenario would be easier RBAC management.
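
To make Scenario A concrete, here is a minimal, purely illustrative sketch of what such a Claim CR could look like. The group/version, kind, and field names are assumptions for the sake of discussion, not the actual Primaza API:

```yaml
# Illustrative only: group/version, kind and fields are assumptions,
# not the real Primaza API.
apiVersion: primaza.io/v1alpha1
kind: Claim
metadata:
  name: orders-db
  namespace: my-app
spec:
  # What kind of service the application is asking for.
  serviceClassIdentity:
    type: postgresql
    version: "15"
  # Where the binding operator should expose the binding data
  # (in Scenario A it creates resources directly, e.g. a Secret
  # in the application namespace).
  target:
    secretName: orders-db-binding
```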

Scenario B: Operator calls Primaza APIs.

aureamunoz (Contributor, Author) commented:

More thoughts:

TL;DR => maybe what I'm posting should go into a separate ticket ...

- Primaza should only take care of searching, (optionally) provisioning, and returning the URL to access a service together with the credentials.
- Primaza should handle such a requirement using a Claim CR.
- The URL and credentials should be returned (as part of version 1 of Primaza) in the Claim CR status (see the sketch below).
- Note: it is up to the issuer of the claim (user or system) to do what is needed with the returned information. So Primaza should not be the system/actor updating a Deployment or Knative Service etc.; otherwise we would need to know, as part of the claim, which resource should be updated, and as a consequence we would suffer from the same issue as SBO ;-)
- Question: how then should we manage the claim + response? It depends on the loop model. Here is a proposal.
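
Before getting to the proposal, here is an illustration of the "URL and credentials in the Claim status" point above: a resolved Claim could end up looking like this (the status fields are assumptions for the discussion, not the actual API):

```yaml
# Illustrative only: the status fields are assumptions, not the real Primaza API.
apiVersion: primaza.io/v1alpha1
kind: Claim
metadata:
  name: orders-db
  namespace: my-app
spec:
  serviceClassIdentity:
    type: postgresql
status:
  state: Resolved
  # URL to reach the claimed service.
  url: postgresql://orders-db.databases.svc.cluster.local:5432
  # Credentials are referenced as a Secret rather than inlined in the status.
  credentialsSecretRef:
    name: orders-db-credentials
```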

In short, here is what I think we should do around QShift to support the inner and outer loop.
Until now we have focused on the inner loop, but we could do more, as detailed hereafter.

1) Inner loop

A developer using the Quarkus CLI (or extension) can provision a KubeVirt VM running podman on OCP (see the illustrative VM sketch below).
That helps them compile locally and use the podman daemon running within the VM to build their image, and also to test their component if they use Testcontainers.
Can we offer more? For sure ;-)
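
As a rough sketch of the provisioning step, a KubeVirt VirtualMachine for such a "Quarkus Dev VM" could start from something like the following; the Fedora containerdisk image is a placeholder, and the cloud-init that installs and exposes podman is omitted:

```yaml
# Rough sketch only: the image is a placeholder and the cloud-init that
# installs and exposes podman is omitted.
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: quarkus-dev-vm
spec:
  running: true
  template:
    spec:
      domain:
        cpu:
          cores: 4
        resources:
          requests:
            memory: 8Gi
        devices:
          disks:
            - name: containerdisk
              disk:
                bus: virtio
      volumes:
        - name: containerdisk
          containerDisk:
            image: quay.io/containerdisks/fedora:latest  # placeholder image
```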

As we own/control what we package within the Fedora VM (what I call the Quarkus Dev VM), we could:

- Install Maven + JDK + tools certified by Red Hat
- Run (when needed) the Quarkus remote dev server for debugging or live coding (=> websocket)
- Configure the Quarkus Kubernetes client with a token or CA/key (= kubeconfig) to talk to the cluster
- Support some additional scenarios:
  - HTTP/TLS => either generate a self-signed certificate + keys ourselves (e.g. with openssl), or create a Certificate/Issuer to ask cert-manager to generate the secret, and mount it within the Quarkus container running inside the VM (see the sketch below)
  - Bind the Quarkus application to a backend service (= Primaza + Vault)
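
For the cert-manager path, a minimal sketch of the Certificate/Issuer pair could look like this (names and namespace are made up for illustration; the resulting secret is what would be mounted into the Quarkus container):

```yaml
# Self-signed Issuer plus a Certificate; cert-manager writes the resulting
# TLS key pair into the secret named below. Names/namespace are illustrative.
apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
  name: selfsigned-issuer
  namespace: quarkus-dev
spec:
  selfSigned: {}
---
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: quarkus-dev-tls
  namespace: quarkus-dev
spec:
  secretName: quarkus-dev-tls          # secret to mount into the Quarkus container
  dnsNames:
    - quarkus-dev.quarkus-dev.svc.cluster.local
  issuerRef:
    name: selfsigned-issuer
    kind: Issuer
```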
2) Outer loop

To really support the development of Quarkus/runtime applications on OCP, using GitOps (= ArgoCD) alone will not be enough, as some resources have to be generated, enhanced, etc. This is why we need two specific pipelines, which can be triggered in different modes but are able to do the following:

A) From source -> build -> scan -> SBOM -> image pushed (aka what RHTAP is doing with its Tekton pipeline) = "Build and push pipeline"

B) From an Application + Component CR definition => the Application runtime controller will, according to the CR definition (runtime of type java + version + quarkus + type: knative, standard, istio + service (=> route or ingress) + capabilities: backend, TLS, etc.), select the corresponding Tekton Pipeline and set its params.
The Tekton pipeline will then execute the following steps, which should be tailored to the runtime, etc. (see the sketch after this list):

- Generate the resources: Deployment or Knative Service or ...
- Mount volumes for Secrets + ConfigMaps
- Request a Certificate + keys (optional)
- Claim access to a service and get the credentials (optional) => this is what the Primaza POC does
- Enhance the generated resources
- Package them: Helm, Kustomize or static files (depending on what the customer prefers to use)
- Generate the Argo CD Application CR
- Push to Git

= "Packaging pipeline"
C) From package -> deploy on cluster A, B, C (= GitOps deployment using ArgoCD)
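
And for step C, a minimal sketch of the kind of Argo CD Application the packaging pipeline could generate and push to Git (repository URL, path and destination are placeholders):

```yaml
# Illustrative Argo CD Application; repoURL, path and destination are placeholders.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: my-quarkus-app
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/example/gitops-repo.git
    targetRevision: main
    path: apps/my-quarkus-app
  destination:
    server: https://kubernetes.default.svc
    namespace: my-quarkus-app
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
```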

aureamunoz added the enhancement (New feature or request) label Dec 5, 2023