feat: Multi agent #5844
Conversation
```diff
+	// Check Pro/Enterprise subscription
+	var subscriptionChecker checktcl.SubscriptionChecker
+	if mode == common.ModeAgent {
+		subscriptionChecker, err = checktcl.NewSubscriptionChecker(ctx, proContext, grpcClient, grpcConn)
+		exitOnError("Failed creating subscription checker", err)
+	}
+
+	// Load environment/org details based on token grpc call
+	environment, err := controlplanetcl.GetEnvironment(ctx, proContext, grpcClient, grpcConn)
```
The executor needs to load this from the fetched environment, not from inlined env variables.
```diff
 func (ag *Agent) executeCommand(ctx context.Context, cmd *cloud.ExecuteRequest) *cloud.ExecuteResponse {
 	switch {
-	case cmd.Url == healthcheckCommand:
+	case cmd.Url == HealthcheckCommand || cmd.Command == string(cloud.HealthcheckCommand):
```
This is not needed anymore
```diff
@@ -0,0 +1,47 @@
+package handlers
```
Would it be better to have some errors wrapped, e.g. `errors.Wrap(err, errors.NotFound)`, and then a mapper which checks what kind of error it is and sets a valid response?
```diff
@@ -542,11 +532,55 @@ func (e *executor) Execute(ctx context.Context, workflow testworkflowsv1.TestWor
 		log.DefaultLogger.Errorw("failed to encode tags", "id", id, "error", err)
 	}
 
+	// Get (for centralized mode) TW execution or create it
+	if request.Id != "" {
```
We need to extract the concrete executions logic from here, to fully split the executor engine from the data implementation.
```diff
+	execution.Tags = tags
+
+	// Insert or save execution
+	if request.Id != "" {
```
Extract it outside? The executor could be in the form `func(ExecutionRequest, Workflow) Result`.
```go
// TODO - valid error handling

func NewExecuteTestWorkflowHandler(
```
This is for sure not needed:
- There is already an option to execute a Test Workflow via the generic gRPC command we have (which calls the API)
- For the future Execution Worker, the command will look different, as the Execution Worker needs to be stateless
I've planned here to fully decouple from the API server, as it's an additional unneeded thing. So it's quite needed to not spawn this APIServer.
This was an attempt to decouple it from the API server totally; it can be refactored/reordered if we decide about the API later.
> I've planned here to fully decouple from the API server, as it's an additional unneeded thing. So it's quite needed to not spawn this APIServer.

But when it is decoupled from the API Server, the signature will also differ, so (A) this function is for decoupling, yet (B) it needs to be deleted (and replaced with a new handler) after decoupling, as it will look different 🙂 So it's probably better not to pollute the new gRPC schema with obsolete functions.
Considering that in this PR the runner IDs are not dynamic (but need to be pre-created), either:

On the other hand, the best solution is to avoid runner IDs at all, and have runner tags instead (like K8S
But you need some kind of ID here (whatever we name it) to e.g. schedule against it. I'm not sure if we should really follow Kubernetes naming here at all; these points are valid for both IDs and tags, like in affinity. I agree about separate keys: if we want to decouple runners from environments, that will be the next thing to do for sure.
Pull request description
Checklist (choose what's happened)
Breaking changes
Changes
Fixes