
Inconsistent data about inbound connectors because of multiple replicas #3019

Open
markfarkas-camunda opened this issue Aug 2, 2024 · 4 comments
Assignees
Labels
kind:bug Something isn't working

Comments

@markfarkas-camunda
Contributor

Describe the Bug

Since we run 2 replicas, fetching data for inbound connectors has become inconsistent. Sometimes we fetch data from one replica, sometimes from the other, causing strange changes in the Webhook/Subscription tab in web-modeler.

Live example: https://modeler.ultrawombat.com/diagrams/b972d768-7208-4b48-a9e1-69bc819ba99a--kafka-incorrect-activation-condition?v=760,386,1

Modeler._.Kafka.incorrect.activation.condition.-.Google.Chrome.2024-08-02.11-37-58.mp4

Steps to Reproduce

  1. Create any inbound connector
  2. Trigger it a few times so that some activity log entries are produced
  3. Open the Subscription/Webhook tab in web-modeler and observe that the data is sometimes duplicated

Expected Behavior

We should not see duplicated data.

@markfarkas-camunda markfarkas-camunda added the kind:bug Something isn't working label Aug 2, 2024
@sbuettner
Contributor

@crobbins215 to sync with @theburi regarding this and how it fits into https://github.com/camunda/product-hub/issues/1063

@crobbins215 crobbins215 removed their assignment Aug 23, 2024
@markfarkas-camunda
Contributor Author

markfarkas-camunda commented Sep 30, 2024

While investigating, I identified another issue/improvement opportunity: we do not distinguish between logs coming from cluster1 and cluster2. We always merge all incoming logs and show them together. It would be nice to show the logs separated by cluster. Separation by node, on the other hand, might not make much sense if we can somehow identify individual log entries (e.g. based on timestamp) and only ever add new ones, never deleting any. This would of course also require a guarantee that we always have info from all nodes. Regardless, I've reached out to the console and SRE teams about this to get the whole picture.
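The merge-without-deletion idea above could be sketched roughly as follows. This is only an illustration, not Camunda code: the `LogEntry` type, its fields, and `merge_logs` are hypothetical names, and deduplicating on `(timestamp, cluster, message)` is an assumption about what uniquely identifies an entry.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class LogEntry:
    """Hypothetical connector activity log entry (illustrative only)."""
    timestamp: str   # e.g. an ISO-8601 timestamp of the event
    cluster: str     # cluster the entry originated from, e.g. "cluster1"
    message: str

def merge_logs(known: set[LogEntry], incoming: list[LogEntry]) -> set[LogEntry]:
    """Union the incoming entries into the known set.

    Entries already seen (same timestamp, cluster, and message) are
    dropped, and nothing is ever deleted, so it does not matter which
    replica a given fetch happened to hit.
    """
    return known | set(incoming)
```

With this approach, fetching overlapping data from two replicas of the same cluster yields one copy of each entry, while entries from different clusters remain distinct and can be displayed in separate groups.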

@markfarkas-camunda markfarkas-camunda self-assigned this Sep 30, 2024
@markfarkas-camunda
Contributor Author

This issue was also referred to here: https://github.com/camunda/product-hub/issues/2250#issuecomment-2134580579

3 participants