tentative fork repro test #534
base: main
Conversation
await alix.conversations.sync()
await syncClientAndGroup(alix)

// NB => if we don't use Promise.all but a loop, we don't get a fork
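For context, a minimal sketch of the two removal strategies being contrasted in the NB above. The `group` and `removedInboxIds` names are stand-ins for the test's own fixtures, and `removeMembersByInboxId` is assumed from the XMTP group API rather than copied from this diff:

```ts
// Hypothetical sketch contrasting the two removal strategies under test.
// "group" and "removedInboxIds" stand in for the test's own group and member list;
// removeMembersByInboxId is an assumed method name on the group object.
async function removeInParallel(group: any, removedInboxIds: string[]) {
  // Parallel removals via Promise.all: this is the variant that appears to fork.
  await Promise.all(
    removedInboxIds.map((inboxId) => group.removeMembersByInboxId([inboxId]))
  )
}

async function removeInLoop(group: any, removedInboxIds: string[]) {
  // Sequential removals: awaiting each commit before issuing the next one
  // is the variant where no fork was observed.
  for (const inboxId of removedInboxIds) {
    await group.removeMembersByInboxId([inboxId])
  }
}
```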
So if we use `Promise.all` we typically get a fork? The double negatives are confusing me 😆
So probably a race condition, but the question is where.
What we need to confirm from additional tests is:
- If we have 2 clients of the same inboxId removing members at the same time, does it fork?
- If we have 2 clients with 2 different inboxIds, does it fork?
If no to both, then it's probably something to shore up on the SDK side (a rough sketch of the first scenario follows below).
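One way the first scenario could look in a test, hedged because the client and group setup helpers here are assumptions rather than code from this PR (two clients built for the same inboxId, each issuing a removal commit concurrently):

```ts
// Hypothetical sketch of the "same inboxId, two clients" case.
// alix1 and alix2 are assumed to be two Client instances built from the same wallet;
// findGroup and removeMembersByInboxId are assumed API names, and syncClientAndGroup
// is passed in to mirror the test's own helper.
async function sameInboxIdParallelRemove(
  alix1: any,
  alix2: any,
  groupId: string,
  bo: any,
  caro: any,
  syncClientAndGroup: (client: any) => Promise<void>
) {
  const group1 = await alix1.conversations.findGroup(groupId)
  const group2 = await alix2.conversations.findGroup(groupId)

  // Both installations of the same inboxId issue a removal commit at the same time.
  await Promise.all([
    group1.removeMembersByInboxId([bo.inboxId]),
    group2.removeMembersByInboxId([caro.inboxId]),
  ])

  // Sync both clients, then check whether messages still go through between members.
  await syncClientAndGroup(alix1)
  await syncClientAndGroup(alix2)
}
```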
Looks like when running removals in parallel, the removal-initiating client is processing its own removal commit as an external envelope, which is leading to an openmls error.
Surrounding libxmtp snippet from the failed run:
Happy-path libxmtp logs when remove-in-parallel is set to false:
Tentative fork repro test
When adding users to a group and then removing multiple users at the same time using `Promise.all`, it sometimes appears that the two initial members get forked. I'm not 100% sure this is exactly a fork, but it looks like one because, once it happens, I can retry around 5 times, syncing the client and the group, and no messages go through.
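A condensed, hypothetical shape of the repro described above. Group creation and member names (`bo`, `caro`, `davon`) and method names like `newGroup` and `removeMembersByInboxId` are assumptions about the SDK API, and `syncClientAndGroup` mirrors the test's own helper:

```ts
// Hypothetical condensed repro, assuming the XMTP group conversation API shape.
async function tentativeForkRepro(
  alix: any,
  bo: any,
  caro: any,
  davon: any,
  syncClientAndGroup: (client: any) => Promise<void>
) {
  // Create a group, then remove several members at the same time with Promise.all.
  const group = await alix.conversations.newGroup([
    bo.address,
    caro.address,
    davon.address,
  ])

  await Promise.all([
    group.removeMembersByInboxId([caro.inboxId]),
    group.removeMembersByInboxId([davon.inboxId]),
  ])

  // After the parallel removals, repeated syncs of the client and the group
  // do not seem to recover: messages between alix and bo stop going through.
  for (let i = 0; i < 5; i++) {
    await syncClientAndGroup(alix)
    await syncClientAndGroup(bo)
  }
  await group.send('are we forked?')
}
```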
Logs of a non-forked run:
Logs of a forked run: