EFM Recovery Service Event and Transaction #440
Copying over the main comment from the previous review: #420 (comment)
The second conditional case of the recover_epoch transaction (when unsafeAllowOverwrite is false) doesn't use the recoveryEpochCounter value at all. But if we go down that code path and FlowEpoch.currentEpochCounter != recoveryEpochCounter, we know the recovery process will fail.
So I think we should use recoveryEpochCounter in the second code path as well. We can explicitly check that FlowEpoch.currentEpochCounter == recoveryEpochCounter, for example as a precondition, and panic if this doesn't hold.
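For illustration, here is a minimal Cadence sketch of the suggested check, assuming a transaction that receives recoveryEpochCounter as an argument; the signature and the surrounding recovery logic of the real recover_epoch transaction are simplified away here:

```cadence
import FlowEpoch from 0xEPOCHADDRESS  // address placeholder

transaction(recoveryEpochCounter: UInt64 /* , ...remaining recovery parameters... */) {
    prepare(signer: AuthAccount) {
        // Fail fast if the supplied counter disagrees with the contract's view of
        // the current epoch; otherwise the non-overwrite code path is known to
        // fail later in the recovery process anyway.
        assert(
            FlowEpoch.currentEpochCounter == recoveryEpochCounter,
            message: "recoveryEpochCounter must equal FlowEpoch.currentEpochCounter"
        )
        // ... existing recovery logic for the unsafeAllowOverwrite == false case ...
    }
}
```

Failing in the precondition/prepare phase like this surfaces the counter mismatch immediately to the operator, instead of letting the transaction succeed and the recovery silently fail downstream.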
This is coming along nicely!
My main suggestion in this review is to expand the test coverage to cover more edge cases (suggestions enumerated here). The existing tests are quite verbose, so I think it would be worthwhile to
invest time in factoring out some of the common test logic when adding test cases. After we get Josh's input on the implementation changes, I'd be OK with implementing additional test coverage in a separate PR. If you'd like to do that, let me know.
Thank you for the nice work. I appreciate the multitude of smaller refactorings, where you have moved auxiliary code into little service methods -- that certainly improves the readability of the code.
I have added various suggestions for extending the documentation. However, given my very limited knowledge of Cadence and the epoch smart contracts, I don't feel sufficiently confident in my ability to spot potential problems/errors to approve this PR.
A significant challenge [update: it's not a big risk; see Jordan's comment below] that I noticed:
- the FlowEpoch smart contract offers two entry points for recovery (see the hedged sketch after this list for how their counter constraints play out):
  - recoverNewEpoch, which requires that the counter for the recovery epoch matches the smart contract's current epoch
  - recoverCurrentEpoch, which enforces that the counter for the recovery epoch is one bigger than the smart contract's current epoch
- I think this places strong limitations on the scenarios we can successfully recover from (specifically the time frame in which a recovery must be successful). Let's unravel this a bit:
- Initially we assume that the Protocol State and the Epoch Smart Contract are on the happy path: both counters are (largely) in sync.
- Then there is a problem and the Protocol State goes into EFM. That means for the running network, where the protocol state is the source of truth that determines operation, the network remains on epoch counter N.
- However, while the protocol state stays on epoch N (extending it until successful recovery), the smart contract can continue to progress through its speculative epochs.
- I think it is very likely that failures will occur relatively close to the desired epoch switchover, because the Epoch Setup phase only ends a few hours before the target transition and that is where problems typically occur. Let's say it's 3 hours before the target transition and the protocol state goes into EFM and stays at epoch N.
- The smart contract continues its work and enters epoch N+1. Everyone is stressed because the network is in EFM, some people might be OOO, the engineers are doing the best they can. The engineers trying to recover the epoch lifecycle know that they have to specify the next epoch: they query the smart contract, which tells them the system is currently in epoch N+1. So the engineers specify epoch N+2 and call recoverNewEpoch. The smart contract is happy, emits a recovery event for epoch N+2 and enters epoch N+2 ... but the protocol state rejects the recovery because it is still in epoch N and expects epoch N+1 to be specified. And then we are screwed: the protocol state must receive a recovery for epoch N+1, but the smart contract is already at N+2 and only accepts recovery data for epochs with counter ≥ N+2! ☠️
- Different scenario: due to typos, stress, and unfamiliarity with the recovery process, the first two calls to recoverNewEpoch emit an event (each increasing the counter) but are both rejected. We end up in a similar situation: the smart contract's epoch counter has already progressed beyond the value expected by the dynamic protocol state.
- Other scenario: too many partner nodes are offline and we would like to get them back online before attempting an epoch recovery ... reaching out and helping the partners might take some time. The network is running fine (just staying in its current EFM epoch). We decide to leave the system in EFM for more than a week (presumably nothing bad will happen), but forget to call epochReset ... so after a week the smart contract is now in epoch N+2 while the Protocol State is still in N.
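To make the counter arithmetic from the bullets above concrete, here is a hedged sketch of a hypothetical helper inside FlowEpoch; it only restates the constraints described in this comment and is not the actual contract code. Whichever entry point is called, the supplied counter is pinned to a fixed offset from the contract's own counter:

```cadence
// Hypothetical restatement of the constraints described above - not actual FlowEpoch code.
// Inside the contract, both recovery entry points effectively reduce to a check like this:
access(account) fun checkRecoveryCounter(recoveryEpochCounter: UInt64, startNewEpoch: Bool) {
    // one entry point requires equality with the contract's current counter,
    // the other requires the counter to be exactly one bigger
    let requiredCounter = startNewEpoch ? self.currentEpochCounter + 1 : self.currentEpochCounter
    assert(
        recoveryEpochCounter == requiredCounter,
        message: "recovery epoch counter does not match the entry point's expectation"
    )
    // If the contract has already speculatively advanced to N+2 while the Protocol State
    // still expects a recovery for N+1, neither value of `startNewEpoch` can satisfy both
    // sides - which is exactly the limitation discussed in this comment.
}
```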
Essentially our current smart contract implementation makes the very limiting assumption that the Protocol State's Epoch counter can be at most one behind the smart contract. Otherwise, we have no means for recovery.
Let's keep in mind that we are implementing a disaster prevention mechanism here: it's very rare, so no one really has much experience with it, occurrences of disasters cannot be planned for, people are stressed and engineers with the deep background might be unavailable, and the first EFM might happen in a year, when we have already forgotten some of the critical but subtle limitations.
Hence, I am strongly of the opinion that this process should be as fault-proof as possible:
- multiple/many failed recovery attempts should be possible
- the system should provide ample time for successful recovery (certainly more than a week)
- it should be nearly impossible for failed recovery attempts to break anything (no matter how broken the inputs are)
I think we are pretty close but have two main hurdles:
- We should prepare for the scenario where the protocol state is in EFM epoch N but the smart contract believes the system is in epoch N+k, for any integer k. That would be something to solve as part of this PR (or a subsequent smart contract PR).
- Ideally, the fallback state machine guarantees that a successful RecoverEpoch event is always a valid epoch configuration. The recovery parameters might be manually set, so the risk of human error should be mitigated. What is missing is checking (a rough sketch of the second check follows after this list):
  - that the cluster QCs are valid QCs for each collector cluster
  - that the DKG committee has sufficient intersection with the consensus committee to allow for live consensus
This is out of scope of this PR.
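For the second missing check, a rough sketch of what such a validation could look like; the function and parameter names below are hypothetical and purely illustrative, not part of FlowEpoch:

```cadence
// Hypothetical helper - illustrates checking that the DKG committee overlaps the
// consensus committee enough for live consensus. Names and threshold are assumptions.
access(all) fun hasSufficientDKGIntersection(
    dkgParticipantIDs: [String],   // node IDs holding DKG key shares in the recovery epoch
    consensusNodeIDs: [String],    // node IDs of the recovery epoch's consensus committee
    requiredOverlap: Int           // minimum number of consensus nodes with DKG key shares
): Bool {
    var overlap = 0
    for id in consensusNodeIDs {
        if dkgParticipantIDs.contains(id) {
            overlap = overlap + 1
        }
    }
    return overlap >= requiredOverlap
}
```

The analogous check for cluster QC validity would additionally require signature verification against each cluster's collector keys, which is considerably more involved.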
As usual, we should be weighing how much engineering time this would actually take to implement. Nevertheless, it deeply worries me that we have a bunch of subtle footguns in our implementation, in that we might irreparably break mainnet if we violate one of the several subtle constraints (either by human error, or even worse by not acting for only a week).
Also cc @durkmurder @jordanschalm for visibility, comments and thoughts.
Responding to Alex's comment here 👇
You outlined a few scenarios in your comment, but each of them relies on the smart contract continuing to transition through speculative epochs without the Protocol State following suit. In practice the smart contract transition process provides a strong guarantee that ...
Smart Contract Transition Logic
Outstanding Problems
- replace numViewsInStakingAuction with stakingEndView - startView
- don't accept numViewsInDKGPhase as a parameter; read it from configurable epoch metadata
Really nice expansion of the test coverage -- thank you.
Summary of feedback:
- If I'm understanding correctly, we don't have a test case that executes recovery during the staking phase -- I think we should add this before merging
- I added some questions about the last test case (we're doing two recoveries back-to-back and I'm not sure why)
Looks pretty good! Just have some questions and small comments
This PR updates the FlowEpoch smart contract to support recovering the network while in Epoch Fallback Mode. It adds a new service event, EpochRecover, which contains the metadata for the recovery epoch. This metadata is generated out of band using the bootstrap utility command util epoch efm-recover-tx-args (onflow/flow-go#5576) and submitted to the contract with the recovery_epoch.cdc transaction. The FlowEpoch contract will end the current epoch, start the recovery epoch, and store the metadata for the recovery epoch in storage. This metadata will then be emitted to the network during the next heartbeat interval.
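As a rough orientation, the service event can be pictured along the following lines; the field list below is an indicative sketch only, not the actual definition in FlowEpoch, which carries the full recovery-epoch metadata:

```cadence
// Indicative sketch of the event's shape (declared inside the FlowEpoch contract in
// reality); the exact field names and types here are assumptions, not the real event.
access(all) event EpochRecover(
    counter: UInt64,        // counter of the recovery epoch
    startView: UInt64,      // first view of the recovery epoch
    finalView: UInt64,      // last view of the recovery epoch
    randomSource: String,   // entropy carried into the recovery epoch
    dkgPubKeys: [String],   // DKG public keys reused for the recovery epoch
    nodeIDs: [String]       // node IDs participating in the recovery epoch
)
```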
Reopening original PR: #420