feat: add sequence number for replay protection #252
Conversation
Assumed this is still under work by shoaib
Force-pushed from 1778fbd to 1e6ffa7, then from 1e6ffa7 to 973974c.
It looks good, but I would like to test it before approving, to make sure we get the right count of pending_sequenced_requests so that the sequence number is always up to date, even when many requests are in the queue.
crates/enclave/core/src/server.rs (outdated)

```rust
@@ -159,6 +177,12 @@ impl QuartzServer {
        }
    });

    tokio::spawn(async move {
        while let Some(msg) = self.rx.recv().await {
            todo!("{:?}", msg)
```
what's going on here?
Nice catch. I think this code is redundant now. I originally wrote it to give app enclaves a way to communicate with the core service (and to implement the sequence number handling on top of that), but this is no longer required.
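For readers skimming the thread: the loop above was the receiving half of a tokio mpsc channel. A minimal sketch of the pattern it implied, with a hypothetical `EnclaveMsg` type standing in for whatever the app enclaves would have sent (none of these names are from the actual codebase):

```rust
use tokio::sync::mpsc;

// Hypothetical message type; a stand-in for whatever app enclaves
// would have sent to the core service. Not from the actual codebase.
#[derive(Debug)]
enum EnclaveMsg {
    SeqNumUpdate(u64),
}

#[tokio::main]
async fn main() {
    let (tx, mut rx) = mpsc::channel::<EnclaveMsg>(32);

    // The removed todo!() loop would have lived here, dispatching on
    // each message instead of just logging it.
    let core = tokio::spawn(async move {
        while let Some(msg) = rx.recv().await {
            println!("core service received: {msg:?}");
        }
    });

    // An app enclave would hold the sender half and report updates.
    tx.send(EnclaveMsg::SeqNumUpdate(42)).await.unwrap();

    drop(tx); // close the channel so the receive loop exits
    core.await.unwrap();
}
```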
I will push a commit removing this code shortly.
```rust
@@ -203,65 +215,3 @@ pub async fn send_tx(
    let tx_response = client.broadcast_tx(request).await?;
    Ok(tx_response.into_inner())
}

#[cfg(test)]
mod tests {
```
why did we delete the tests?
I originally wrote these tests just as a way to check that the gRPC client code worked against a real testnet. The tests sent a transfers request to a real node, so they were non-deterministic. Also, they were `#[ignore]`d anyway.
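For context on the shape of what was removed, a test like the ones described might have looked roughly like this (a sketch, not the deleted code; the endpoint and body are placeholders):

```rust
#[cfg(test)]
mod tests {
    // Hits a live testnet node, so the result depends on network state;
    // marked #[ignore] so `cargo test` skips it unless run with
    // `cargo test -- --ignored`.
    #[tokio::test]
    #[ignore]
    async fn broadcast_transfer_on_testnet() {
        let grpc_url = "http://testnet-node.example:9090"; // placeholder
        // ... build the gRPC client, construct a transfer tx,
        //     call send_tx(), and assert on the response code ...
        let _ = grpc_url;
    }
}
```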
```rust
if seq_num_diff != pending_sequenced_requests as u64 {
    return Err(Status::failed_precondition(&format!(
        "seq_num_diff mismatch: num({seq_num_diff}) v/s diff({pending_sequenced_requests})"
    )));
}

*seq_num = seq_num_on_chain;
```
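To make the intended invariant concrete, here is a minimal self-contained sketch of the check in the hunk above (the variable names follow the diff; the function boundary and the way `seq_num_diff` is derived upstream are my assumptions):

```rust
use tonic::Status;

// Sketch of the replay-protection check as a standalone function.
fn check_and_update_seq_num(
    seq_num: &mut u64,                 // value cached by the enclave
    seq_num_on_chain: u64,             // latest value read from the chain
    pending_sequenced_requests: usize, // requests queued but not settled
) -> Result<(), Status> {
    // How far the chain has advanced past our cached value.
    let seq_num_diff = seq_num_on_chain.saturating_sub(*seq_num);

    // If the chain advanced by anything other than the number of
    // requests we know to be in flight, something was replayed or
    // dropped, so refuse the request.
    if seq_num_diff != pending_sequenced_requests as u64 {
        return Err(Status::failed_precondition(format!(
            "seq_num_diff mismatch: num({seq_num_diff}) v/s diff({pending_sequenced_requests})"
        )));
    }

    // Otherwise adopt the on-chain value so the cache stays current.
    *seq_num = seq_num_on_chain;
    Ok(())
}
```

Under this invariant, a replayed (old) request shows up as a mismatch between the on-chain delta and the pending count, tripping the failed_precondition arm.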
this seems like something we should test before merging - have you tested it? Is there a way for me to test it?
I wonder if there are timing delays that could lead to this not working. Or what will happen if we send a ton of requests at once?
Yes, I agree. I tested it manually to check that multiple transfers, queries, etc. work as before. But I haven't tested the replay protection per se, because that would require getting the enclave to run on old input. I think the easiest way to do that would be to add debug logs that print the message request wslistener.rs uses to call the transfers enclave's run() function, and then use that to call the run() method directly over gRPC.
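For anyone who wants to try that, a hedged sketch of the replay step using grpcurl (the service path, port, and payload are placeholders to fill in from the captured debug logs, not the actual Quartz endpoints):

```sh
# Re-send a captured request directly to the enclave's run() method.
# Assumes the server exposes gRPC reflection; otherwise pass -proto.
# Service name, port, and JSON body are placeholders: substitute the
# values printed by the wslistener.rs debug logs.
grpcurl -plaintext \
  -d '<captured request JSON>' \
  localhost:11090 \
  transfers.Transfers/Run
```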
> I wonder if there are timing delays that could lead to this not working. Or what will happen if we send a ton of requests at once?
I think this can be tested manually with the frontend. I tried sending multiple messages in quick succession and it worked for me.
Discussed in person: we will make an issue to track the more complex testing, but for now the basic "good user" testing is working.