
feat: add sequence number for replay protection #252

Merged
merged 20 commits into main from hu55a1n1/sequence-number on Nov 7, 2024

Conversation

@hu55a1n1 (Member)

No description provided.

@davekaj (Contributor) commented Oct 21, 2024

Assumed this is still being worked on by Shoaib.

@hu55a1n1 hu55a1n1 marked this pull request as ready for review October 21, 2024 22:16
@davekaj davekaj self-requested a review November 7, 2024 16:40
@davekaj (Contributor) left a comment

It looks good, but I would like to test it before approving, to make sure we get the right count of pending_sequenced_requests so that the sequence number is always up to date, even when many requests are in the queue.

@@ -159,6 +177,12 @@ impl QuartzServer {
            }
        });

        tokio::spawn(async move {
            while let Some(msg) = self.rx.recv().await {
                todo!("{:?}", msg)
@davekaj (Contributor)

What's going on here?

@hu55a1n1 (Member Author)

Nice catch. I think this code is redundant now. I originally wrote it to give app enclaves a way to communicate with the core service (and to use that channel to implement the sequence number handling), but this is no longer required.
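For context, a minimal sketch of the pattern that code was setting up, assuming a hypothetical EnclaveMsg type and channel wiring; only QuartzServer, self.rx, and the spawned receive loop appear in the actual diff:

    use tokio::sync::mpsc;

    // Hypothetical message type an app enclave could send to the core service.
    #[derive(Debug)]
    enum EnclaveMsg {
        SeqNumUpdate(u64),
    }

    struct QuartzServer {
        rx: mpsc::Receiver<EnclaveMsg>,
    }

    impl QuartzServer {
        // Drain messages coming from app enclaves; the removed code spawned a
        // task like this but left the message handler as a todo!().
        async fn run(mut self) {
            while let Some(msg) = self.rx.recv().await {
                println!("core service received: {:?}", msg);
            }
        }
    }

    #[tokio::main]
    async fn main() {
        let (tx, rx) = mpsc::channel(16);
        let server = QuartzServer { rx };
        let handle = tokio::spawn(server.run());

        // An app enclave would hold the sender half of the channel.
        tx.send(EnclaveMsg::SeqNumUpdate(42)).await.unwrap();
        drop(tx); // closing the channel lets the server task exit
        handle.await.unwrap();
    }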

@hu55a1n1 (Member Author)

I will push a commit removing this code shortly.

@@ -203,65 +215,3 @@ pub async fn send_tx(
    let tx_response = client.broadcast_tx(request).await?;
    Ok(tx_response.into_inner())
}

#[cfg(test)]
mod tests {
@davekaj (Contributor)

Why did we delete the tests?

@hu55a1n1 (Member Author)

I originally wrote these tests just as a way to check that the gRPC client code was working against a real testnet. The tests sent a transfers request to a real node, so they were non-deterministic. Also, they were #[ignore]d anyway.
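For reference, a hedged sketch of what one of those ignored, testnet-hitting tests would roughly look like; the endpoint and the commented-out send_tx call are assumptions, not the deleted code:

    #[cfg(test)]
    mod tests {
        // Hypothetical smoke test against a live node; #[ignore]d by default
        // because it needs network access and is non-deterministic.
        #[ignore]
        #[tokio::test]
        async fn send_tx_against_testnet() {
            let _grpc_url = "http://127.0.0.1:9090"; // assumed local testnet endpoint
            // A real test would build and broadcast a signed transfer, e.g.:
            // let resp = send_tx(...).await.unwrap();
            // assert_eq!(resp.code, 0);
        }
    }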

Comment on lines +392 to +398
        if seq_num_diff != pending_sequenced_requests as u64 {
            return Err(Status::failed_precondition(&format!(
                "seq_num_diff mismatch: num({seq_num_diff}) v/s diff({pending_sequenced_requests})"
            )));
        }

        *seq_num = seq_num_on_chain;
@davekaj (Contributor)

This seems like something we should test before merging. Have you tested it? Is there a way for me to test it?

I wonder if there are timing delays that could lead to this not working. Or what will happen if we send a ton of requests at once?

@hu55a1n1 (Member Author)

Yes, I agree. I tested it manually to check that multiple transfers, queries, etc. work as before. But I haven't tested the replay protection per se, because that would require getting the enclave to run on old input. I think the easiest way to do that would be to add debug logs that print the message request that wslistener.rs uses to call the transfers enclave's run() function, and then use that to call the run() method directly over gRPC.
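A minimal sketch of the debug-log half of that idea, assuming a hypothetical RunRequest type standing in for the payload wslistener.rs forwards to the enclave:

    use tracing::debug;

    // Hypothetical stand-in for the request wslistener.rs forwards to run().
    #[derive(Debug)]
    struct RunRequest {
        message: String,
    }

    fn main() {
        tracing_subscriber::fmt()
            .with_max_level(tracing::Level::DEBUG)
            .init();

        let req = RunRequest { message: "<transfer payload>".to_string() };
        // Logging the full request makes it possible to replay the exact same
        // input against the enclave's run() endpoint over gRPC later.
        debug!(?req, "enclave run() request");
    }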

@hu55a1n1 (Member Author)

> I wonder if there are timing delays that could lead to this not working. Or what will happen if we send a ton of requests at once?

I think this can be tested manually with the frontend. I tried sending multiple messages in quick succession and it worked for me.
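To make the check being discussed concrete, here is a self-contained sketch of the reconciliation it performs, using the seq_num, seq_num_on_chain, and pending_sequenced_requests names from the diff; everything else is an assumption, not the Quartz implementation:

    // Toy model of the replay-protection bookkeeping.
    struct SeqState {
        seq_num: u64,                      // last sequence number the enclave applied
        pending_sequenced_requests: usize, // requests queued but not yet settled on chain
    }

    fn check_and_update(state: &mut SeqState, seq_num_on_chain: u64) -> Result<(), String> {
        let seq_num_diff = seq_num_on_chain.saturating_sub(state.seq_num);
        // A replayed or stale message shows up as a gap between the on-chain and
        // cached sequence numbers that does not match the enclave's own pending count.
        if seq_num_diff != state.pending_sequenced_requests as u64 {
            return Err(format!(
                "seq_num_diff mismatch: num({seq_num_diff}) v/s diff({})",
                state.pending_sequenced_requests
            ));
        }
        state.seq_num = seq_num_on_chain;
        Ok(())
    }

    fn main() {
        let mut state = SeqState { seq_num: 5, pending_sequenced_requests: 2 };
        assert!(check_and_update(&mut state, 7).is_ok());  // gap of 2 matches 2 pending requests
        assert!(check_and_update(&mut state, 7).is_err()); // same height again: gap 0 != 2, rejected
    }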

@davekaj (Contributor) left a comment

Discussed in person: we will make an issue to track the more complex testing, but for now the basic "good user" testing is working.

@hu55a1n1 hu55a1n1 merged commit bbc8b8a into main Nov 7, 2024
8 checks passed
@hu55a1n1 hu55a1n1 deleted the hu55a1n1/sequence-number branch November 7, 2024 22:11