
DoS limits #14

Open · wants to merge 1 commit into master

Conversation

@chjj (Member) commented Apr 11, 2017

After running some numbers, I redefined transaction "cost". What's specified in this PR allows a max extension block size of 6mb and an average case ext. block size limit of 2mb. I think this is fairly reasonable, but I'd like feedback. We could always start lower (1mb) and do a soft-fork upgrade to 2mb later.

As for code, the dev branch has also been updated to reflect these changes.

@chjj mentioned this pull request Apr 11, 2017
@jujumax commented Apr 12, 2017

What block validation target times are we looking for, average and worst case?

@ftrader commented Apr 15, 2017

A costly mistake was made in Bitcoin before: encoding a fixed block size limit into the protocol during a time of crisis.

In the interest of avoiding further mistakes, let's discuss this.

  1. Could you please provide the numbers, how they were obtained, and how you ran them to arrive at these limits (source data and methodology)?

  2. Could you provide the rationale? You clearly think a constraint below the technical limitations of whatever an implementation can provide is needed, but why (reasoning)?

Thanks.

@nomnombtc commented

It would be nice if the extension block size limit could be controlled by a vote mechanism, kind of like in BIP100.

@chjj (Member, Author) commented May 9, 2017

My original calculations were off a bit. Here is the data for the last 1000 ~1mb mainchain blocks when applying the "cost" algorithm to them: https://gist.github.com/chjj/af70a21b539746efbb5a6f724a3715af

This treats regular P2PKH as P2WPKH and P2SH as P2WSH. It also calculates each input script's size as if it were a witness vector.

Average:

txs: 2132
inputs: 4629
outputs: 5222
input-cost: 4904701
output-cost: 1043437
cost: 5948139
sigops-cost: 48104

Median:

txs: 2186
inputs: 4569
outputs: 5197
input-cost: 4929624
output-cost: 1041368
cost: 5931513
sigops-cost: 48120
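
A rough sketch of how tallies like the ones above can be produced (this is not the PR's code -- the real implementation is on the dev branch -- and the per-input/per-output weights below are placeholders, not the spec's cost definition; the only assumptions carried over are that input scripts are sized as if they were witness vectors, and that cost is the sum of input and output cost, which matches the averages):

```ts
// Illustrative only: tally per-block counts and costs the way the stats above
// are laid out. The weighting constants are hypothetical placeholders.

interface Input  { script: Uint8Array; witness: Uint8Array[] }
interface Output { script: Uint8Array; value: number }
interface Tx     { inputs: Input[]; outputs: Output[] }
interface Block  { txs: Tx[] }

function inputCost(input: Input): number {
  // Size the input script as if it were part of the witness vector.
  const witnessBytes = input.witness.reduce((n, w) => n + w.length, 0);
  return 160 + input.script.length + witnessBytes; // 160 = hypothetical base
}

function outputCost(output: Output): number {
  return 4 * (9 + output.script.length); // hypothetical per-output weighting
}

function sigopsCost(tx: Tx): number {
  return tx.inputs.length; // placeholder; real counting inspects the scripts
}

interface BlockStats {
  txs: number; inputs: number; outputs: number;
  inputCost: number; outputCost: number; cost: number; sigopsCost: number;
}

function tally(block: Block): BlockStats {
  const stats: BlockStats = {
    txs: block.txs.length, inputs: 0, outputs: 0,
    inputCost: 0, outputCost: 0, cost: 0, sigopsCost: 0,
  };
  for (const tx of block.txs) {
    stats.inputs += tx.inputs.length;
    stats.outputs += tx.outputs.length;
    for (const input of tx.inputs) stats.inputCost += inputCost(input);
    for (const output of tx.outputs) stats.outputCost += outputCost(output);
    stats.sigopsCost += sigopsCost(tx);
  }
  // Assumes total cost = input cost + output cost, as the averages suggest.
  stats.cost = stats.inputCost + stats.outputCost;
  return stats;
}
```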

If we simply want a straight capacity increase, this means that an average case block could be limited to roughly 2mb with something like:

MAX_EXTENSION_COST: 12000000
MAX_EXTENSION_SIGOPS_COST: 100000
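
As a quick illustration of how those caps relate to the figures above (a sketch, not bcoin's validation code; only the two constants come from this comment, everything else is hypothetical):

```ts
// Illustrative only -- not bcoin's actual validation code.
const MAX_EXTENSION_COST = 12_000_000;      // proposed above
const MAX_EXTENSION_SIGOPS_COST = 100_000;  // proposed above

function checkExtensionLimits(stats: { cost: number; sigopsCost: number }): void {
  // 12,000,000 divided by the ~5.95m average cost measured above for a ~1mb
  // block is roughly 2, which is where the ~2mb average-case figure comes from.
  if (stats.cost > MAX_EXTENSION_COST)
    throw new Error('extension block exceeds cost limit');
  if (stats.sigopsCost > MAX_EXTENSION_SIGOPS_COST)
    throw new Error('extension block exceeds sigops cost limit');
}
```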

Whether to treat the output cost more harshly is another question that I'm sure will come up (and has). I don't have strong feelings either way. As far as the technicals go, the idea of keeping pressure off leveldb can be appealing. Going that route would simply turn cost into a more dynamic/smarter version of weight, though.

I'm currently working on some benchmarks for worst and average case blocks with these updated numbers. From my initial findings, verification of an average case 1.7mb extension block at the current cost limit (mentioned above) takes about 900-1000ms in bcoin on an i7. This didn't properly account for JIT warmup though, so that number may be a lot higher than it should be.

Truly testing mainnet validation time would require creating a chain database which stores 45m+ utxos. Every utxo adds a key to the database, which can trigger another recursive split of nodes up the branch, increase the depth of the leaves, and so on -- at least in, say, a B+ tree. I'm not as familiar with leveldb's LSM tree as I am with more traditional database structures, but the point stands: more keys means slower key lookups. It will take a lot of extra work to accurately estimate verification times.
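
For what it's worth, a benchmark along these lines could be a starting point for that measurement. This is only a sketch, not part of the PR: it assumes the `level` package's promise-based API (v8+), and the store path, key format, and key counts are made up.

```ts
// Rough sketch: fill a disk-backed LevelDB store with synthetic outpoint-style
// keys and time random lookups as the key count grows. Assumes the `level`
// package's v8+ promise API; adjust for whichever version bcoin actually uses.
import { Level } from 'level';

function fakeOutpoint(n: number): string {
  // 32-byte txid stand-in (hex) plus an output index.
  return n.toString(16).padStart(64, '0') + ':0';
}

async function main(): Promise<void> {
  const db = new Level<string, string>('./utxo-bench', { valueEncoding: 'utf8' });
  const total = 1_000_000; // scale this toward 45m+ to approximate mainnet

  // Insert synthetic "txid:index" keys in batches.
  const batchSize = 10_000;
  for (let i = 0; i < total; i += batchSize) {
    const ops: { type: 'put'; key: string; value: string }[] = [];
    for (let j = i; j < Math.min(i + batchSize, total); j++) {
      ops.push({ type: 'put', key: fakeOutpoint(j), value: 'coin' });
    }
    await db.batch(ops);
  }

  // Time random lookups; repeating this at increasing `total` shows how the
  // number of keys affects lookup latency.
  const samples = 10_000;
  const start = process.hrtime.bigint();
  for (let i = 0; i < samples; i++) {
    await db.get(fakeOutpoint(Math.floor(Math.random() * total)));
  }
  const ms = Number(process.hrtime.bigint() - start) / 1e6;
  console.log(`${samples} lookups over ${total} keys: ${ms.toFixed(1)} ms`);

  await db.close();
}

main().catch(console.error);
```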

Personally, I think a lot of that work is partly meaningless. As far as how this works in practice, there's no way to tell without introducing it into a system with real economic actors involved. No matter how many benchmarks we run or simulations we create, they won't replicate bitcoin as it exists. For this reason, I think it's important to avoid bikeshedding too much on this issue. We can try to make it as safe as possible, but putting it out there is always the real test for anything.
