It can be quite hard at first glance to quantify the level of work involved in each test (json vs db vs query vs fortune vs plaintext).
The `wrk` results additionally report the `Transfer/sec` rate, e.g.:
```
Running 15s test @ http://10.0.0.1:8080/fortunes
  28 threads and 512 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency     1.42ms    1.22ms   26.15ms   95.26%
    Req/Sec    14.31k   776.36    18.36k    71.10%
  Latency Distribution
     50%    1.19ms
     75%    1.45ms
     90%    1.76ms
     99%    7.87ms
  6014809 requests in 15.10s, 7.62GB read
Requests/sec: 398334.02
Transfer/sec:    517.02MB
```
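For context, a minimal sketch of how that figure could be scraped from the raw logs; the function name and regexes here are assumptions based only on the sample output above, not anything the toolset currently provides:

```python
import re

def parse_wrk_rates(raw_output: str) -> dict:
    """Extract the summary rates from raw wrk output.

    Hypothetical helper: the field names are invented, and the
    regexes match the sample wrk output shown above.
    """
    rates = {}
    # wrk ends its report with lines like:
    #   Requests/sec: 398334.02
    #   Transfer/sec:    517.02MB
    m = re.search(r"Requests/sec:\s*([\d.]+)", raw_output)
    if m:
        rates["requests_per_sec"] = float(m.group(1))
    m = re.search(r"Transfer/sec:\s*([\d.]+)([KMG]?B)", raw_output)
    if m:
        rates["transfer_per_sec"] = float(m.group(1))
        rates["transfer_unit"] = m.group(2)  # e.g. "MB"
    return rates

# e.g. parse_wrk_rates(log_text)
# -> {'requests_per_sec': 398334.02, 'transfer_per_sec': 517.02, 'transfer_unit': 'MB'}
```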
If this were added as a column to the dashboard, it would be easier to quantify at a glance the difference between, for example, the db, query, and cached tests.
We don't currently capture this, but I think we could. Going to move the issue to the new toolset.
Another option is to have this data, which is available via the raw logs for every benchmark run, appear in a `div` on hover on the results website.