
New column for dashboard #6

Open
benaadams opened this issue Jan 5, 2021 · 2 comments
@benaadams
At first glance it can be quite hard to quantify the level of work between tests (json to db to query to fortune to plaintext).

The wrk results additionally report the Transfer/sec rate, e.g.

Running 15s test @ http://10.0.0.1:8080/fortunes
  28 threads and 512 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency     1.42ms    1.22ms  26.15ms   95.26%
    Req/Sec    14.31k   776.36    18.36k    71.10%
  Latency Distribution
     50%    1.19ms
     75%    1.45ms
     90%    1.76ms
     99%    7.87ms
  6014809 requests in 15.10s, 7.62GB read
Requests/sec: 398334.02
Transfer/sec:    517.02MB

If this were added as a column to the dashboard, it would be easier to quantify at a glance the difference between, for example, db, query, and cached queries.
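For illustration only, a minimal sketch of how the toolset might extract this value from the raw wrk text output. This is not part of the TFB toolset; the function name and the unit table are assumptions, and it only handles the unit suffixes wrk typically prints (B/KB/MB/GB).

```python
import re

# Hypothetical helper: parse the "Transfer/sec" line from raw wrk
# output and normalize it to MB/s so different runs are comparable.
UNIT_TO_MB = {"B": 1 / (1024 ** 2), "KB": 1 / 1024, "MB": 1.0, "GB": 1024.0}

def transfer_rate_mb(wrk_output: str) -> float:
    """Return the Transfer/sec value from wrk output, in MB/s."""
    match = re.search(r"Transfer/sec:\s+([\d.]+)(GB|MB|KB|B)", wrk_output)
    if match is None:
        raise ValueError("no Transfer/sec line found in wrk output")
    value, unit = float(match.group(1)), match.group(2)
    return value * UNIT_TO_MB[unit]

sample = "Requests/sec: 398334.02\nTransfer/sec:    517.02MB"
print(transfer_rate_mb(sample))  # 517.02
```

Note the alternation tries `GB`/`MB`/`KB` before `B`, since a bare-`B` pattern placed first would match the `B` inside `MB`.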

@NateBrady23 (Member)

We don't currently capture this, but I think we could. Going to move the issue to the new toolset.

@NateBrady23 NateBrady23 transferred this issue from TechEmpower/FrameworkBenchmarks Jan 5, 2021
@msmith-techempower (Member)

Another option is to have this data, which is available via the raw logs for every benchmark run, appear in a div on hover on the results website.
