I did notice this started when we moved our node over. I'm not sure whether it's because our node has technically been running for a shorter timeframe than the others, since we destroyed and rebuilt it. I initially attributed it to that, but perhaps it is the way the metric is exported.
Just for clarity's sake, this node has the exact same machine size, disk size, and networking configuration as the rest of the partner nodes. I mirrored the Pagoda environment 1-for-1 specifically to avoid any such issues.
I hypothesize that Grafana calculates the hourly rate (`increase()`) by dividing the total count by 60 minutes. Since our node is "newer" than the other nodes, there is a significant difference between its total iteration count and theirs: the other nodes have months of iterations, while we only have about 27 days' worth.
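A minimal sketch of that hypothesis, under the assumption that the panel divides a counter's cumulative total by a fixed window rather than taking a windowed delta (the node totals and per-hour rate below are hypothetical numbers for illustration):

```python
def cumulative_per_hour(total_count, window_minutes=60):
    """Naive rate: the counter's cumulative total divided by the window length."""
    return total_count / (window_minutes / 60)

def windowed_increase_per_hour(count_now, count_one_hour_ago):
    """What a windowed increase() approximates: the delta over the last hour."""
    return count_now - count_one_hour_ago

# Hypothetical numbers: an older node has ~90 days of iteration history,
# ours has ~27 days, but both currently do ~100 iterations per hour.
old_node_total = 100 * 24 * 90
new_node_total = 100 * 24 * 27

# The naive calculation diverges with counter age alone...
print(cumulative_per_hour(old_node_total))  # 216000.0
print(cumulative_per_hour(new_node_total))  # 64800.0

# ...while the windowed delta is identical for both nodes.
print(windowed_increase_per_hour(old_node_total, old_node_total - 100))  # 100
print(windowed_increase_per_hour(new_node_total, new_node_total - 100))  # 100
```

If the dashboard really computed the value the first way, a rebuilt node would look like an outlier until its counter history caught up, which matches what we're seeing.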
That would also explain why the other nodes are not exactly aligned with each other, since it took about a week for all of our partners to update.
Description
This behavior was observed on both Testnet and Mainnet. It can lead to failures in all protocols.