Thanks for all the work, and apologies for complaining, but I am concerned that the errors on Rt predicted by this method are too small. The dashboard currently claims we know the upper bound on Rt in the WC to better than 10%, which is implausible given the known issues with testing.
Fundamentally, I am not sure the Bettencourt et al. formalism makes sense for a time-dependent Rt as it stands. The algorithm uses each day's posterior as the next day's prior. That only makes sense if every day is measuring a single quantity, R0. If the parameter differs from day to day, this does not make sense unless you assume successive days are correlated, in which case you need to include the covariance structure explicitly, which I don't see anywhere. Alternatively, the model should be a hierarchical Bayesian model. Either way, the current formalism is not sufficient in my estimation.
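To make the correlation point concrete, here is a rough grid-based sketch, NOT the dashboard's actual code: the gamma value, R grid, random-walk sigma, and case counts are all illustrative assumptions. It contrasts the posterior-as-prior scheme with a Gaussian random-walk prior on Rt, which is one explicit way of encoding day-to-day correlation:

```python
import numpy as np
from scipy.stats import norm, poisson

# Illustrative grid posterior over R_t with a Bettencourt-style Poisson
# likelihood. GAMMA (reciprocal serial interval), the grid, and the case
# counts below are assumptions for demonstration only.
GAMMA = 1 / 7.0
r_grid = np.linspace(0.01, 6.0, 601)

def likelihood(k_today, k_yesterday, gamma=GAMMA):
    """Poisson likelihood of today's count given yesterday's, per grid point."""
    lam = k_yesterday * np.exp(gamma * (r_grid - 1.0))
    return poisson.pmf(k_today, lam)

def update(prior, k_today, k_yesterday, sigma=None):
    """One day's Bayesian update.

    sigma=None is the posterior-as-prior scheme, which implicitly treats
    R_t as a single constant R0. A finite sigma instead models R_t as a
    Gaussian random walk (successive days correlated but not identical),
    by convolving the prior with a N(0, sigma) transition kernel.
    """
    if sigma is not None:
        kernel = norm.pdf(r_grid[:, None] - r_grid[None, :], scale=sigma)
        prior = kernel @ prior
        prior = prior / prior.sum()
    post = prior * likelihood(k_today, k_yesterday)
    return post / post.sum()

cases = [100, 120, 150, 160, 170, 180]  # made-up daily counts
post_static = np.ones_like(r_grid) / r_grid.size
post_walk = post_static.copy()
for k_prev, k in zip(cases, cases[1:]):
    post_static = update(post_static, k, k_prev)          # constant-R0 assumption
    post_walk = update(post_walk, k, k_prev, sigma=0.25)  # random-walk R_t

def sd(p):
    """Standard deviation of a normalised grid posterior."""
    m = (r_grid * p).sum()
    return np.sqrt(((r_grid - m) ** 2 * p).sum())
```

The random-walk posterior ends up wider than the posterior-as-prior one: it does not accumulate spurious certainty from days that may really have been measuring different values of Rt.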
That fundamental problem aside, there are several additional sources of error that should be included in some way, because their impact is really big:
(1) The completely unknown testing protocols/efficiency. This obviously has a huge effect. One possible solution would be to estimate Rt from deaths instead, which are hopefully more reliable.
(2) The uncertain infectiousness time, Gamma. We still don't know how long people are infectious to better than about 40-70% (or even exactly what that means, given that viral loads are a strong function of time). This uncertainty means we don't know the expected number of new cases per day (lambda). In other words, the algorithm should ideally marginalise over Gamma rather than assume a single value that we don't know well.
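As a sketch of what marginalising over Gamma could look like (the gamma grid, its weights, and the case counts below are illustrative assumptions, not measured values), one can mix the fixed-Gamma posteriors over a prior on the infectious period:

```python
import numpy as np
from scipy.stats import poisson

# Hypothetical sketch of marginalising over Gamma rather than fixing it.
# The gamma grid and weights (a 4-10 day infectious period) and the case
# counts are assumptions for illustration only.
r_grid = np.linspace(0.01, 6.0, 601)
gammas = 1.0 / np.array([4.0, 5.5, 7.0, 8.5, 10.0])  # reciprocal infectious periods
g_weights = np.array([0.1, 0.2, 0.4, 0.2, 0.1])      # assumed prior over Gamma

def posterior_fixed_gamma(cases, gamma):
    """Grid posterior over R for one fixed gamma (uniform prior, Poisson likelihood)."""
    post = np.ones_like(r_grid)
    for k_prev, k in zip(cases, cases[1:]):
        lam = k_prev * np.exp(gamma * (r_grid - 1.0))  # expected cases given R, gamma
        post = post * poisson.pmf(k, lam)
    return post / post.sum()

def posterior_marginal(cases):
    """Marginalise over Gamma: mix the fixed-gamma posteriors with the prior weights."""
    mix = sum(w * posterior_fixed_gamma(cases, g) for w, g in zip(g_weights, gammas))
    return mix / mix.sum()

cases = [100, 120, 150, 160]  # made-up daily counts
fixed = posterior_fixed_gamma(cases, 1 / 7.0)
marg = posterior_marginal(cases)

def sd(p):
    m = (r_grid * p).sum()
    return np.sqrt(((r_grid - m) ** 2 * p).sum())
```

The marginal posterior comes out wider than the fixed-Gamma one, which is exactly the extra uncertainty the current error bars appear to be missing.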
One very simple approach would be to propagate generous errors on the infectiousness time through to Rt and add them in quadrature to the statistical errors. I think they will dominate.
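A back-of-envelope version of this quadrature suggestion, assuming the simple exponential-growth relation R = 1 + g/Gamma (an assumption for illustration; the dashboard's actual estimator may differ, and all numbers here are made up):

```python
import numpy as np

# Propagate a generous Gamma uncertainty into R and combine it in
# quadrature with the statistical error. All values are illustrative.
g = 0.03                    # assumed daily exponential growth rate of cases
gamma = 1 / 7.0             # assumed Gamma: 7-day infectious period
sigma_gamma = 0.5 * gamma   # ~50% uncertainty on Gamma, per the 40-70% above
sigma_stat = 0.05           # assumed statistical error on R from the fit

R = 1 + g / gamma
# Error propagation: dR/dGamma = -g / Gamma**2
sigma_from_gamma = abs(g / gamma**2) * sigma_gamma
# Combine in quadrature with the statistical error
sigma_total = np.sqrt(sigma_stat**2 + sigma_from_gamma**2)
```

With these illustrative numbers the Gamma term (about 0.11 on R = 1.21) is roughly twice the statistical term, so it does indeed dominate the combined error.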
Thanks!
b
I haven't looked at the method used to calculate Rt, but just looking at the graphs for the Western Cape, the results look implausible: the estimate jumps from 0 on 19 Jul to 1.8 on 23 Jul, despite case numbers not having surged. Possibly it is not robust to data-capture artefacts, such as results from an area not being updated for a day, so that there appear to be very few cases one day and then twice as many as usual the next.