@Olllom float32 has round-off errors on the order of 1e-8, i.e. it is precise to roughly 1e-7. However, the tests often chain multiple calculations, so the error propagates to a degree that is difficult to estimate analytically. We could simply measure the actual accuracy of each test in a version we trust to be correct and use that observed value as the error margin.
By the way, the same reasoning applies to double precision. How did you estimate the accuracy there?
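A minimal sketch of how that measurement could look, assuming a PyTorch-based computation; `run_test_case` is a hypothetical stand-in for whatever a given test actually computes, not an existing function in the package:

```python
import torch

def run_test_case(dtype):
    # Hypothetical placeholder: many accumulating operations, as in a real test.
    x = torch.linspace(0, 1, 1000, dtype=dtype)
    for _ in range(100):
        x = torch.sin(x) + 1e-3 * x
    return x

# Run once in double precision as the trusted reference, once in single precision.
reference = run_test_case(torch.float64)
single = run_test_case(torch.float32).to(torch.float64)

# The observed worst-case deviation gives an empirical error margin for float32.
empirical_error = (single - reference).abs().max().item()
print(f"observed float32 error: {empirical_error:.2e}")
```

The float32 tolerance of the test could then be set to, say, a small multiple of this observed error rather than a guessed constant.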
This issue was mostly meant as a reminder that we are currently skipping these tests. With the relatively naive scheme we use to represent the distribution functions, single precision is simply not good enough for most flows: errors accumulate and render the whole simulation inaccurate. Some other LB packages implement tricks that allow the use of 32-bit or even 16-bit precision; see for example https://link.aps.org/doi/10.1103/PhysRevE.106.015308
Tolerances are too loose for CUDA and float32.
One solution would be for the `dtype_device` fixture to also return a "base tolerance" that is suitable for the given compute context.
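A minimal sketch of that suggestion, assuming a pytest/PyTorch setup similar to the existing `dtype_device` fixture; the fixture name, parameter combinations, and tolerance values below are illustrative assumptions, not measured numbers:

```python
import pytest
import torch

@pytest.fixture(
    params=[
        (torch.float64, "cpu", 1e-10),
        (torch.float32, "cpu", 1e-5),
        (torch.float32, "cuda", 1e-4),
    ],
    ids=["float64-cpu", "float32-cpu", "float32-cuda"],
)
def dtype_device_tolerance(request):
    """Return (dtype, device, base tolerance) for the given compute context."""
    dtype, device, tol = request.param
    if device == "cuda" and not torch.cuda.is_available():
        pytest.skip("CUDA not available")
    return dtype, device, tol
```

A test would then unpack the tuple and compare against the context-specific tolerance, e.g. `dtype, device, tol = dtype_device_tolerance; assert error < tol`, instead of hard-coding one tolerance for all precisions and devices.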