We want to ensure that Vello reliably renders as expected, even in the face of potentially significant refactors (e.g. #574), and with unusual scenes (e.g. #542). Additionally, as our pipeline changes (e.g. #607), we need to ensure that the CPU shaders stay up to date (especially if/when we get #485). This requires automated testing.
Some groundwork was laid in #439, which got us running on an emulated Mesa GPU on Linux, and physical M1 GPUs on macOS. However, the tests added there are extremely simple, doing property-based testing on a small handful of scenes. We also want to do snapshot testing. This would involve snapshots of "hand-written" scenes, such as scenes emulating a Masonry scene or exercising tricky cases, plus a number of SVG scenes. The canonical results for these tests would be stored in this repository [1]. We discovered in linebender/xilem#233 that our rendering isn't completely stable cross-platform (presumably due to fast math), so we need to compare the images using something like nv-flip, as done by wgpu.
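As a concrete illustration of why exact byte equality is the wrong comparison for cross-platform snapshots, here is a minimal sketch of a tolerance-based comparison. Note this is plain mean absolute difference, not the perceptual metric nv-flip computes; the function name and tolerance are illustrative only.

```rust
/// Mean absolute per-channel difference between two same-sized RGBA8 buffers,
/// as a fraction of the 0-255 range. A real harness would use a perceptual
/// metric such as nv-flip; this plain mean difference only illustrates why
/// some tolerance is needed at all.
fn mean_abs_diff(reference: &[u8], candidate: &[u8]) -> f64 {
    assert_eq!(reference.len(), candidate.len());
    let total: u64 = reference
        .iter()
        .zip(candidate)
        .map(|(&a, &b)| u64::from(a.abs_diff(b)))
        .sum();
    total as f64 / (reference.len() as f64 * 255.0)
}

fn main() {
    let reference = [10u8, 200, 30, 255];
    // One channel off by one, as fast-math differences tend to produce.
    let candidate = [10u8, 201, 30, 255];
    let diff = mean_abs_diff(&reference, &candidate);
    // A small tolerance accepts this pair...
    assert!(diff < 0.01, "images differ more than tolerance: {diff}");
    // ...where an exact byte comparison would have rejected it outright.
    assert_ne!(reference, candidate);
}
```

A perceptual metric improves on this by weighting differences by how visible they are to a human observer, rather than treating every channel delta equally.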
There are some additional kinds of tests which could be performed. These are longer-term plans, and should get their own issues if/when we decide to pursue them. They include:
- Diffing against previous CI artefacts. This is what Bevy is exploring in bevyengine/bevy#13248 ("Compare screenshots with main on PRs"). This would allow using significantly more tests, but we would need to set up infrastructure for comparing and approving these images (as Bevy's infrastructure is closed and proprietary).
- Comparing the CPU and GPU pipelines directly, removing the need to store an on-disk snapshot.
Fixes #608
This does not add a wide range of tests; I imagine that the
`test_scenes` will provide some good fodder here.
There are some quality-of-life features included:
- You can use `VELLO_TEST_CREATE=all` to create new tests. It is
  recommended to set this on your local machine (it won't be set on
  CI).
- You can use `VELLO_TEST_UPDATE=all` to update snapshots with new contents when a
  test fails. (Note that if a minor change still passes the test,
  this won't update the snapshot.)
These variables can also filter to specific tests if needed.
The saved snapshots are always created using the GPU renderer, but it is
set up to be easy to use the CPU renderer for tests as well.
Using `cargo nextest` to run the tests is recommended; indeed, it now
seems to be required, as the tests segfault without it.
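Putting the pieces above together, a typical local workflow might look like the following (the comments restate behaviour described above; the exact per-test filter syntax is not shown here):

```shell
# Generate snapshots for every snapshot test
# (intended for local machines; not set on CI).
VELLO_TEST_CREATE=all cargo nextest run

# After an intentional rendering change, regenerate
# the snapshots of any tests that now fail.
VELLO_TEST_UPDATE=all cargo nextest run

# Both variables can also name specific tests instead of `all`.
```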
Footnotes
1. We might be better off doing something like piet-snapshots or
Git LFS. We get 1 GiB of LFS bandwidth per month, which might be a problem for CI. piet-snapshots is less likely to run into this, because we get free bandwidth. See e.g. https://estebangarcia.io/how-we-saved-on-github-lfs-bandwidth/