Convert a mesh to a signed distance field for the VFX Graph in real time.
See the `MeshToSDF/Demo.unity` scene for an example of how to use it.
- Drag the MeshToSDF prefab into your scene.
- Either set a mesh for the `Mesh` field or set a `SkinnedMeshRenderer` for the `Skinned Mesh Renderer` field.
- Enter play mode and set the offset and scale so that the mesh is placed within the SDF where you want it to be. Copy these values back into edit mode.
- Outputs:
  - VFX graph output - set the `Vfx Output` field to a VFX graph and the `Vfx Property` field to an exposed Texture3D parameter of the VFX graph.
  - Material output - same as the VFX graph output, but with a material. There's a `Slice Texture 3D` material in the `Editor` folder that can be used to debug the SDF. Put it on a plane and assign it to the Material output property to see a slice of the SDF.
  - Script output - the SDF is available in the `outputRenderTexture` field of the component. The distance is stored in the RGB channels of an RGBAFloat texture. Note that if you update the `offset`, `scale` or `sdfResolution` fields in a build, you also have to set `meshToSdfComponent.fieldsChanged = true`.
- Convert the triangle mesh into voxels
- There are many "correct" ways to do this, for instance by iterating over the voxels that each triangle might intersect with and testing if it does intersect with any of them.
- But it's faster to just sample a bunch of quasi-randomly distributed points on each triangle and mark the voxels they land in as filled, hoping we sample enough points to capture the whole surface. I use the R_2 sequence for this.
- This would also be possible with a geometry or tessellation shader: split the triangles until they are below the voxel resolution, then mark the voxel containing each vertex as filled.
- Flood fill the voxel texture using the Jump Flooding Algorithm (JFA). This creates a Voronoi diagram with each filled voxel acting as a seed, AKA an unsigned distance field of the voxels.
- Subtract some constant from the unsigned distance field to thicken the surface a little bit.
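The pipeline above can be sketched in plain Python for illustration (the project itself does this in HLSL compute shaders); `GRID`, `SAMPLES` and `THICKEN` are made-up values here, not the project's settings:

```python
import math

GRID = 16        # voxels per axis (illustrative)
SAMPLES = 64     # fixed sample count per triangle (illustrative)
THICKEN = 0.5    # constant subtracted from the distance, in voxels (illustrative)

# R_2 sequence: fractional multiples of inverse powers of the plastic constant
# (the real root of x^3 = x + 1).
PLASTIC = 1.32471795724474602596
A1, A2 = 1.0 / PLASTIC, 1.0 / PLASTIC ** 2

def sample_triangle(a, b, c, n):
    """n-th quasi-random point on triangle abc (vertices in the unit cube)."""
    u, v = (n * A1) % 1.0, (n * A2) % 1.0
    if u + v > 1.0:                       # fold the unit square onto the triangle
        u, v = 1.0 - u, 1.0 - v
    return tuple(a[i] + u * (b[i] - a[i]) + v * (c[i] - a[i]) for i in range(3))

def voxelize(triangles):
    """Mark the voxel containing each sample as filled."""
    filled = set()
    for a, b, c in triangles:
        for n in range(SAMPLES):
            p = sample_triangle(a, b, c, n)
            filled.add(tuple(min(GRID - 1, max(0, int(p[i] * GRID)))
                             for i in range(3)))
    return filled

def _d2(p, q):
    return sum((p[i] - q[i]) ** 2 for i in range(3))

def jump_flood(filled):
    """JFA: propagate the nearest seed to every voxel in log2(GRID) passes."""
    nearest = {v: v for v in filled}      # voxel -> closest known seed voxel
    step = GRID // 2
    while step >= 1:
        nxt = dict(nearest)               # read previous pass, write next pass
        for i in range(GRID):
            for j in range(GRID):
                for k in range(GRID):
                    best = nxt.get((i, j, k))
                    for di in (-step, 0, step):
                        for dj in (-step, 0, step):
                            for dk in (-step, 0, step):
                                q = nearest.get((i + di, j + dj, k + dk))
                                if q is None:
                                    continue
                                if best is None or _d2((i, j, k), q) < _d2((i, j, k), best):
                                    best = q
                    if best is not None:
                        nxt[(i, j, k)] = best
        nearest = nxt
        step //= 2
    return nearest

def distance_field(triangles):
    """Unsigned distance (in voxels) to the sampled surface, thickened a bit."""
    nearest = jump_flood(voxelize(triangles))
    return {v: math.sqrt(_d2(v, s)) - THICKEN for v, s in nearest.items()}
```

On the GPU each JFA pass is one compute dispatch reading the previous pass's texture, rather than the nested loops here.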
This section is primarily for the VFX Graph.
- Add the MultiMeshToSDF script to an object.
- Assign the default parameters of the compute shaders (see the MeshToSDF prefab).
- Assign your `VFX Property` (the SDF) and `VFX Transform Property` (the center and scale of the bounding box).
- Assign your skinned meshes in the `Skinned Meshes` parameter.
Multi mesh builds on the core MeshToSDF but generates a dynamically scaled 3D SDF based on the skinned mesh renderers (SMRs) present in the parameters. Each currently active SMR is combined into a single mesh, and a single combined bounding box is created. The bounding box gives you the scale information, which is passed to the VFX graph through the transform parameter (rotation is not needed because the bounding box is in world space; a gizmo is drawn to show the current bounding box). The bounding box is then scaled down slightly to ensure it sits within the full SDF region (this is user-modifiable).
It could easily be extended to include static meshes; I had no need for them, but feel free to add them if you wish.
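The bounding box combination can be sketched as follows, in Python for illustration (the component itself is C#); `padding` here is a stand-in for the user-modifiable margin that keeps the meshes inside the SDF volume:

```python
def combine_bounds(bounds, padding=1.1):
    """bounds: iterable of (min_xyz, max_xyz) world-space AABBs, one per
    active skinned mesh renderer. Returns (center, size) of a single box
    `padding` times larger than the tight combined box, so the meshes sit
    safely inside the SDF region. The 1.1 default is a made-up example."""
    mins = tuple(min(b[0][i] for b in bounds) for i in range(3))
    maxs = tuple(max(b[1][i] for b in bounds) for i in range(3))
    center = tuple((mins[i] + maxs[i]) * 0.5 for i in range(3))
    size = tuple((maxs[i] - mins[i]) * padding for i in range(3))
    return center, size
```

The resulting center and size are what gets forwarded to the VFX graph through the transform property; no rotation is needed since the box is axis-aligned in world space.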
- Currently the SDFs are hollow; however, the VFX Graph treats all SDFs as hollow anyway.
- The same sample count is used for all triangles, but smaller triangles can get away with fewer samples.
- Should benchmark whether a dynamic-length loop that samples and writes to fewer locations is faster than a fixed-length, compiler-unrolled loop that samples and writes to too many.
- Since we know the vertex normals, we could write them into the voxels and have the VFX graph use the flood-filled SDF normals, rather than recomputing them per particle per timestep. However, this would require reimplementing the Conform To SDF block in the VFX graph to sample the SDF gradient instead of computing it with a 3-tap approximation.
- Instead of each thread writing `numSamples` times into the voxel array, spawn `numTriangles * numSamples` threads and have each thread write one sample into the array. Maybe this is faster?
- Try the geometry shader technique. It used to be annoying to do this, but apparently it is easier in HDRP.
- There's no need for the dependency on the VFX graph except for the demo scene.
- The above statement is slightly less true for the multi-mesh version.