Update Magnum projects and switch to CgltfImporter from TinyGltfImporter #1549
Motivation and Context
This brings mainly a switch to CgltfImporter from TinyGltfImporter, which should be a drop-in replacement with equivalent functionality but vastly improved efficiency. More below. Besides that, there are a few web-focused improvements to multidraw functionality.

CgltfImporter
The plots below show parsing of about 600 materials from a half-gigabyte file using magnum-sceneconverter --info-materials --profile. (Importing meshes is mostly about dealing with the data blobs, where the work doesn't really differ between the two plugins, so that's not shown here.)

The new plugin is built on top of a library that not only gives Magnum full control over how data are loaded into memory (unlike TinyGltf, which copied everything into its own memory), but is also capable of parsing the JSON in-place, without copying its contents into its own structures. That results in pretty nice implicit memory savings. The second vs third row in the two plots shows the difference between TinyGltfImporter and CgltfImporter in JSON parsing speed and memory usage under the same conditions.

But wait, there's more! Because the library no longer populates its own internal state with a copy of the input data, this finally enables a workflow where the importer can be told to operate directly on externally owned memory using a new openMemory() API. That's the fourth row in the plots, and the saved copy is a pretty significant saving on its own.

And finally, memory-mapping a file and opening it with openMemory() means only the actually touched parts of the file get paged into memory. In the case of the 439.5 MB file, only the JSON needed to be read and parsed, which resulted in just 6.7 MB of actual physical memory being used. (Yes, similar memory savings could be achieved by reading the file byte-by-byte and seeking as appropriate, but that's quite labor-intensive, and with all the filesystem calls it wouldn't get anywhere near the speed of just accessing a memory-mapped file.)
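For illustration, here's a minimal sketch of the mmap + openMemory() workflow described above. It assumes a recent Corrade with Utility::Path::mapRead() (older versions expose the same functionality through Utility::Directory), and the file name is just an example:

```cpp
#include <Corrade/Containers/Array.h>
#include <Corrade/Containers/Optional.h>
#include <Corrade/Containers/Pointer.h>
#include <Corrade/PluginManager/Manager.h>
#include <Corrade/Utility/Path.h>
#include <Magnum/Trade/AbstractImporter.h>

using namespace Magnum;

int main() {
    PluginManager::Manager<Trade::AbstractImporter> manager;
    Containers::Pointer<Trade::AbstractImporter> importer =
        manager.loadAndInstantiate("CgltfImporter");

    /* Map the file read-only -- only the pages the importer actually touches
       end up in physical memory */
    Containers::Optional<Containers::Array<const char, Utility::Path::MapDeleter>> mapped =
        Utility::Path::mapRead("scene.glb");
    if(!importer || !mapped) return 1;

    /* The importer references the mapped memory directly instead of copying
       it, so the mapping has to stay alive for as long as the importer is
       open -- which is exactly what the resource manager has to guarantee */
    if(!importer->openMemory(*mapped)) return 2;

    /* ... query materials, meshes, etc. ... */
}
```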
Note that, in order to make the PR as non-invasive as possible, I didn't change any openFile() calls to mmap + openMemory() -- that's something the resource manager has to be aware of, ensuring the memory-mapped file doesn't get closed for as long as the importer is used.

The final step in this direction (and what gets used by the batch-rendering-friendly import pipeline) is making MeshData imports zero-copy as well. In short, instead of the importer returning a MeshData with a copy of the given mesh data, it would directly reference the (memory-mapped) file that got passed to openMemory(). Most of the work is already in place, the remaining bits are happening in mosra/magnum#240.
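As a purely hypothetical illustration of what that would mean for a caller (assuming the zero-copy path from mosra/magnum#240 and continuing the sketch above), the ownership would be observable through the data flags on the returned MeshData:

```cpp
#include <Magnum/Trade/MeshData.h>

/* Continues the sketch above -- with zero-copy import the returned MeshData
   wouldn't own its vertex data but would reference the memory passed to
   openMemory() instead, which shows up in its data flags */
Containers::Optional<Trade::MeshData> mesh = importer->mesh(0);
if(mesh && !(mesh->vertexDataFlags() & Trade::DataFlag::Owned)) {
    /* Vertex data point directly into the memory-mapped file */
}
```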
How Has This Been Tested
Types of changes