Hello, very nice work! I was exploring the demo a bit and was wondering whether there is potential to apply this to character/facial 3D generation. It works pretty well on objects in a one-shot setting (especially when they have good axial symmetry), but generating humans/characters is much more challenging.
Do you think, for instance, that in a few-shot setting the diffusion model that generates the novel 2D views could be skipped by providing multiple views up front? Or would it be feasible to fine-tune on a dataset focused specifically on 3D character renderings? Thanks a lot!
Thank you for your interest. Adding characters is on our roadmap. Yes, providing multiple views is possible, as long as they are 3D consistent. We are considering both specific datasets and potential algorithms to handle characters. Please stay tuned.