The right way of using the SDXL motion model to get good quality output #382
@F0xbite Thank you for the information!
Glad to help. The only other one that I know of is HotshotXL. Hotshot does have better visible quality, but it's limited to 8 rendered frames max, and I don't think it's possible to loop context, both of which are huge caveats for me. Also, the quality of the motion seems rather poor and distorted in my testing, but that's just my opinion. There's also SVD, but it's strictly an image-to-video model with no prompting and basically no control over motion. So unfortunately, I don't know of a better solution than the hybrid system I'm using now, until a better motion model is trained for XL or the Flux team releases some kind of text2video model. But I'm sure that's bound to change at some point.
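For anyone wanting to reproduce the general idea of that hybrid outside ComfyUI: the sketch below is only one plausible interpretation (an SD1.5 AnimateDiff motion pass followed by a per-frame SDXL img2img detail pass), not necessarily F0xbite's exact workflow, and the model IDs and prompt are placeholders.

```python
# Hypothetical sketch of a "hybrid" pipeline: SD1.5 AnimateDiff for motion,
# then an SDXL img2img pass per frame for detail. Model IDs are assumptions.
import torch
from diffusers import (AnimateDiffPipeline, MotionAdapter,
                       StableDiffusionXLImg2ImgPipeline)
from diffusers.utils import export_to_gif

prompt = "a girl with silver hair, cherry blossoms, wind"  # hypothetical

# 1) Motion pass: SD1.5 base checkpoint + the v1.5 motion adapter.
adapter = MotionAdapter.from_pretrained(
    "guoyww/animatediff-motion-adapter-v1-5-2", torch_dtype=torch.float16)
anim = AnimateDiffPipeline.from_pretrained(
    "emilianJR/epiCRealism", motion_adapter=adapter,
    torch_dtype=torch.float16).to("cuda")
frames = anim(prompt=prompt, num_frames=16, num_inference_steps=25,
              guidance_scale=7.5).frames[0]

# 2) Detail pass: lightly re-noise each frame through SDXL img2img.
#    Low strength keeps the motion coherent while adding XL-level detail.
refiner = StableDiffusionXLImg2ImgPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16).to("cuda")
refined = [refiner(prompt=prompt, image=f.resize((1024, 1024)),
                   strength=0.35).images[0] for f in frames]

export_to_gif(refined, "hybrid.gif")
```

Keeping the img2img strength low matters here: push it too high and the per-frame pass destroys temporal consistency, so the animation flickers.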
Thanks a lot for sharing this @F0xbite, would love to use your hybrid workflow above if you could share 🔥
I have built on the same workflow and have exactly the same issue with seemingly low-res output (even though it's rendered at 1024x1024).
foxbite_hybrid_animatediff.json
@F0xbite Thank you for sharing! 👍
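Side note for anyone who wants to run the attached JSON headlessly instead of through the browser UI: ComfyUI exposes an HTTP /prompt endpoint on its local server. A small sketch follows, assuming the default port and that the workflow has been re-saved in ComfyUI's API format, since the endpoint doesn't accept plain UI graph exports.

```python
# Hypothetical: queue a shared workflow JSON against a local ComfyUI server.
# Note: the /prompt endpoint expects the *API-format* export
# ("Save (API Format)" in the UI); a UI-format graph export will be rejected.
import json
import urllib.request

with open("foxbite_hybrid_animatediff.json") as f:  # the attachment above
    workflow = json.load(f)

payload = json.dumps({"prompt": workflow}).encode("utf-8")
req = urllib.request.Request(
    "http://127.0.0.1:8188/prompt",  # default ComfyUI address/port
    data=payload, headers={"Content-Type": "application/json"})
with urllib.request.urlopen(req) as resp:
    print(resp.read().decode())  # returns the queued prompt_id on success
```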
Hi, first of all I'm very grateful for this wonderful work, AnimateDiff is really awesome 👍
I've been stuck on a quality issue for several days when using the SDXL motion model. Although the motion is very nice, the video quality seems quite low, as if it were pixelated or downscaled. Here is a comparison of an SDXL image and an AnimateDiff frame:
Both images use the same size configuration. I'm using the ComfyUI workflow adapted from here: https://civitai.com/articles/2950, with the Animagine XL V3.1 model & VAE (you can save the image below and import it in ComfyUI):
I tried different numbers of steps, width & height settings, samplers, and guidance values, but had no luck.
I know the SDXL motion model is still in beta, but I can't get results as good as the example in the README. Is there anything I'm doing wrong here? 😢 Could anyone show the right way of using the SDXL model? Thank you in advance.
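For reference, a minimal diffusers-based run can help isolate whether the softness comes from the ComfyUI graph or from the beta SDXL motion module itself. This is a sketch under assumed model IDs (a Hub re-host of the SDXL beta adapter and the stock SDXL base), not the workflow from the Civitai article; the prompt is a placeholder.

```python
# Hypothetical minimal SDXL AnimateDiff run (diffusers equivalent of the
# ComfyUI setup above) to isolate the beta motion module from workflow issues.
# The motion-adapter repo ID, base model, and prompt are assumptions.
import torch
from diffusers import AnimateDiffSDXLPipeline, DDIMScheduler, MotionAdapter
from diffusers.utils import export_to_gif

adapter = MotionAdapter.from_pretrained(
    "a-r-r-o-w/animatediff-motion-adapter-sdxl-beta", torch_dtype=torch.float16)
model_id = "stabilityai/stable-diffusion-xl-base-1.0"
scheduler = DDIMScheduler.from_pretrained(
    model_id, subfolder="scheduler", clip_sample=False,
    timestep_spacing="linspace", beta_schedule="linear", steps_offset=1)
pipe = AnimateDiffSDXLPipeline.from_pretrained(
    model_id, motion_adapter=adapter, scheduler=scheduler,
    torch_dtype=torch.float16).to("cuda")

# The knobs varied in the report: steps, size, sampler (scheduler), guidance.
frames = pipe(prompt="1girl, cherry blossoms, upper body",  # hypothetical
              negative_prompt="low quality, blurry",
              num_inference_steps=25, guidance_scale=8.0,
              width=1024, height=1024, num_frames=16).frames[0]
export_to_gif(frames, "sdxl_animatediff.gif")
```

If this bare-bones run shows the same pixelated output at 1024x1024, that would point at the beta motion module itself rather than the workflow settings.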