Here is how to run this on Mac with Apple Silicon #21
Comments
Hey @YAY-3M-TA3, when performing inference with mps, what precision did you use?
To fit in 24 GB, I used torch.float16. I was finally able to solve the black-screen video by upcasting the key to float32 in the motion_module attention function, so now I have everything running on MPS. I can process images at 480x480 and below in 7-17 seconds per frame (25 steps, 16-frame animation), so animations finish in 2 to 8 minutes. My problem now: I can't process 512x512 images as fast (they take as long as CPU processing because they don't fit in memory, so it caches). Ideally I want that size, since the model was trained on it. So now I'm looking for things I can optimize, memory-wise...
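The upcast fix described above can be sketched in plain NumPy. This is a hypothetical illustration, not PIA's actual motion_module code: the function name and shapes are my own, but it mirrors the idea of computing the attention softmax in float32 even when the inputs are float16.

```python
import numpy as np

def attention_fp32_softmax(q, k, v):
    """Hypothetical sketch of the MPS black-frame fix: upcast the
    half-precision query/key to float32 before the softmax, then
    cast the result back to float16."""
    q32, k32 = q.astype(np.float32), k.astype(np.float32)
    scores = q32 @ k32.T / np.sqrt(q.shape[-1])
    scores -= scores.max(axis=-1, keepdims=True)  # numerically stable softmax
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)
    return (weights @ v.astype(np.float32)).astype(np.float16)

# usage with random half-precision tensors (seq_len=8, dim=16)
rng = np.random.default_rng(0)
q = rng.standard_normal((8, 16)).astype(np.float16)
k = rng.standard_normal((8, 16)).astype(np.float16)
v = rng.standard_normal((8, 16)).astype(np.float16)
out = attention_fp32_softmax(q, k, v)
```

Keeping the softmax in float32 avoids the overflow/NaN results that half precision can produce, which is one plausible cause of all-black frames.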
Referring to https://huggingface.co/docs/diffusers/optimization/mps, maybe you can use one of the optimizations listed there.
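One memory-saving technique from the diffusers docs is attention slicing (`pipe.enable_attention_slicing()`), which computes attention a few heads at a time instead of all at once, lowering peak memory at a small speed cost. Here is a rough NumPy illustration of the idea; the function names and shapes are mine, not diffusers' internals:

```python
import numpy as np

def full_attention(q, k, v):
    # reference path: all heads at once (peak memory ~ heads * seq * seq)
    scores = q @ k.transpose(0, 2, 1) / np.sqrt(q.shape[-1])
    scores -= scores.max(axis=-1, keepdims=True)  # stable softmax
    w = np.exp(scores)
    w /= w.sum(axis=-1, keepdims=True)
    return w @ v

def sliced_attention(q, k, v, slice_size=1):
    # same result, but only slice_size heads of score matrices
    # are materialized at any moment
    out = np.empty_like(q)
    for i in range(0, q.shape[0], slice_size):
        s = slice(i, i + slice_size)
        out[s] = full_attention(q[s], k[s], v[s])
    return out

# 4 heads, seq_len 8, head dim 16
rng = np.random.default_rng(1)
q, k, v = (rng.standard_normal((4, 8, 16)) for _ in range(3))
sliced = sliced_attention(q, k, v)
full = full_attention(q, k, v)
```

The output is numerically identical to the unsliced version; only the peak working-set size changes, which is exactly what helps when 512x512 spills out of unified memory.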
How do you solve the decord dependency?
Here is how to run this on Mac with Apple Silicon
In a terminal window:
Now install this special version of PyTorch, torchvision, and torchaudio to allow Conv3D on Apple Silicon:
For the dependencies, create a new text file called requirements.txt and copy this into that text file:
Now install requirements by typing:
pip install -r requirements.txt
Next, open the merges.txt link in a browser and save the merges.txt file into the PIA-cpu folder.
Next, you need to make some code modifications:
In the file PIA-cpu/animatediff/pipelines/i2v_pipeline.py,
change lines 270-273 to the following:
Finally, you need to make a change to a module. To find the module, type:
conda env list
Then look for the path with pia-cpu
pia-cpu /Users/name/miniforge3/envs/pia-cpu
The file to modify is lib/python3.10/site-packages/transformers/models/clip/tokenization_clip.py under that environment path.
Add the following at line 303 of tokenization_clip.py and save:
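Rather than hunting through site-packages by hand, a small helper (hypothetical, not part of PIA) can print the exact path of an installed module's source file. It is shown here with the stdlib json module so it runs anywhere; the same call with "transformers.models.clip.tokenization_clip" will locate the file to edit in your pia-cpu environment:

```python
import importlib.util

def module_path(name):
    """Return the filesystem path of an importable module, or None."""
    spec = importlib.util.find_spec(name)
    return spec.origin if spec else None

# e.g. module_path("transformers.models.clip.tokenization_clip")
json_path = module_path("json")
print(json_path)
```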
merges_file = "merges.txt"
Now, to run the PIA Gradio demo, go to the PIA-cpu folder and run:
python app.py
NOTE: This will run on the Mac CPU only.
(The lighthouse example will take 42 minutes on a Mac M2 with 24 GB.)
I tried to move everything to MPS, but it only rendered a black video... I don't know why.