"No available kernel. Aborting execution." I installed all of the requirements from requirements.txt and made sure to install torch 2.4 with cuda 12.4 enabled
#28
Open
MinervaArgus opened this issue on Oct 28, 2024 · 1 comment
(allegro) D:\PyShit\Allegro>python single_inference.py ^
More? --user_prompt "A seaside harbor with bright sunlight and sparkling seawater, with many boats in the water. From an aerial view, the boats vary in size and color, some moving and some stationary. Fishing boats in the water suggest that this location might be a popular spot for docking fishing boats." ^
More? --save_path ./output_videos/test_video.mp4 ^
More? --vae D:\PyShit\Allegro\allegro\models\vae ^
More? --dit D:\PyShit\Allegro\allegro\models\transformer ^
More? --text_encoder D:\PyShit\Allegro\allegro\models\text_encoder ^
More? --tokenizer D:\PyShit\Allegro\allegro\models\tokenizer ^
More? --guidance_scale 7.5 ^
More? --num_sampling_steps 100 ^
More? --seed 42
Loading checkpoint shards: 100%|█████████████████████████████████████████████████████████| 2/2 [03:14<00:00, 97.17s/it]
Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
0%| | 0/100 [00:00<?, ?it/s]D:\PyShit\Allegro\allegro\models\transformers\block.py:824: UserWarning: 1Torch was not compiled with flash attention. (Triggered internally at C:\actions-runner_work\pytorch\pytorch\builder\windows\pytorch\aten\src\ATen\native\transformers\cuda\sdp_utils.cpp:555.)
hidden_states = F.scaled_dot_product_attention(
D:\PyShit\Allegro\allegro\models\transformers\block.py:824: UserWarning: Memory efficient kernel not used because: (Triggered internally at C:\actions-runner_work\pytorch\pytorch\builder\windows\pytorch\aten\src\ATen\native\transformers\cuda\sdp_utils.cpp:723.)
hidden_states = F.scaled_dot_product_attention(
D:\PyShit\Allegro\allegro\models\transformers\block.py:824: UserWarning: Memory Efficient attention has been runtime disabled. (Triggered internally at C:\actions-runner_work\pytorch\pytorch\builder\windows\pytorch\aten\src\ATen/native/transformers/sdp_utils_cpp.h:495.)
hidden_states = F.scaled_dot_product_attention(
D:\PyShit\Allegro\allegro\models\transformers\block.py:824: UserWarning: Flash attention kernel not used because: (Triggered internally at C:\actions-runner_work\pytorch\pytorch\builder\windows\pytorch\aten\src\ATen\native\transformers\cuda\sdp_utils.cpp:725.)
hidden_states = F.scaled_dot_product_attention(
D:\PyShit\Allegro\allegro\models\transformers\block.py:824: UserWarning: CuDNN attention kernel not used because: (Triggered internally at C:\actions-runner_work\pytorch\pytorch\builder\windows\pytorch\aten\src\ATen\native\transformers\cuda\sdp_utils.cpp:727.)
hidden_states = F.scaled_dot_product_attention(
D:\PyShit\Allegro\allegro\models\transformers\block.py:824: UserWarning: The CuDNN backend needs to be enabled by setting the environment variable TORCH_CUDNN_SDPA_ENABLED=1 (Triggered internally at C:\actions-runner_work\pytorch\pytorch\builder\windows\pytorch\aten\src\ATen\native\transformers\cuda\sdp_utils.cpp:497.)
hidden_states = F.scaled_dot_product_attention(
0%| | 0/100 [01:47<?, ?it/s]
Traceback (most recent call last):
File "D:\PyShit\Allegro\single_inference.py", line 99, in
single_inference(args)
File "D:\PyShit\Allegro\single_inference.py", line 65, in single_inference
out_video = allegro_pipeline(
^^^^^^^^^^^^^^^^^
File "C:\Users\offic\miniconda3\envs\allegro\Lib\site-packages\torch\utils_contextlib.py", line 116, in decorate_context
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "D:\PyShit\Allegro\allegro\pipelines\pipeline_allegro.py", line 773, in call
noise_pred = self.transformer(
^^^^^^^^^^^^^^^^^
File "C:\Users\offic\miniconda3\envs\allegro\Lib\site-packages\torch\nn\modules\module.py", line 1553, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\offic\miniconda3\envs\allegro\Lib\site-packages\torch\nn\modules\module.py", line 1562, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\PyShit\Allegro\allegro\models\transformers\transformer_3d_allegro.py", line 331, in forward
hidden_states = block(
^^^^^^
File "C:\Users\offic\miniconda3\envs\allegro\Lib\site-packages\torch\nn\modules\module.py", line 1553, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\offic\miniconda3\envs\allegro\Lib\site-packages\torch\nn\modules\module.py", line 1562, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\PyShit\Allegro\allegro\models\transformers\block.py", line 1093, in forward
attn_output = self.attn1(
^^^^^^^^^^^
File "C:\Users\offic\miniconda3\envs\allegro\Lib\site-packages\torch\nn\modules\module.py", line 1553, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\offic\miniconda3\envs\allegro\Lib\site-packages\torch\nn\modules\module.py", line 1562, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\PyShit\Allegro\allegro\models\transformers\block.py", line 553, in forward
return self.processor(
^^^^^^^^^^^^^^^
File "D:\PyShit\Allegro\allegro\models\transformers\block.py", line 824, in call
hidden_states = F.scaled_dot_product_attention(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
RuntimeError: No available kernel. Aborting execution.
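The warnings above suggest that none of the fused scaled_dot_product_attention backends (flash, memory-efficient, cuDNN) are usable in this Windows build, and the math fallback also appears to be disabled, which is what produces the "No available kernel" RuntimeError. As a rough diagnostic sketch (using the public torch.backends.cuda API, not anything from the Allegro repo), one could check which backends are enabled and re-enable the math fallback before running single_inference.py:

# Possible diagnostic / workaround sketch (my own, not from the Allegro repo):
# check which scaled_dot_product_attention backends this PyTorch build exposes
# and re-enable the math fallback so SDPA has at least one usable kernel.
import torch
import torch.nn.functional as F

print(torch.__version__, torch.version.cuda)
print("flash_sdp_enabled:", torch.backends.cuda.flash_sdp_enabled())
print("mem_efficient_sdp_enabled:", torch.backends.cuda.mem_efficient_sdp_enabled())
print("math_sdp_enabled:", torch.backends.cuda.math_sdp_enabled())

# Allow the plain (slower) math implementation as a fallback.
torch.backends.cuda.enable_math_sdp(True)

# Quick self-test: this should run even when the fused kernels are unavailable.
q = k = v = torch.randn(1, 8, 16, 64, device="cuda", dtype=torch.float16)
print(F.scaled_dot_product_attention(q, k, v).shape)

If the self-test passes, placing the same enable_math_sdp(True) call near the top of single_inference.py (before the pipeline runs) should let the run proceed, at the cost of slower attention.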