
Using NVIDIA RTX 3090 GPU? #26

Open
znmeb opened this issue Dec 16, 2021 · 0 comments

znmeb commented Dec 16, 2021

I'm fortunate enough to have a machine with an NVIDIA RTX 3090 GPU. However, the GPU-enabled binary builds of PyTorch 1.6.0 distributed by the PyTorch project won't run on the 3090, and probably won't run on any RTX 3000 series GPU: the prebuilt binaries don't appear to include kernels for the Ampere architecture.

PyTorch 1.7.0 does run on my 3090, so I've built a virtual environment with that and torchaudio 0.7.0. I started training on the "LJ" dataset to see if it worked, and it appeared to be functioning; it used about 11.5 GB of GPU RAM and about 45% of the GPU's compute. Do you anticipate any other problems with PyTorch 1.7.0, or should I go ahead with training on my own dataset?
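
In case it helps, here's the quick sanity check I used before kicking off training (a minimal sketch using standard PyTorch APIs; the expected values in the comments are just what I'd anticipate on a 3090, not anything specific to this repo):

```python
import torch

# Report the PyTorch build and the CUDA toolkit it was compiled against.
print("PyTorch:", torch.__version__)        # expect 1.7.0
print("CUDA (build):", torch.version.cuda)  # expect 11.x for Ampere support

# Confirm the GPU is visible and check its compute capability.
assert torch.cuda.is_available(), "CUDA device not visible to PyTorch"
print("Device:", torch.cuda.get_device_name(0))                     # e.g. GeForce RTX 3090
print("Compute capability:", torch.cuda.get_device_capability(0))   # (8, 6) on Ampere

# A tiny matmul on the GPU; if the binary lacked kernels for this card,
# a "no kernel image is available" style error would typically show up here.
x = torch.randn(1024, 1024, device="cuda")
y = x @ x
torch.cuda.synchronize()
print("GPU matmul OK:", tuple(y.shape))
```

With the 1.6.0 binaries this kind of check fails for me at the GPU operation, while the 1.7.0 environment passes it cleanly.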
