Hello,
If we plan to use TPUs instead of GPUs, is that possible with the current config, or should we use a different configuration?
Thanks
Hi, my understanding from the PyTorch/Google TPU docs is that it requires importing XLA and creating a device. So I believe the device needs to be created explicitly:

```python
# import the torch_xla package
import torch_xla
import torch_xla.core.xla_model as xm
```

and

```python
device = xm.xla_device()
```
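For reference, here is a minimal, self-contained sketch of a training step on the XLA device. This is generic PyTorch/XLA usage (the model and optimizer are placeholders, not joliGEN code):

```python
import torch
import torch_xla.core.xla_model as xm

device = xm.xla_device()

# any nn.Module moves to the TPU core the same way as to a CUDA device
model = torch.nn.Linear(10, 2).to(device)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

x = torch.randn(8, 10, device=device)
loss = model(x).sum()
loss.backward()

# xm.optimizer_step applies the update and (with barrier=True) forces the
# lazily-built XLA graph to execute; on TPU it replaces plain optimizer.step()
xm.optimizer_step(optimizer, barrier=True)
```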
Then change the device here: https://github.com/jolibrain/joliGEN/blob/master/models/base_model.py#L87. It will also almost certainly be necessary to guard some CUDA-specific calls behind the `use_cuda` config in `train.py` and `models/base_model.py`; see the sketch below.
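As a starting point, here is a hypothetical sketch of how the device selection could be generalized. The `use_tpu` flag and the `gpu_ids` option name are assumptions for illustration; the actual option names and the code at the linked line may differ:

```python
# Hypothetical sketch of a generalized device picker for base_model.py.
# Assumes an opt.use_tpu flag and the existing gpu_ids option; the real
# joliGEN options may be named differently.
import torch

def get_device(opt):
    if getattr(opt, "use_tpu", False):
        # TPU path: requires the torch_xla package to be installed
        import torch_xla.core.xla_model as xm
        return xm.xla_device()
    if opt.gpu_ids:
        # existing GPU path
        return torch.device("cuda:{}".format(opt.gpu_ids[0]))
    return torch.device("cpu")
```

CUDA-only calls (e.g. `torch.cuda.*` utilities) would then need to be skipped or replaced on the TPU path as well.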
We can look into it; good feature to have!