
Using TPUs #554

Open
hsleiman1 opened this issue Oct 2, 2023 · 1 comment
Labels
gpu question Further information is requested

Comments

@hsleiman1

Hello,

If we plan to use TPUs instead of GPUs, is this possible with the current configuration, or do we need a different one?

Thanks

@beniz
Contributor

beniz commented Oct 2, 2023

Hi, my understanding from the PyTorch/Google TPU docs is that it requires importing XLA and creating a device. So I believe the device needs to be created with:

# imports the torch_xla package
import torch_xla
import torch_xla.core.xla_model as xm

and

device = xm.xla_device()

Then change the device here: https://github.com/jolibrain/joliGEN/blob/master/models/base_model.py#L87
It will also certainly be necessary to guard certain calls behind the use_cuda config in train.py and models/base_model.py.
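The device change described above could be sketched roughly as follows. This is only a hypothetical helper, not joliGEN code: the `use_tpu` flag and the `select_device` function are made-up names for illustration, and the sketch falls back to CUDA/CPU when torch_xla is not installed.

```python
def select_device(use_tpu, gpu_ids):
    """Pick a compute device for the model.

    use_tpu  -- hypothetical config flag (not an existing joliGEN option)
    gpu_ids  -- list of GPU indices, as in joliGEN's opt.gpu_ids
    """
    if use_tpu:
        try:
            # TPU path, per the PyTorch/XLA docs
            import torch_xla.core.xla_model as xm
            return xm.xla_device()
        except ImportError:
            pass  # torch_xla not available; fall through to GPU/CPU
    if gpu_ids:
        # first listed GPU, mirroring the existing base_model.py logic
        return "cuda:%d" % gpu_ids[0]
    return "cpu"
```

The same check could then gate the use_cuda-specific calls in train.py, so CUDA-only code paths are skipped when an XLA device is in use.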

We can look at it, good feature to have!

@beniz beniz added question Further information is requested gpu labels Oct 2, 2023