Hi, I am a newcomer to PyTorch.
Calling .cuda() gives better performance since computation runs on the GPU, and I wonder whether I can use other Python modules during CUDA computation.
I know that we currently cannot use NumPy directly in GPU mode; if I want to do that, I have to call
.cpu().numpy() to move the data to the CPU first.
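For example, a minimal sketch of that round trip (guarded with torch.cuda.is_available() so it also runs on a CPU-only machine):

```python
import torch

# create a tensor; move it to the GPU only if one is available
x = torch.ones(3)
if torch.cuda.is_available():
    x = x.cuda()

# calling .numpy() directly on a CUDA tensor raises an error,
# so the data has to be migrated to the CPU first
arr = x.cpu().numpy()
print(arr)  # [1. 1. 1.]
```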
So I am curious about other modules. For example, using the
queue module like this:

import queue
import torch

x = torch.cuda.FloatTensor([1.0])
q = queue.Queue()
q.put(x)
The code above runs without any errors. So does it actually run on the GPU? In other words, have I created a Queue object on the GPU? Is it advisable to do that? Does it mean that some modules are supported by torch.cuda while others are not?
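To poke at what the queue actually holds, I tried a torch-free sketch (FakeTensor below is my own stand-in for a CUDA tensor, not a PyTorch class) showing that queue.Queue just stores Python references and does not copy or move any data:

```python
import queue

class FakeTensor:
    """Stand-in for a tensor; only records which device it claims to be on."""
    def __init__(self, device):
        self.device = device

x = FakeTensor("cuda:0")
q = queue.Queue()
q.put(x)

y = q.get()
# the queue hands back the very same object; nothing was migrated
print(y is x)        # True
print(y.device)      # cuda:0
```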
What's more, if I hard-code a list, a dict, or something else (e.g.
x = [1, 2, 3, 4, 5]) in my model code, will that
list be migrated to GPU memory automatically when I call .cuda() on the model?
Thank you very much.