Which Python modules can I use after calling model.cuda()?

Hi, I am a newcomer to PyTorch.

Since cuda() gives better performance by running on the GPU, I wonder if I can call other modules during CUDA computation.

I know that we currently cannot use NumPy directly on GPU tensors; if I want to do that, I have to call .cpu().numpy() to move the data to the CPU first.
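For instance, a minimal sketch of that round trip (assuming a CUDA-capable machine):

import torch

x = torch.cuda.FloatTensor([1.0, 2.0])
# x.numpy() would raise an error here, since the data lives on the GPU
arr = x.cpu().numpy()  # copy to host memory first, then convert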

So I am curious about other modules. For example, using the queue module as follows:

import queue

import torch

x = torch.cuda.FloatTensor([1.0])  # a tensor allocated on the GPU
q = queue.Queue()
q.put(x)  # put the GPU tensor into an ordinary Python queue

The code above didn’t produce any errors. So does it actually run on the GPU? Does that mean I have created a Queue object on the GPU? Is it advisable to do that? Does this mean that some modules are supported by torch CUDA while others are not?

What’s more, if I hard-code a list, dict, or something else (e.g. x = [1, 2, 3, 4, 5]) in my model code, will it be moved to GPU memory automatically when I call model.cuda()?

Thank you very much.

What are you trying to achieve using the queue?
Have a look at the Multiprocessing Best Practices, as it might get tricky.

No, plain Python types won’t be pushed to the GPU. You would have to create a tensor and register it as an nn.Parameter (if trainable) or a buffer (using self.register_buffer, if not trainable).
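A minimal sketch of both options (MyModel and the attribute names here are just for illustration):

import torch
import torch.nn as nn

class MyModel(nn.Module):
    def __init__(self):
        super().__init__()
        # trainable tensor: moved by .cuda() and updated by the optimizer
        self.scale = nn.Parameter(torch.ones(5))
        # non-trainable tensor: also moved by .cuda() and saved in the state_dict
        self.register_buffer("values", torch.tensor([1.0, 2.0, 3.0, 4.0, 5.0]))
        # a plain Python list: ignored by .cuda(), stays in host memory
        self.plain = [1, 2, 3, 4, 5]

model = MyModel().cuda()
print(model.scale.device)   # cuda:0
print(model.values.device)  # cuda:0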

Thank you very much! Actually, I’m trying to implement a simple breadth-first search algorithm; I’ll see if I can speed it up using torch.multiprocessing.
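For reference, a minimal sketch of passing a CUDA tensor through a torch.multiprocessing queue (the worker function is hypothetical; sharing CUDA tensors requires the spawn start method, and the producer must keep the tensor alive while the consumer uses it):

import torch
import torch.multiprocessing as mp

def worker(q):
    t = q.get()  # receives a tensor backed by the same GPU memory
    print(t.device, t + 1)

if __name__ == "__main__":
    mp.set_start_method("spawn")  # required when sharing CUDA tensors
    q = mp.Queue()
    p = mp.Process(target=worker, args=(q,))
    p.start()
    x = torch.ones(3, device="cuda")
    q.put(x)
    p.join()  # keep x alive until the worker is done with it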