Which device is a model / tensor stored on?

Hi,
I have such a simple method in my model

    def get_normal(self, std):
        if <here I need to know which device is used>:
            # sample standard-normal noise on the GPU
            eps = torch.cuda.FloatTensor(std.size()).normal_()
        else:
            # sample standard-normal noise on the CPU
            eps = torch.FloatTensor(std.size()).normal_()
        return Variable(eps).mul(std)

For this to work, the method needs to know which device is currently in use (CPU or GPU).

I was looking for something like model.is_cuda, but different tensors can be placed on different devices, so I expected something like std.device_context to exist; I haven’t found such a method either.

What is recommended to handle such situations?


So far I am using std.is_cuda (which seems to be an OK solution when there is a single GPU), but better options are welcome.

The common way is to start your code with:

    use_cuda = torch.cuda.is_available()

Then, each time you create a new instance of any tensor/variable/module, just do:

    if use_cuda:
        my_object = my_object.cuda()  # rebinding works for both tensors and modules

That way you make sure everything is stored on the GPU when one is available; by default, without calling .cuda(), everything stays on the CPU.
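
For example, the pattern looks like this (a minimal sketch; MyModel is a placeholder for your own module class). Note that Module.cuda() moves parameters in place and returns self, whereas Tensor.cuda() returns a copy, so rebinding keeps both cases correct:

    import torch

    use_cuda = torch.cuda.is_available()

    model = MyModel()            # hypothetical module class
    x = torch.randn(4, 3)
    if use_cuda:
        model = model.cuda()     # modules: parameters are moved in place
        x = x.cuda()             # tensors: .cuda() returns a copy, so rebind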

The common way is to start your code with:

Yup, I’ve noted this (this code is based on the PyTorch examples).

However, there are different situations to handle, and I want the model code to be isolated from its environment (e.g. placed in a separate Python module), so this doesn’t look like a general solution.

Just came up with a better idea: tensor.new(sizes).normal_(0, 1) seems to be the right way to get Gaussian noise on the right device.
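
Applied to the method above, that gives something like this (a minimal sketch; on recent PyTorch versions torch.randn_like(std) does the same thing in one call):

    def get_normal(self, std):
        # new() allocates a tensor on the same device and with the same
        # dtype as std; normal_() fills it with standard-normal samples in place
        eps = std.new(std.size()).normal_(0, 1)
        return eps.mul(std)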

This is a feature people have wanted for quite a long time. I don’t see any reason why PyTorch hasn’t provided an API as simple as .device() to return the device a model/variable/tensor resides on.


I guess you could use this:

    cuda_check = my_tensor.is_cuda
    if cuda_check:
        # get_device() returns the index of the CUDA device holding the tensor
        get_cuda_device = my_tensor.get_device()

Which version are you using? In the 0.2.1 release I can’t find a .get_device() API.

You can only use the get_device() API if the Tensor is a CUDA Tensor.
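
For example (assuming a CUDA device is available; the behavior on CPU tensors varies by version, with older releases raising an error and recent ones returning -1):

    import torch

    x = torch.randn(3).cuda()
    print(x.get_device())   # index of the GPU holding x, e.g. 0

    y = torch.randn(3)      # CPU tensor
    # y.get_device() errors on older versions (returns -1 on recent ones)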


OK, I get it, thanks a lot.

I would also appreciate having a .device() function I could call that would work for models, tensors, etc.


It would be convenient in many cases to have a device() method that returns the exact device a model/tensor is located on; then

    some_tensor.to(some_model.device())

would be an elegant solution for many functions (which accept a model as input and run some inference on it).
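
In the meantime, a hypothetical helper along these lines works when the model lives on a single device (module_device is not a PyTorch API, just an illustration):

    def module_device(module):
        # infer the module's device from its first parameter;
        # assumes the module has parameters and they all share one device
        return next(module.parameters()).device

    some_tensor = some_tensor.to(module_device(some_model))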


You could try

    a = torch.randn(10).to('cuda:0')   # place a on the first GPU
    b = torch.randn(10).to(a.device)   # place b on whatever device a lives on

Is there a way to get the device of a module (a torchvision model or a criterion)?


Yes, although it might be a bit misleading in some special cases.
If your model is stored on just one GPU, you can simply print the device of one parameter, e.g.:

    print(next(model.parameters()).device)

However, you could also use model sharding and split the model among a few GPUs.
In that case you would have to check all parameters for their devices.
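
A short sketch of that check, collecting every device the parameters live on:

    devices = {p.device for p in model.parameters()}
    if len(devices) == 1:
        device = devices.pop()   # the single device the whole model lives on
    else:
        print("model is sharded across", devices)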


I am using copy.deepcopy to instantiate multiple instances of the same model (with the same initialization parameters).
I would like to confirm whether deepcopy preserves this device placement, or whether I still have to check it somehow.
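
A quick way to check (assuming a GPU is available): deepcopy does copy CUDA parameters onto the same device, so the clone ends up where the original was.

    import copy
    import torch

    model = torch.nn.Linear(2, 2).cuda()
    clone = copy.deepcopy(model)
    # deepcopy preserves device placement, so this prints cuda:0
    print(next(clone.parameters()).device)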
