How to check if Model is on cuda

When I have an object of a class that inherits from nn.Module, is there any way I can check whether the object is on CUDA or not? We can do this for tensors by calling var_name.is_cuda, but no such attribute is available for modules.


As replied on the GitHub issue, an easy way is:

next(model.parameters()).is_cuda # returns a boolean
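As a minimal runnable sketch (using a small nn.Linear as a stand-in for any model):

```python
import torch
import torch.nn as nn

# A tiny example module; any nn.Module works the same way.
model = nn.Linear(4, 2)

# A freshly created model lives on the CPU, so its first
# parameter is not a CUDA tensor.
on_cuda = next(model.parameters()).is_cuda
print(on_cuda)  # False until the model is moved with .cuda() / .to("cuda")
```

The check only inspects the first parameter, which is fine as long as the whole model was moved together.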

Hi, is this still the preferred way in 0.4? Thanks.

I tested it just now, and it works in PyTorch v0.4 as well.

But why is it even necessary? If a model is on cuda and you call model.cuda() it should be a no-op and if the model is on cpu and you call model.cpu() it should also be a no-op.

Yes, it will work because it is verifying the type of the model weights, but I was wondering if there’s a new attribute similar to model.device as is the case for the new tensors in 0.4.

But why is it even necessary? If a model is on cuda and you call model.cuda() it should be a no-op and if the model is on cpu and you call model.cpu() it should also be a no-op.

It’s necessary if you want to make the code compatible with machines that don’t support CUDA. E.g., if you call model.cuda() or sometensor.cuda() on such a machine, you will get a RuntimeError.

Personally, I develop and debug 99% of the code on macOS, and then sync it over to a headless cluster, which is why this pattern is useful to me, for example.

if there’s a new attribute similar to model.device as is the case for the new tensors in 0.4.

Yes. For example, you can now specify the device once at the top of your script:

device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu") 

and then for the model, you can use

model = model.to(device)

The same applies to tensors as well, e.g.:

for features, targets in data_loader:
    features = features.to(device)
    targets = targets.to(device)
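Putting those pieces together, a self-contained device-agnostic sketch might look like this (the batch here is a dummy tensor standing in for a real DataLoader):

```python
import torch
import torch.nn as nn

# Pick the device once; the same script then runs unchanged
# on CPU-only machines and on machines with CUDA.
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")

model = nn.Linear(8, 2).to(device)

# Dummy batch standing in for (features, targets) from a DataLoader.
features = torch.randn(4, 8).to(device)
targets = torch.randint(0, 2, (4,)).to(device)

logits = model(features)
loss = nn.functional.cross_entropy(logits, targets)
print(loss.item())
```

Because both the model and the tensors go through .to(device), no explicit .cuda()/.cpu() calls are needed anywhere.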

It’s an old conversation, but I just had a similar question and found this in the doc notes on best practices within CUDA semantics, so I thought I’d leave it here in case it helps others:

This is the recommended practice when creating modules in which new tensors need to be created internally during the forward pass.

where ‘this’ refers to using one of the new_* methods for creating tensors that preserve the device context.
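To illustrate, here is a small hypothetical module (ScaleShift is an invented name) that creates a tensor inside forward using one of the new_* methods:

```python
import torch
import torch.nn as nn

class ScaleShift(nn.Module):
    # Hypothetical module that needs a new tensor during forward.
    def __init__(self):
        super().__init__()
        self.weight = nn.Parameter(torch.ones(3))

    def forward(self, x):
        # new_zeros inherits both the device and the dtype from x,
        # so this forward pass works unchanged on CPU and CUDA inputs.
        bias = x.new_zeros(x.shape)
        return x * self.weight + bias

m = ScaleShift()
out = m(torch.randn(2, 3))
print(out.device)  # matches the input's device
```

Using x.new_zeros instead of torch.zeros means the module never needs to ask which device it is on.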


You can get the device by:

next(network.parameters()).device
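For instance (a small nn.Sequential standing in for any network), the returned device can then be reused when creating new tensors:

```python
import torch
import torch.nn as nn

network = nn.Sequential(nn.Linear(4, 4), nn.ReLU())

# Device of the first parameter; "cpu" for a freshly created model.
device = next(network.parameters()).device
print(device)

# Handy for creating new tensors on the same device as the model:
x = torch.zeros(1, 4, device=device)
out = network(x)
```

This is often more useful than the boolean is_cuda check, since it gives you the actual torch.device object.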

why doesn’t:

self.device

work if my current object is an nn.Module?

(Pdb) self.device
*** torch.nn.modules.module.ModuleAttributeError: 'MyNN' object has no attribute 'device'

.device is a tensor attribute, as described in the docs, and is not set for nn.Module, since a module can have parameters and buffers spread across different and multiple devices.
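A sketch of a more robust check that accounts for this: instead of looking at a single parameter, collect every device the module's parameters and buffers live on (using nn.BatchNorm1d as an example of a module that also has buffers):

```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(4, 4), nn.BatchNorm1d(4))

# Parameters and buffers can in principle live on different devices,
# so a robust check collects every device actually in use.
devices = {t.device for t in model.parameters()}
devices |= {b.device for b in model.buffers()}
print(devices)  # a single entry, e.g. {device(type='cpu')}, if all on one device
```

If the resulting set has exactly one element, the whole model is on that device; otherwise it is split across devices.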


so is next(network.parameters()).device simply getting the device of the first parameter of the nn.Module?


Yes, that’s correct.
