Which device is model / tensor stored on?

Yes, although it might be a bit misleading in some special cases.
If your model is stored on a single GPU, you can simply print the device of one parameter, e.g.:

print(next(model.parameters()).device)

However, you could also use model sharding and split the model among a few GPUs.
In that case you would have to check all parameters for their devices, as in the sketch below.
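
Here is a minimal sketch of that check. It assumes two GPUs are available, and ShardedModel is just a made-up toy module for illustration, not an API from any library:

import torch
import torch.nn as nn

# Hypothetical toy model manually sharded across two GPUs (assumes cuda:0 and cuda:1 exist).
class ShardedModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc1 = nn.Linear(10, 10).to("cuda:0")
        self.fc2 = nn.Linear(10, 10).to("cuda:1")

    def forward(self, x):
        x = self.fc1(x.to("cuda:0"))
        return self.fc2(x.to("cuda:1"))

model = ShardedModel()

# Collect the device of every parameter; more than one entry means the model is sharded.
devices = {p.device for p in model.parameters()}
print(devices)  # e.g. {device(type='cuda', index=0), device(type='cuda', index=1)}

If the resulting set contains more than one device, the model is split across devices and there is no single device you could report for the whole model.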
