Confusion about torch.cuda.device context manager

Hey folks,
I was wondering if someone could clarify some aspects of the torch.cuda.device context manager.

For instance, let’s say we have a dummy function, and we want either to send the whole function to cuda, or to have whatever tensors are created/manipulated inside that dummy function automatically sent to cuda.

import torch

def dummy_func(input, output):
    y = input * output
    # tmp is created on the CPU by default, even if input/output live on the GPU
    tmp = torch.ones(input.size(0), output.size(2))
    y.scatter_(1, tmp, 1)

What I would like is for everything inside dummy_func to operate on cuda, without having to explicitly add .to(device) or .cuda() to each new tensor.

Here is what I’ve tried:

@torch.cuda()
def dummy_func(...):

and

def dummy_func(...):
    with torch.cuda.device(0):
        ...


But neither of those gave the expected outcome.
Is there any way to achieve this, so as to avoid having to manually add .cuda() to every possible tensor?

Hi,

The torch.cuda.device() context manager changes the default cuda device, so that any cuda-related function will use that device. It does not move existing CPU tensors to the GPU.
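For example (a minimal sketch, assuming a machine with at least one CUDA device):

import torch

x = torch.ones(2)                     # created on the CPU
with torch.cuda.device(0):
    y = torch.ones(2, device="cuda")  # "cuda" resolves to the current device, cuda:0
    z = torch.ones(2)                 # no device given: still created on the CPU
print(x.device, y.device, z.device)   # cpu cuda:0 cpu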

We don’t have anything that forces all Tensors to move to cuda. This is mainly because it is quite an expensive operation, and the user should be aware of it and avoid moving Tensors back and forth between the CPU and GPU.

I understand, but think of the following scenario.
We have a model net that takes inputs and targets, called as net(inputs, targets).
So far so good; things are easy, since we can call net.to(device), inputs.to(device), and targets.to(device) accordingly.
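That is, the usual pattern, sketched here with a stand-in nn.Linear model (the real net, inputs, and targets would come from your own code):

import torch
import torch.nn as nn

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

net = nn.Linear(4, 2)        # stand-in for the real model
inputs = torch.randn(8, 4)   # stand-in data
targets = torch.randn(8, 2)

net = net.to(device)
inputs, targets = inputs.to(device), targets.to(device)
loss = ((net(inputs) - targets) ** 2).mean()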

Things start to get hairier when we want to manipulate some aspects of the model, as shown in the dummy_func example above.

Every torch.ones, torch.zeros, and whatnot has to be accompanied by .cuda().

Most of the time this will raise errors if, by mistake, we have at least one tensor on the cpu while the rest are on the gpu, or vice versa.
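For instance, a hypothetical snippet that trips over exactly this:

import torch

if torch.cuda.is_available():
    a = torch.randn(2, 2).cuda()
    b = torch.randn(2, 2)   # forgot .cuda()
    c = a + b               # raises a RuntimeError complaining about mismatched devices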

I think this defeats the purpose of model.to(device), doesn’t it?

We would expect whatever happens inside the model (whatever ops) to be sent to the device, no?

But you will see the same problem if you change the input type.
In general, you want to do:

tmp = torch.ones(input.size(0), output.size(2), dtype=input.dtype, device=input.device)

to make sure you get a Tensor of the same type and same device as input.
If you need the same size (not the case in this example), you can do tmp = torch.ones_like(input) to get the same size/dtype/device.
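Put together, a small sketch of this pattern (make_mask is a made-up helper, just for illustration):

import torch

def make_mask(input):
    # new tensors follow `input`: same dtype and same device,
    # so this works unchanged on CPU or GPU
    ones = torch.ones(input.size(0), 4, dtype=input.dtype, device=input.device)
    same = torch.ones_like(input)  # same size/dtype/device as input
    return ones, same

x = torch.randn(3, 4)                 # cpu
ones, same = make_mask(x)             # both on cpu
if torch.cuda.is_available():
    ones, same = make_mask(x.cuda())  # both on cuda:0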

Ok, I didn’t know that functions like torch.ones_like would automatically put things on the same device. Thanks!
But I think we’re still missing something like:

with torch.cuda(device):
    # everything created in here would automatically be on the given cuda device

or something like

@torch.cuda(device)
def some_func():
    # send whatever is created in here to the given cuda device

Unless there exists something for such cases and I’ve completely missed it?

There isn’t, but that’s by choice :smiley:

You could do something like torch.set_default_tensor_type(torch.cuda.FloatTensor), but this is strongly advised against. Everything you create will then be on the GPU (temporary stuff for printing, internal buffers, …), and you most likely don’t want these on the GPU, as ops on them will be much slower.
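For completeness, a sketch of what that looks like (again, discouraged; assumes a CUDA device is available):

import torch

torch.set_default_tensor_type(torch.cuda.FloatTensor)

t = torch.ones(2, 3)   # created on the GPU without any .cuda()
print(t.device)        # cuda:0

# restore the usual CPU default afterwards
torch.set_default_tensor_type(torch.FloatTensor)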