Creating a tensor on the CPU and then transferring it to the GPU

Hi,

This might be a fundamentally absurd question, but I just wanted to understand what I am not getting right.

So, to use the GPU to train the model, I did the following obvious things:

inputs,labels = data[0].to(torch_device),data[1].to(torch_device)

and this worked well.

However, before this, I had tried the following:

inputs,labels = data
inputs.to(torch_device)
labels.to(torch_device)

but this did not work as when I checked

inputs.is_cuda

it returned False.

After another look, I thought I understood the difference: inputs was already a tensor and I was trying to transfer that to the GPU, which failed, whereas when I transfer data, a list, to the GPU first and then create the tensors there, it works perfectly.

Is there a specific reason why a tensor created on the cpu can’t be transferred as is to the GPU?

Thanks

.to() is not an in-place operation. You have to use inputs = inputs.to(torch_device) (and likewise for labels).
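A minimal sketch of this behavior: Tensor.to() returns a new tensor and leaves the original untouched, so the result must be assigned back to a name. The example below uses a dtype change so it runs without a GPU, but a device move like .to("cuda") behaves the same way.

```python
import torch

t = torch.zeros(3)           # created on the CPU, dtype float32
moved = t.to(torch.float64)  # .to() returns a NEW tensor

print(t.dtype)      # the original tensor is unchanged: torch.float32
print(moved.dtype)  # the returned tensor has the new dtype: torch.float64

# The idiomatic fix: rebind the name to the tensor that .to() returns.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
t = t.to(device)
print(t.device)
```

This is also why calling inputs.to(torch_device) on its own line silently does nothing useful: the moved tensor is created and then immediately discarded.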

Oh right! Thanks. I was thinking of the 'Unified Memory' paradigm and assumed it would be taken care of automatically.

Now it all makes sense. It had nothing to do with the tensors or the data.

Thanks,
Gaurav