I am encountering an issue with my PyTorch model that has been moved to the GPU using
to('cuda:0'). When the model creates a new tensor inside its
forward method, the tensor does not get placed on the GPU and instead remains on the CPU.
To better illustrate the problem, I have provided an MWE below:
import torch
import torch.nn as nn

class Model(nn.Module):
    def forward(self, x):
        y = torch.Tensor(x.shape)  # created on the CPU by default
        y[::2] = x[::2]
        y[1::2] = x[1::2]
        return x, y

model = Model().to('cuda:0')
z = torch.rand(4).to('cuda:0')
x, y = model(z)
The output of this code indicates that x is located on the GPU (device='cuda:0'), while y remains on the CPU (device='cpu'). I was expecting both x and y to be located on the GPU (cuda:0). Can someone please help me understand why the newly created tensor y does not get placed on the GPU even though the model is on the GPU?
I’ve also tried y = torch.nn.Parameter(torch.Tensor(x.shape), requires_grad=False), but that didn’t work either!
I appreciate any insights or suggestions to resolve this issue. Thank you for your assistance!
Tensors are placed on the CPU by default, and the forward method does not change that default. You can try y = torch.empty_like(x) or y = torch.empty(x.shape, device=x.device) (or use the zeros variants instead of empty if you need initialized values).
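A minimal sketch of the suggested fix: using torch.empty_like makes the new tensor inherit the input's device and dtype, so the model works unchanged on CPU or GPU (this sketch falls back to CPU when no GPU is available):

```python
import torch
import torch.nn as nn

class Model(nn.Module):
    def forward(self, x):
        # empty_like inherits dtype and device from x, so y follows the input
        y = torch.empty_like(x)
        y[::2] = x[::2]
        y[1::2] = x[1::2]
        return x, y

device = 'cuda:0' if torch.cuda.is_available() else 'cpu'
model = Model().to(device)
z = torch.rand(4, device=device)
x, y = model(z)
print(x.device == y.device)  # True regardless of which device is used
```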
Thanks, I thought tensors created inside the model would be kept on the same device as the model itself.
That’s not necessarily true. While .to() is applied recursively to all registered submodules, parameters, and buffers, you are still free to move different parameters to different devices, e.g. to implement model sharding. The model thus does not have a .device attribute, but every parameter and buffer does.
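The point above can be checked directly: an nn.Module exposes no .device attribute, but each of its parameters carries one (shown here with a small nn.Linear on CPU for illustration):

```python
import torch
import torch.nn as nn

model = nn.Linear(4, 2)

# nn.Module deliberately has no .device attribute ...
print(hasattr(model, 'device'))   # False

# ... but every parameter (and buffer) carries its own device
param_devices = {p.device for p in model.parameters()}
print(param_devices)              # a fresh module's parameters live on the CPU
```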
I’m unsure what your image is meant to show, since this behavior is expected, as already mentioned. You are also neither using the _like methods nor passing the device argument during tensor creation, but are instead reusing the same failing code from before.
Thank you for your consideration. I posted the picture in response to this phrase: “That’s not necessarily true.” Perhaps there was a misunderstanding on my part. By the way, as I mentioned before, I expected the tensor to end up on the GPU when the model was on the GPU, but I was mistaken. I have resolved my issue with the suggested approach.