Empty tensor to device

    temp = torch.empty(x_size)
    temp.to(device)
    print('device temp', temp.device)

I ran the above code on a GPU server, where `device` is `cuda`, but it gives me a CPU tensor. Why?

You have to assign the result, because `Tensor.to()` does not move the tensor in place — it returns a new tensor:

    temp = temp.to(device)

`nn.Module` instances are moved in place when you call `.to(device)` on them, while `Tensor.to()` is out-of-place and returns a new tensor, so tensors need the assignment.
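A minimal sketch of the difference (it falls back to CPU when no GPU is available, so on a CPU-only machine both devices will print as `cpu`):

    import torch
    import torch.nn as nn

    # Use CUDA if available, otherwise fall back to CPU for the demo.
    device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

    # Tensor.to() is out-of-place: it returns a new tensor and does not
    # change the device of the original one.
    temp = torch.empty(3)
    moved = temp.to(device)
    print('original tensor device:', temp.device)
    print('moved tensor device:', moved.device)

    # nn.Module.to() moves the module's parameters in place: no
    # reassignment is needed.
    model = nn.Linear(3, 1)
    model.to(device)
    print('model parameter device:', next(model.parameters()).device)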
