X.to(device) vs Variable(x)

I have seen tutorials use x.to(device), which I understand sends x to the computation device (CPU/GPU), and I have seen others use Variable(x), which I understand is for autograd, without calling .to(device). Does x.to(device) implicitly create an autograd variable? Or can you combine the two, e.g. Variable(x).to(device)?

Variable is deprecated since PyTorch 0.4, and you should use plain tensors now.
Autograd tracks operations on tensors that have requires_grad=True, so the old tensor vs. Variable split is no longer needed. x.to(device) just moves (or casts) the tensor; it does not change whether autograd tracks it, and the move itself is recorded so gradients flow back correctly.
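Concretely, a minimal sketch of the modern pattern (the device choice here is just an example):

```python
import torch

# Modern (>= 0.4) replacement for Variable: a plain tensor with requires_grad=True.
device = "cuda" if torch.cuda.is_available() else "cpu"

x = torch.ones(3, requires_grad=True)  # tracked by autograd, no Variable needed
y = (x.to(device) * 2).sum()           # .to(device) keeps the autograd history
y.backward()

print(x.grad)  # gradient of y = sum(2*x) w.r.t. x, i.e. 2 per element
```

Note that x stays the leaf tensor even after .to(device), so x.grad is populated on the original device.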