Is torch.Tensor with requires_grad=True equal to Variable?

I’m new to PyTorch and AI in general. I have seen that the computational graph is created (so we are able to compute gradients) when we set requires_grad=True. But most of the code on GitHub and in the tutorials I’ve followed uses Variables, and from what I’ve read I can’t see a difference.
What is the difference, if there is any?
Thanks for explanation or link!

In the latest versions of PyTorch, Variable() is no longer needed. You can simply use torch.tensor([...], requires_grad=True), or call .requires_grad_(True) on an existing tensor.
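For example, a minimal sketch (the values here are arbitrary, just to show the gradient flowing):

```python
import torch

# A plain tensor that tracks gradients directly -- no Variable wrapper needed.
x = torch.tensor([2.0, 3.0], requires_grad=True)

# Build a small computational graph and backpropagate through it.
y = (x ** 2).sum()
y.backward()

print(x.grad)  # dy/dx = 2x -> tensor([4., 6.])
```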


You’re looking at very old code, as Variable was deprecated well before the 1.x releases of PyTorch (in the 0.4.0 Variable/Tensor merge). A Variable was a Tensor that required grad; now tensors can require grads directly, without the wrapper. I’d suggest trying to find more up-to-date examples while you’re learning, though. There have been more than a few changes since Variable was deprecated, and you’ll likely find other points of friction trying to update old code whilst you’re learning it.