I’m looking at some earlier github repos where people have used this in a training loop, e.g.:

Tensor = torch.cuda.FloatTensor if torch.cuda.is_available() else torch.Tensor
for epoch in range(epochs):
    for i, data in enumerate(trainloader):
        # Configure model input
        # imgs_lr = Variable(imgs["lr"].type(Tensor))

where imgs is a batch from the dataloader. I’ve been reading a bit about Variable in the docs, where it is no longer used. If I want to apply this to my code, using the newest PyTorch version, is this equivalent:

imgs_lr = imgs["lr"].type(Tensor)

where I don’t need to use Variable, and, from what the docs say, gradients will still flow through it automatically? Is this equivalent?

Yes, you can just drop the Variable now.
requires_grad=False is the default, so you don’t need to specify it when creating your Tensor.
Also, factory functions like torch.zeros() accept dtype (and device) arguments that you can use to directly create a Tensor of the right type, avoiding the .type() call.
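A minimal sketch of the modern idiom, with batch_lr standing in for imgs["lr"] (a hypothetical batch, since the real one comes from your dataloader):

```python
import torch

# Stand-in for imgs["lr"] from the dataloader.
batch_lr = torch.randn(4, 3, 32, 32)

# Pick a device once, then move tensors to it instead of using
# the old Variable(...).type(Tensor) pattern.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
imgs_lr = batch_lr.to(device)

# Factory functions take dtype/device directly, and gradients flow
# without Variable when requires_grad=True is set:
x = torch.zeros(3, dtype=torch.float32, device=device, requires_grad=True)
y = (x * 2).sum()
y.backward()
print(x.grad)  # each element gets gradient 2
```

Note that .to(device) is a no-op if the tensor is already on that device, so it’s safe to call unconditionally.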