autograd.Variable in training loop

Hi,

I’m looking at some earlier GitHub repos where people have used autograd.Variable in a training loop, e.g.:

import torch
from torch.autograd import Variable

Tensor = torch.cuda.FloatTensor if torch.cuda.is_available() else torch.Tensor

for epoch in range(epochs):
    for i, imgs in enumerate(trainloader):
        # Configure model input
        imgs_lr = Variable(imgs["lr"].type(Tensor))

where imgs is a batch from the dataloader. I’ve been reading a bit about Variable in the docs, where it is no longer used. If I want to apply this to my code, using the newest PyTorch version, is this equivalent:

imgs_lr = imgs["lr"].type(Tensor)

where I don’t need to use Variable, and from the docs autograd will track the gradients automatically?
And are these two lines equivalent?

fake = Variable(Tensor(np.zeros((LR.size()[0], *discriminator.output_shape))), requires_grad=False)
fake = torch.zeros((LR.size()[0], *discriminator.output_shape), requires_grad=False).type(Tensor)

Hi,

Yes, you can just drop the Variable now.
requires_grad=False is the default, so you don’t need to specify it when creating your Tensor.
Also, factory functions like torch.zeros() accept a dtype argument that you can use to create a Tensor of the right type directly and so avoid the .type() call.
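
For example, something like this (just a sketch, reusing the LR and discriminator names from your snippet above and assuming float32 is the dtype you want) builds the fake label tensor in one call:

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# requires_grad defaults to False, so it doesn't need to be passed explicitly
fake = torch.zeros((LR.size(0), *discriminator.output_shape),
                   dtype=torch.float32, device=device)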

If requires_grad=False is the default, shouldn’t my imgs_lr = imgs["lr"].type(Tensor) have it set to True for training, or am I misunderstanding?

Is it also acceptable to use .type(Tensor), or do I need to use the dtype argument instead?

You don’t usually require gradients for the input when training neural networks. The parameters inside your model have requires_grad=True though.
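
A quick way to see this, with a toy nn.Linear model standing in for your networks:

import torch
import torch.nn as nn

model = nn.Linear(10, 2)   # stand-in for your model
x = torch.randn(4, 10)     # a plain input batch

print(x.requires_grad)                                    # False
print(all(p.requires_grad for p in model.parameters()))   # True

model(x).sum().backward()
print(model.weight.grad is not None)   # True: gradients reach the parameters
print(x.grad)                          # None: no gradient is stored for the input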

You can use any of the type-conversion functions. There are a few of them for historical reasons, but there is no benefit to using one over the other.
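
For example, these all give you a float32 tensor (a small illustration, using an integer tensor just for the sake of the example):

import torch

t = torch.arange(4)              # int64 tensor

a = t.type(torch.FloatTensor)    # legacy-style conversion
b = t.float()                    # convenience method
c = t.to(torch.float32)          # the general .to() API, which also handles device moves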