I’m looking at some older GitHub repos where people have used this pattern in a training loop, e.g.:
```python
Tensor = torch.cuda.FloatTensor if torch.cuda.is_available() else torch.Tensor

for epoch in range(0, epochs):
    for i, data in enumerate(trainloader):
        # Configure model input
        # imgs_lr = Variable(imgs["lr"].type(Tensor))
```
where `imgs` is a batch coming from the dataloader. I’ve been reading a bit about `Variable` in the docs, where it says it is deprecated. If I want to port this to my code, which uses the newest PyTorch version, is this equivalent:
```python
imgs_lr = imgs["lr"].type(Tensor)
```
i.e. without wrapping in `Variable`, and, going by the docs, gradients will still be tracked automatically?
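For what it’s worth, here is a minimal sketch of my understanding (shapes and the device idiom are my own assumptions, not from the original repo): since PyTorch 0.4, `Variable` has been merged into `Tensor`, so autograd tracks plain tensors directly, and `.to(device)` is the usual replacement for the `.type(Tensor)` trick.

```python
import torch

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Stand-in for imgs["lr"] (assumed shape: batch of 4 RGB 32x32 images)
imgs_lr = torch.randn(4, 3, 32, 32)
imgs_lr = imgs_lr.to(device)  # preferred over .type(Tensor)

# Gradients flow through plain tensors when requires_grad is set:
w = torch.ones(1, requires_grad=True, device=device)
loss = (imgs_lr * w).sum()
loss.backward()
print(w.grad is not None)  # True
```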
And is this equivalent?
```python
fake = Variable(Tensor(np.zeros((LR.size(0), *discriminator.output_shape))), requires_grad=False)
```

versus

```python
fake = torch.zeros((LR.size(0), *discriminator.output_shape), requires_grad=False).type(Tensor)
```

(Note I’ve written `LR.size(0)` here: `LR.size()` with no argument returns the whole `torch.Size`, so only the batch dimension `size(0)` makes sense as the leading dimension of the zeros tensor.)