Hi,
Version 0.4 deprecated the volatile keyword. I don’t fully understand the official documentation. Can you explain the relationship among volatile, requires_grad, and with torch.no_grad()? Also, how should I modify the following two pieces of code that use volatile=True or False? Thanks!
def make_variable(tensor, volatile=False):
    """Convert Tensor to Variable."""
    if torch.cuda.is_available():
        tensor = tensor.cuda()
    return Variable(tensor, volatile=volatile)
# evaluate network
for (images, labels) in data_loader:
    images = make_variable(images, volatile=True)
    labels = make_variable(labels).squeeze_()
    preds = classifier(encoder(images))
    loss += criterion(preds, labels).item()
    pred_cls = preds.data.max(1)[1]
    acc += pred_cls.eq(labels.data).cpu().sum()
loss /= len(data_loader)
acc /= len(data_loader.dataset)
print("Avg Loss = {}, Avg Accuracy = {:.2%}".format(loss, acc))
You can remove the usage of Variable completely: since 0.4, Tensor and Variable were merged, so plain tensors work everywhere a Variable used to.
The equivalent of volatile=True is the with torch.no_grad() block: no computation graph is built for operations inside it, so use it during evaluation to save memory when you don’t need gradients. requires_grad, by contrast, is a per-tensor flag that marks which leaf tensors should have gradients tracked; torch.no_grad() temporarily disables that tracking for everything computed inside the block, regardless of the flag.
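A sketch of your evaluation loop rewritten along these lines, with Variable removed and torch.no_grad() in place of volatile=True. The encoder, classifier, criterion, and data_loader below are small stand-in definitions (assumptions, not your actual models) so the snippet runs on its own:

```python
import torch
import torch.nn as nn

# Stand-ins for the real encoder / classifier / criterion / data_loader
# from the original snippet, so this example is self-contained.
encoder = nn.Linear(4, 8)
classifier = nn.Linear(8, 3)
criterion = nn.CrossEntropyLoss()
data_loader = [(torch.randn(5, 4), torch.randint(0, 3, (5,)))
               for _ in range(2)]  # 2 batches of 5 samples each

# .to(device) replaces the old "tensor.cuda() if available" pattern.
device = "cuda" if torch.cuda.is_available() else "cpu"
encoder, classifier = encoder.to(device), classifier.to(device)

loss, acc = 0.0, 0
with torch.no_grad():  # replaces volatile=True: no graph is built here
    for images, labels in data_loader:
        images, labels = images.to(device), labels.to(device)
        preds = classifier(encoder(images))
        loss += criterion(preds, labels).item()
        pred_cls = preds.max(1)[1]  # .data is no longer needed
        acc += pred_cls.eq(labels).sum().item()

loss /= len(data_loader)
acc /= sum(len(lbls) for _, lbls in data_loader)  # total sample count
print("Avg Loss = {:.4f}, Avg Accuracy = {:.2%}".format(loss, acc))
```

Because everything runs under torch.no_grad(), preds.requires_grad is False even though the model parameters still have requires_grad=True, which is exactly the memory saving volatile=True used to give you.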