About “Deprecation of the volatile flag”

The 0.4 release deprecated the volatile keyword. I don’t fully understand the official documentation. Can you explain the relationship among volatile, requires_grad, and with torch.no_grad()? Also, how should I modify the following two pieces of code when volatile=True or volatile=False? Thanks!

def make_variable(tensor, volatile=False):
    """Convert Tensor to Variable."""
    if torch.cuda.is_available():
        tensor = tensor.cuda()
    return Variable(tensor, volatile=volatile)
# evaluate network
for (images, labels) in data_loader:
    images = make_variable(images, volatile=True)
    labels = make_variable(labels).squeeze_()

    preds = classifier(encoder(images))
    loss += criterion(preds, labels).item()

    pred_cls = preds.data.max(1)[1]
    acc += pred_cls.eq(labels.data).cpu().sum()

loss /= len(data_loader)
acc /= len(data_loader.dataset)

print("Avg Loss = {}, Avg Accuracy = {:.2%}".format(loss, acc))

Instead of defining the volatile flag in the Variable, you now wrap code blocks in a with torch.no_grad() block.

Your code snippet would thus change to:

with torch.no_grad():
    # evaluate network
    for (images, labels) in data_loader:
        labels = labels.squeeze_()

        preds = classifier(encoder(images))
        loss += criterion(preds, labels).item()

        pred_cls = preds.max(1)[1]
        acc += pred_cls.eq(labels).cpu().sum()

Also, don’t use the .data attribute anymore, as it might have unwanted side effects.
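The difference matters because .data silently bypasses autograd, while .detach() returns a tensor that autograd still knows about, so in-place modifications that would corrupt gradients raise an error instead of producing silently wrong results. A minimal sketch of the safe replacement:

```python
import torch

x = torch.ones(3, requires_grad=True)
y = x * 2

# .detach() is the recommended replacement for .data: it returns a tensor
# that shares storage with y but is excluded from the graph.
z = y.detach()
print(z.requires_grad)  # False
```

So wherever the old code used preds.data, you can simply drop it inside a no_grad block, or use .detach() elsewhere.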

Thanks for your reply. Since volatile=False is the default value, can it simply be removed? Or can the entire first piece of code be removed?

You can remove the usage of Variables completely.
The equivalent to volatile=True is the with torch.no_grad() block, so you should use it if you want to save memory and don’t want to calculate gradients for these operations.
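To make the equivalence concrete, here is a small sketch showing that operations inside the no_grad block produce tensors with no graph attached, while the same operation outside the block is tracked as usual:

```python
import torch

x = torch.ones(2, requires_grad=True)

with torch.no_grad():
    y = x * 2  # no graph is built inside the block
print(y.requires_grad)  # False

z = x * 2  # outside the block, autograd tracks the op as usual
print(z.requires_grad)  # True
```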

So how should the code be modified when it uses volatile=False or requires_grad?

I’ve posted a code snippet in my previous post.
Isn’t this working for you?

It works, thank you very much. What I asked above was a follow-up question.

In that case I misunderstood the question.
For volatile=False you can just use plain tensors without using the no_grad guard.
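In other words, where the old code wrapped a tensor as Variable(tensor, volatile=False), a plain tensor now does the same job, since tensors don’t track gradients by default. A short sketch (the tensor names here are just for illustration):

```python
import torch

# Formerly Variable(tensor, volatile=False) -- now just a plain tensor.
t = torch.randn(4)
print(t.requires_grad)  # False: plain tensors don't track gradients by default

# If you do need gradients (the old requires_grad=True case), set the flag:
w = torch.randn(4, requires_grad=True)
loss = (w * t).sum()
loss.backward()
print(w.grad.shape)  # same shape as w
```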