Even though I use torch.no_grad(): one of the variables needed for gradient computation has been modified

I return all activations like this:

def forward(self, input):
    x = input
    allActivations = {}
    for i in range(self.n_layers):
        x = getattr(self, 'layer_' + '{0:02d}'.format(i))(x)
        allActivations['layer_' + '{0:02d}'.format(i)] = x
    return x, allActivations

Now I want to set them to a different value like this:

with torch.no_grad():
    for key, value in allActivations.items():
        value.div_(someValue)

Do I need to call torch.no_grad or not? Thanks for your help!

The torch.no_grad() guard makes sure that Autograd doesn’t track any new operations applied inside the block. However, in-place operations on tensors that are needed for the gradient computation are still disallowed and will raise this error.
In your particular case the stored value tensors are needed during the backward pass to compute the gradients, but you are modifying them in place, which would yield wrong results, so Autograd raises the error instead.
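As a minimal sketch (using a small stack of nn.Linear layers standing in for your model, since the rest of your module isn’t shown), you can reproduce the failure and avoid it by replacing the in-place division with an out-of-place one that builds a new dict:

    import torch
    import torch.nn as nn

    class Net(nn.Module):
        def __init__(self, n_layers=3, dim=4):
            super().__init__()
            self.n_layers = n_layers
            for i in range(n_layers):
                setattr(self, 'layer_' + '{0:02d}'.format(i), nn.Linear(dim, dim))

        def forward(self, input):
            x = input
            allActivations = {}
            for i in range(self.n_layers):
                x = getattr(self, 'layer_' + '{0:02d}'.format(i))(x)
                allActivations['layer_' + '{0:02d}'.format(i)] = x
            return x, allActivations

    model = Net()
    out, allActivations = model(torch.randn(2, 4))

    # In-place division mutates tensors that Autograd saved for backward
    # (each stored activation is the input of the next linear layer), so
    # backward fails even though the division is wrapped in no_grad():
    #
    # with torch.no_grad():
    #     for value in allActivations.values():
    #         value.div_(2.0)
    # out.mean().backward()  # RuntimeError: one of the variables needed ...

    # Out-of-place alternative: create new tensors instead of mutating the
    # saved ones. no_grad() keeps these new ops out of the graph.
    with torch.no_grad():
        scaledActivations = {key: value / 2.0
                             for key, value in allActivations.items()}

    out.mean().backward()  # works, the saved activations are untouched

If you need the scaled values downstream, keep working with the new dict. If you really need to modify the originals in place, do it only after backward() has been called, or work on detach().clone() copies, since the stored activations are part of the computation graph.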