I am trying to do this:
with torch.no_grad():
    for param in net.parameters():
        for j in param.flatten():
            print("current j", j)
            j += 1
However, I get this error:
RuntimeError: A view was created in no_grad mode and its base or another view of its base has been modified inplace with grad mode enabled. Given that this use case is ambiguous and error-prone, it is forbidden. You can clarify your code by moving both the view and the inplace either both inside the no_grad block (if you don’t want the inplace to be tracked) or both outside (if you want the inplace to be tracked).
Can someone explain what has grad mode enabled? The whole block is under no_grad(). Also, the first iteration runs fine; I get this output:
current j tensor(0.0919, requires_grad=True)
current j
However, in the second iteration of the loop I get the error message above. Why does this happen only in the second iteration?