How to keep a tensor's _version unchanged?

Hi all,
I am updating my model in two steps: 1) variable optimization; 2) projection onto the optimization constraints. I found that the _version of the tensor changes after the projection, so the step-1 optimization operates on the wrong tensor version. The code is shown below. Is it possible to keep the tensor's _version unchanged? Thanks~

optimizer = torch.optim.SGD([x], lr=0.3)
for i in range(iteration):
    optimizer.zero_grad()
    probs = myfunc(x)
    probs.backward()
    optimizer.step()  # optimization on x._version = 0

    x = projection(x)  # _version number will be increased here

A not-so-elegant workaround is to move the definition of the optimizer inside the for loop. Will this create thousands of versions of the tensor x in memory?

for i in range(iteration):
    optimizer = torch.optim.SGD([x], lr=0.3)

    optimizer.zero_grad()
    probs = myfunc(x)
    probs.backward()
    optimizer.step()

    x = projection(x)  # _version number will be increased here

Not all at the same time. But with this, optimizer.step() is literally x -= lr * x.grad, and the trick will break as soon as you have momentum etc.
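To see why, here is a minimal sketch (the tensor and hyperparameters are made up): SGD with momentum keeps per-parameter state in optimizer.state, keyed on the parameter object, and recreating the optimizer throws that state away:

    import torch

    x = torch.zeros(3, requires_grad=True)

    # SGD with momentum keeps a per-parameter momentum buffer in optimizer.state
    optimizer = torch.optim.SGD([x], lr=0.1, momentum=0.9)
    x.grad = torch.ones(3)
    optimizer.step()
    print(optimizer.state[x])          # {'momentum_buffer': tensor([1., 1., 1.])}

    # recreating the optimizer discards that state, so momentum restarts from zero
    optimizer = torch.optim.SGD([x], lr=0.1, momentum=0.9)
    print(optimizer.state.get(x, {}))  # {} -- the momentum buffer is gone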

I would have to ask: why do you want _version to be unchanged? To my mind, it should be updated – you want to modify x, after all. _version exists to let autograd detect when a tensor it saved during the forward pass has been modified in-place before the backward uses it (i.e. the saved tensor is now invalid for autograd purposes).
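Here is a minimal sketch of that mechanism (made-up tensors, just to illustrate):

    import torch

    x = torch.ones(3, requires_grad=True)
    y = x * 2            # non-leaf tensor
    z = y * y            # the backward of mul saves y (at _version 0)
    print(y._version)    # 0
    y.add_(1)            # the in-place op bumps the version counter
    print(y._version)    # 1
    z.sum().backward()   # RuntimeError: one of the variables needed for gradient
                         # computation has been modified by an inplace operation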

The generic idiomatic thing to do IMO is

    # your stuff here
    optimizer.step()
    with torch.no_grad():
        x.copy_(projection(x))

Of course, if you change projection to update x in-place, you can skip the copy_.
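Putting it together, a minimal sketch with made-up placeholders for myfunc and the projection:

    import torch

    x = torch.randn(5, requires_grad=True)
    optimizer = torch.optim.SGD([x], lr=0.3, momentum=0.9)

    def projection(t):
        # hypothetical stand-in for your projection: clamp onto [0, 1]
        return t.clamp(0, 1)

    for i in range(100):
        optimizer.zero_grad()
        loss = (x ** 2).sum()  # stand-in for myfunc(x)
        loss.backward()
        optimizer.step()
        with torch.no_grad():
            # copy_ writes the projected values into the same tensor object,
            # so the optimizer (and its momentum state) keeps tracking x
            x.copy_(projection(x))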

Best regards

Thomas