Update weights with the same network's output

I want to update the classifier’s weights twice with the two outputs of the classifier.
To do this, I wrote the code below.
But the code gives me an error telling me to 'enable anomaly detection to find the operation that failed to compute its gradient'.

I saw an answer saying that this code works with a previous version of PyTorch, but that seems weird.
Can you tell me where I should fix it?
I don’t want to do the backward pass with (foreProb + backProb).backward()

foreData = data * mask
backData = data * (1-mask)

foreOutput = myClassifier(foreData)
backOutput = myClassifier(backData)                

criterion = nn.CrossEntropyLoss()
foreProb = criterion(foreOutput, target)
backProb = criterion(backOutput, target)

self.optimizer['classifier'].zero_grad()
foreProb.backward()
self.optimizer['classifier'].step()
                
self.optimizer['classifier'].zero_grad()
backProb.backward()
self.optimizer['classifier'].step()

It seems your code calculates the gradients in the second backward pass from “stale” intermediate forward activations: the first optimizer step already updated the parameters, so that graph no longer matches them, which is wrong. This post explains it in more detail.
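Here is a minimal, standalone sketch of the same failure (the model, shapes, and data are made up for illustration, not taken from your classifier):

import torch
import torch.nn as nn

# Both losses are built from forward passes through the SAME parameters,
# and the optimizer step between the two backward calls modifies those
# parameters in place.
model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 2))
opt = torch.optim.SGD(model.parameters(), lr=0.1)
criterion = nn.CrossEntropyLoss()

x = torch.randn(16, 4)
target = torch.randint(0, 2, (16,))

loss1 = criterion(model(x), target)        # graph 1, built with the current weights
loss2 = criterion(model(x * 0.5), target)  # graph 2, also built with the current weights

opt.zero_grad()
loss1.backward()
opt.step()        # updates the weights in place

opt.zero_grad()
loss2.backward()  # RuntimeError: one of the variables needed for gradient
                  # computation has been modified by an inplace operation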

Yes, I saw that post.
But it still gives me the error, even after I fixed the code to

self.optimizer['classifier'].zero_grad()
foreProb.backward(retain_graph=True)
self.optimizer['classifier'].step()
                
self.optimizer['classifier'].zero_grad()
backProb.backward()
self.optimizer['classifier'].step()

The error is
one of the variables needed for gradient computation has been modified by an inplace operation
This error doesn’t appear with the old PyTorch version…

That wouldn’t be a fix, as it still relies on the wrong behavior. Previous PyTorch versions allowed these wrong gradient calculations, which is why no errors were raised.
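Instead of retain_graph, the usual fix is to rebuild the second graph after the first update. Here is a sketch using the names from your snippet (assuming data, mask, target, myClassifier, and self.optimizer['classifier'] are set up as you show, and the same imports; this is not your exact code):

criterion = nn.CrossEntropyLoss()

foreOutput = myClassifier(data * mask)
foreProb = criterion(foreOutput, target)

self.optimizer['classifier'].zero_grad()
foreProb.backward()
self.optimizer['classifier'].step()

# re-run the forward pass after the first step, so the second graph
# is built from the updated parameters instead of the stale ones
backOutput = myClassifier(data * (1 - mask))
backProb = criterion(backOutput, target)

self.optimizer['classifier'].zero_grad()
backProb.backward()
self.optimizer['classifier'].step()

This way each backward pass uses activations that match the parameters it is differentiating, and you still avoid calling (foreProb + backProb).backward().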