When I try to run my code on a new machine (it works fine on another machine), I get an error about autograd and an in-place operation. torch.autograd.set_detect_anomaly reports that the error occurs at F[b][ind[b][i]] += similarity[b][i][j] * Fij.permute(1, 0), where F and Fij are tensors, and ind[b][i] and similarity[b][i][j] are NumPy ndarrays. Gradients need to flow through F, because my code backpropagates through certain regions Fij of the tensor F; the regions are selected by the index ndarray ind[b][i].

The exception message:

RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.cuda.FloatTensor [1024, 512]], which is output 0 of SelectBackward, is at version 31; expected version 30 instead.

my code:

```
class net(...):
    def forward(self, similarity, ...other params):
        F = torch.zeros(another_feature.shape).to(DEVICE)
        for b in range(...):
            for i in range(...):
                for j in range(...):
                    Fij = ...
                    F[b][ind[b][i]] += similarity[b][i][j] * Fij.permute(1, 0)
        return F
```

The error is triggered by the line F[b][ind[b][i]] += similarity[b][i][j] * Fij.permute(1, 0).
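For what it's worth, the same class of error can be reproduced with a tiny standalone graph (this is just an illustration of the error, not my actual model): sigmoid saves its output for the backward pass, so mutating that output in place bumps its version counter and backward fails.

```python
import torch

x = torch.ones(3, requires_grad=True)
y = x.sigmoid()   # sigmoid saves its OUTPUT tensor for the backward pass
y += 1            # in-place op bumps y's version counter

try:
    y.sum().backward()
except RuntimeError as e:
    # "... has been modified by an inplace operation ..." -- same error class
    print(type(e).__name__, e)
```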

Does anyone know how to solve this problem?

Maybe this could be computed with masked_select in PyTorch?
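One idea I am considering is avoiding the in-place write entirely by accumulating out of place with Tensor.index_add (the variant without the trailing underscore, which returns a new tensor), so autograd never sees F mutated. This is only a sketch with made-up shapes and stand-in tensors (B, I, J, R, C, K, and Fijs are all hypothetical placeholders for my real data):

```python
import torch

B, I, J = 2, 3, 2      # hypothetical loop bounds
R, C, K = 6, 4, 3      # F[b] is (R, C); each region covers K rows

torch.manual_seed(0)
similarity = torch.rand(B, I, J)
# K distinct row indices per (b, i), standing in for the numpy ind array
ind = torch.stack([torch.stack([torch.randperm(R)[:K] for _ in range(I)])
                   for _ in range(B)])
Fijs = torch.rand(B, I, J, C, K, requires_grad=True)  # stand-in for each Fij

rows = []
for b in range(B):
    Fb = torch.zeros(R, C)
    for i in range(I):
        for j in range(J):
            update = similarity[b][i][j] * Fijs[b][i][j].permute(1, 0)  # (K, C)
            # index_add (no trailing underscore) returns a NEW tensor,
            # so Fb is never modified in place
            Fb = Fb.index_add(0, ind[b][i], update)
    rows.append(Fb)
F = torch.stack(rows)    # (B, R, C)

F.sum().backward()       # gradients reach Fijs through index_add / stack
```

This keeps the same triple loop as my code but rebinds Fb each iteration instead of writing into it, which should sidestep the version-counter check.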