Hi,

in my customised loss function, I need to change the values of the network output at positions where a condition is met, and I must use operations that guarantee correct backpropagation.

The code:

`output[target != 1] = 1 - output[target != 1]`

throws an error: “RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation”

I am wondering if I can bypass this error by using built-in torch functions.

I tried to use:

```
output = torch.where(target != 1, 1 - output[target != 1] , output)
```

but I now get the error:

“RuntimeError: The size of tensor a (800) must match the size of tensor b (3686385) at non-singleton dimension 3”

I think this is because in `torch.where(condition, a, b)`, `a` and `b` must be scalars and cannot be tensors. Is this correct?

Any suggestions for solving my problem?
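For reference, here is what I expected to work, as a minimal sketch with made-up toy tensors (the real `output` and `target` come from my network): passing the full `1 - output` tensor to `torch.where` instead of the indexed slice, so both branches have the same shape as the condition. In this small example it runs and backpropagates, so I suspect my real error comes from a shape mismatch elsewhere:

```python
import torch

# Toy stand-ins for the real network output and target (same shape).
target = torch.tensor([1.0, 0.0, 1.0, 2.0])
output = torch.tensor([0.2, 0.7, 0.9, 0.4], requires_grad=True)

# Out-of-place version: where target != 1, take 1 - output; else keep output.
# Both branches are full tensors, so no indexing changes the shape.
flipped = torch.where(target != 1, 1 - output, output)

# Gradients flow through both branches without the in-place error.
flipped.sum().backward()
print(flipped.detach())  # tensor([0.2000, 0.3000, 0.9000, 0.6000])
print(output.grad)       # tensor([ 1., -1.,  1., -1.])
```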