Torch.where to change output in a customised loss function

Hi,
in my customised loss function, I need to change the values of the net output at positions where a condition is met, and I must use only operations that guarantee correct backpropagation.

The code:
output[target != 1] = 1 - output[target != 1]
throws an error: “RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation”
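
For context, here is a minimal reproduction (the sigmoid here just stands in for my real network; any op that saves its result for its backward pass behaves the same way):

import torch

x = torch.randn(4, requires_grad=True)
target = torch.tensor([1., 0., 1., 0.])

output = torch.sigmoid(x)                      # sigmoid saves its output for backward
output[target != 1] = 1 - output[target != 1] # in-place modification of that output

output.sum().backward()                        # raises the RuntimeError above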

I am wondering if I can bypass this error by using built-in torch functions.

I tried to use:

output = torch.where(target != 1, 1 - output[target != 1], output)

but I now get the error:

“RuntimeError: The size of tensor a (800) must match the size of tensor b (3686385) at non-singleton dimension 3”

I think this is because in torch.where(condition, a, b) the arguments
a and b must be scalars and cannot be matrices. Is this correct?

Any suggestions for solving my problem?

You can very probably bypass the error by cloning output before changing it in place:

output = output.clone()
output[target != 1] = 1 - output[target != 1] 
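
As a minimal sketch (the sigmoid and the tensors are made up; the point is that the in-place change now hits the clone, not the tensor that sigmoid saved for its backward pass):

import torch

x = torch.randn(4, requires_grad=True)
target = torch.tensor([1., 0., 1., 0.])

output = torch.sigmoid(x)
output = output.clone()                       # in-place ops now touch the copy
output[target != 1] = 1 - output[target != 1]

output.sum().backward()                       # runs without the RuntimeError
print(x.grad)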

The where function does work with matrices, but the arguments have to have the same size (or be scalars). Here 1 - output[target != 1] and output have different sizes: indexing with the boolean mask returns a flattened 1-D tensor, not a tensor shaped like output. You should do:

output = torch.where(target != 1, 1 - output, output)
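
For instance (same made-up tensors as before, assuming target has the same shape as output):

import torch

x = torch.randn(4, requires_grad=True)
target = torch.tensor([1., 0., 1., 0.])

output = torch.sigmoid(x)
# condition, a and b all have shape (4,); where() builds a new tensor,
# so nothing needed for backward is overwritten
output = torch.where(target != 1, 1 - output, output)

output.sum().backward()                       # backpropagates fine
print(x.grad)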

But if I operate on a clone, does this operation backpropagate?
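
One quick sanity check I tried (a sketch, not my real code): the clone shows up in the autograd graph with its own grad_fn.

import torch

out = torch.randn(3, requires_grad=True).clone()
print(out.grad_fn)  # e.g. <CloneBackward0 ...>: the clone is recorded in the
                    # graph rather than detached, so gradients can flow through it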