The part of the loss computed by numpy would be treated as a constant, similar to:

loss = 0.5 * valid_pytorch_loss + 1.

where the 1. could be any value provided by the numpy loss function and would thus not contribute to the gradient computation.
No error is raised, since the PyTorch loss is still valid and you are of course free to add constant values to it.
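A minimal sketch illustrating this behavior (the variable names and the quadratic loss are made up for the example): the numpy term is computed outside the autograd graph, so the gradient comes only from the PyTorch part.

```python
import torch

x = torch.tensor([2.0], requires_grad=True)

# Differentiable part: tracked by autograd.
pytorch_loss = 0.5 * (x ** 2).sum()

# numpy part: computed outside the graph, so it is just a Python float,
# i.e. a constant offset with respect to autograd.
numpy_loss = float((x.detach().numpy() ** 2).sum())

loss = pytorch_loss + numpy_loss
loss.backward()

# Gradient comes only from the PyTorch term: d/dx [0.5 * x^2] = x = 2.0;
# the numpy term contributes nothing, regardless of its value.
print(x.grad)  # tensor([2.])
```

Changing how `numpy_loss` is computed would shift the value of `loss` but leave `x.grad` unchanged, which is exactly why such a mistake fails silently.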