Theory on backpropagation: can the loss function be defined freely?

Hi all, due to my lack of basic theory, one thing I am not sure about is: does whether the network can backpropagate depend only on the design of the model itself, and have nothing to do with the choice of loss function?

In other words, as long as the network is properly designed and the loss function produces a value, the network will always optimize towards that value. So the only question would be whether the loss function is suitable for the task, with no worry that the network will fail to train because of a wrong choice of loss function?

No, the loss function has to be written using differentiable operations, or at least be able to yield a valid gradient (the same applies to the model, of course). If the loss is computed through non-differentiable operations, autograd cannot produce gradients for the model parameters and no training will happen.
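
A minimal sketch of the distinction, assuming PyTorch (the names `pred` and `target` are just placeholder tensors, not anything from your code): a custom loss built entirely from differentiable tensor ops backpropagates fine, while one that routes the result through a non-differentiable op such as `argmax` gives autograd nothing to work with.

```python
import torch

# Stand-in for model outputs (would normally come from a model's parameters).
pred = torch.randn(4, 3, requires_grad=True)
target = torch.randn(4, 3)

# Differentiable custom loss: built only from torch ops, so autograd can
# trace it and compute gradients w.r.t. `pred`.
loss = ((pred - target) ** 2).mean()
loss.backward()
print(pred.grad is not None)  # True -- backprop works

# Non-differentiable "loss": argmax returns integer indices and has no
# useful gradient, so the computation graph is cut.
pred2 = torch.randn(4, 3, requires_grad=True)
bad_loss = (pred2.argmax(dim=1) != target.argmax(dim=1)).float().mean()
print(bad_loss.requires_grad)  # False -- nothing to backpropagate through
# bad_loss.backward()  # would raise: element 0 of tensors does not require grad
```

The second case is why, e.g., accuracy or other hard-counting metrics are typically monitored during training but not used directly as the loss; a differentiable surrogate such as cross-entropy is used instead.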