Asymmetric L1 Loss

Is it problematic to backpropagate through your own loss function instead of a predefined one from the nn library?

I am building a deep learning model for image translation, and I want to penalize overestimation more heavily (I don’t want overestimation). Currently, I just take the difference between the prediction and the target, apply ReLU to it, and add that to the L1Loss. However, when I do this, my validation error becomes pretty bad. Any particular reasons for this? Thanks!
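For reference, a minimal sketch of the setup described above, assuming PyTorch: plain L1 loss plus a ReLU-gated term that is nonzero only where the prediction overshoots the target. The class name `AsymmetricL1Loss` and the `over_weight` parameter are hypothetical names for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AsymmetricL1Loss(nn.Module):
    """L1 loss with an extra penalty on overestimation (pred > target).

    Hypothetical sketch: standard L1 plus a ReLU-gated term that only
    contributes where the prediction exceeds the target. `over_weight`
    scales how much extra overestimation is penalized.
    """

    def __init__(self, over_weight: float = 1.0):
        super().__init__()
        self.over_weight = over_weight

    def forward(self, pred: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
        diff = pred - target
        l1 = diff.abs().mean()        # standard L1 term
        over = F.relu(diff).mean()    # only overestimation (diff > 0) contributes
        return l1 + self.over_weight * over


# Autograd differentiates through this like any built-in loss.
loss_fn = AsymmetricL1Loss(over_weight=2.0)
pred = torch.tensor([1.0, 3.0], requires_grad=True)
target = torch.tensor([2.0, 2.0])
loss = loss_fn(pred, target)
loss.backward()
```

Note this objective is just a reweighted L1: positive errors cost `(1 + over_weight)` per unit while negative errors cost 1, similar in shape to a pinball/quantile loss, and it is differentiable almost everywhere, so autograd handles it fine.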