PyTorch is quite modular. Any scalar value can be a loss, so any function of tensors, class labels, or parameters that returns a single scalar is a valid loss function.
Beyond coding style, the only things to consider are the convexity of the function, its limiting behavior (what does it look like for extreme values?), and its numerical stability (would you run into precision issues with float32 numbers?).
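As a sketch of the numerical-stability point, here is a hypothetical custom log-cosh loss (not from the original text). The naive form `log(cosh(x))` overflows in float32 for large residuals, because `cosh(x)` grows like `exp(|x|)/2`; rewriting it with the identity `log(cosh(x)) = |x| + log1p(exp(-2|x|)) - log(2)` keeps every intermediate value finite:

```python
import math
import torch

def log_cosh_loss(pred: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
    """Numerically stable log-cosh loss (hypothetical example).

    Naive torch.log(torch.cosh(x)) overflows for |x| > ~88 in float32,
    since cosh(x) ~ exp(|x|) / 2 exceeds the float32 maximum (~3.4e38).
    The algebraically equivalent form below stays finite for any x.
    """
    x = (pred - target).abs()
    # log(cosh(x)) = |x| + log1p(exp(-2|x|)) - log(2)
    return (x + torch.log1p(torch.exp(-2 * x)) - math.log(2.0)).mean()
```

Because the function only uses differentiable tensor ops and returns a scalar, autograd handles it like any built-in loss: call `.backward()` on its result as usual.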