Modulo Linear Activation

I am looking to compare methods for bounding the output range of a neural network used for regression.

A naive thought is to use the modulo function (extended to the reals). It appears that torch.fmod is not differentiable, and I assume torch.remainder is not either.

My question is whether my assumption is correct and, if so, whether it would be possible to implement the gradient for these functions myself.
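For what it's worth, one way to supply a gradient yourself would be a custom torch.autograd.Function that wraps torch.remainder and returns the almost-everywhere derivative, which is 1 with respect to the input since floor(x/m) is piecewise constant. A minimal sketch, assuming a fixed positive modulus m (the names ModuloSTE and modulo are just placeholders, not anything in torch):

```python
import torch

class ModuloSTE(torch.autograd.Function):
    """x mod m with the almost-everywhere gradient d(x mod m)/dx = 1.

    floor(x / m) is treated as piecewise constant, so the jumps at
    integer quotients are simply ignored (straight-through style).
    """

    @staticmethod
    def forward(ctx, x, m):
        # m is assumed to be a fixed positive Python float, not learnable
        return torch.remainder(x, m)

    @staticmethod
    def backward(ctx, grad_output):
        # Gradient w.r.t. x is 1 away from the (measure-zero) jump points;
        # None for the constant modulus m.
        return grad_output, None


def modulo(x, m=1.0):
    return ModuloSTE.apply(x, m)


if __name__ == "__main__":
    x = torch.randn(4, requires_grad=True)
    y = modulo(3.0 * x, 1.0).sum()
    y.backward()
    print(x.grad)  # expect 3.0 for every element (chain rule through 3*x)
```

With a straight-through gradient like this the forward output stays in [0, m), but the wrap-around discontinuities are still present in the loss surface itself.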

Referencing this post, it appears that the gradient is only defined at non-integer quotients. Would it be possible to implement some sort of approximation of the gradient close to integer-valued quotients? Would such a gradient still be useful?
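One way to read "approximate the gradient near integer quotients" is to keep the derivative of 1 away from the wrap point and damp it towards 0 as the quotient approaches an integer, so the surrogate gradient is defined everywhere and varies continuously. A rough sketch of that idea (SmoothedModulo and width are my own names and assumptions, not part of torch):

```python
import torch

class SmoothedModulo(torch.autograd.Function):
    """x mod m whose backward pass is damped near integer quotients.

    The true derivative is 1 away from the jumps and undefined at them;
    here the surrogate gradient is scaled by a factor that falls to 0
    as x/m approaches an integer, over a band controlled by `width`.
    """

    @staticmethod
    def forward(ctx, x, m, width):
        r = torch.remainder(x, m)
        ctx.save_for_backward(r)
        ctx.m = m
        ctx.width = width
        return r

    @staticmethod
    def backward(ctx, grad_output):
        (r,) = ctx.saved_tensors
        m, width = ctx.m, ctx.width
        # Distance to the nearest wrap point (0 or m), normalised to [0, 0.5]
        dist = torch.minimum(r, m - r) / m
        # Damping factor: ~1 away from the jump, 0 right at it
        damp = torch.clamp(dist / width, max=1.0)
        # No gradients for the constant m and width
        return grad_output * damp, None, None


if __name__ == "__main__":
    x = torch.linspace(-2.0, 2.0, 9, requires_grad=True)
    y = SmoothedModulo.apply(x, 1.0, 0.1).sum()
    y.backward()
    print(x.grad)  # 0 at integer quotients, 1 elsewhere
```

Whether this is useful probably depends on the task: away from the wrap points it behaves like the identity, so the open question is whether the optimizer suffers more from the forward discontinuity than from the zeroed gradient near it.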