Converting NN outputs to Long in custom Loss function

Is it possible to convert the output of a NN to a long dtype inside a custom loss function and still get a non-zero gradient?
As I understand it, operations like floor/ceil will give me a zero gradient.

This won’t work: Autograd cannot differentiate through LongTensors, so the conversion detaches the computation graph, as seen here:

import torch

x = torch.randn(1, requires_grad=True)
y = x * 1
print(y.grad_fn)
> <MulBackward0 object at 0x000001D991AB65E0>

z = y.long()
print(z.grad_fn)
> None

You could use round, ceil, or floor instead; these keep the tensor in floating point, so the graph stays attached (though, as you noted, their gradient is zero almost everywhere).
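If you need gradients to actually flow through the rounding step, a common workaround (not mentioned in the original answer) is the straight-through estimator: use the rounded value in the forward pass, but let the backward pass treat rounding as the identity. A minimal sketch:

```python
import torch

# Straight-through estimator: forward pass sees x.round(),
# backward pass treats the rounding as the identity mapping.
x = torch.randn(4, requires_grad=True)
x_rounded = x + (x.round() - x).detach()  # equals x.round() in the forward pass

loss = (x_rounded ** 2).sum()
loss.backward()
print(x.grad)  # non-zero wherever x.round() != 0
```

Here `(x.round() - x).detach()` contributes no gradient, so `d x_rounded / d x == 1` and the gradient of the loss passes straight through to `x`.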