Converting NN outputs to Long in custom Loss function

Hi,
Is it possible to convert the output of a NN to a long dtype inside a custom loss function and still get a non-zero gradient?
As I understand it, operations like floor/ceil will give me a zero gradient.
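
A quick check of what I mean (a minimal sketch):

import torch

x = torch.randn(1, requires_grad=True)
y = torch.floor(x)  # floor stays in the graph...
y.backward()
print(x.grad)  # ...but its gradient is zero almost everywhere
> tensor([0.])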

This won’t work: Autograd cannot differentiate through LongTensors, so the cast would detach the tensor from the computation graph, as seen here:

import torch

x = torch.randn(1, requires_grad=True)
y = x * 1
print(y.grad_fn)  # the multiplication is recorded in the graph
> <MulBackward0 object at 0x000001D991AB65E0>

z = y.long()  # casting to long detaches z from the graph
print(z.grad_fn)
> None

You could use round, ceil, or floor instead; these keep the tensor attached to the computation graph, although, as you noted, their gradient is zero almost everywhere.
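
To illustrate (a minimal sketch, not from the original post): round keeps grad_fn set, unlike the long() cast above. If you need a useful training signal through the rounding, one common general-purpose workaround is a straight-through estimator, shown at the end:

import torch

x = torch.randn(1, requires_grad=True)
y = torch.round(x)
print(y.grad_fn)  # set, so the graph is intact
> <RoundBackward0 object at 0x...>
y.backward()
print(x.grad)  # the gradient of round is zero almost everywhere
> tensor([0.])

# Straight-through estimator: the forward pass uses the rounded value,
# the backward pass treats the rounding as the identity.
x.grad = None
z = x + (torch.round(x) - x).detach()
z.backward()
print(x.grad)
> tensor([1.])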