Tensor.long() loses requires_grad

Hi,

I have a question regarding tensor type conversion.
I have a tensor x with requires_grad=True, which is computed from a leaf tensor that I created.
For some reason, I need the tensor x in long dtype, so I applied x = x.long(). However, the resulting x doesn't have requires_grad=True.

I wonder how I can propagate requires_grad when converting the tensor to long dtype.
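
To make it concrete, here is a minimal reproduction (the tensor names are just illustrative):

import torch

w = torch.randn(3, requires_grad=True)  # leaf tensor created by me
x = w * 2                               # computed from the leaf, x.requires_grad is True
print(x.requires_grad)                  # True
print(x.long().requires_grad)           # False: the long copy is detached from the graph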

I appreciate your help!

Gradients are only defined for floating point dtypes, and you would get an error if you tried to enable them on another dtype:

import torch

x = torch.randn(1, requires_grad=True)
x.long().requires_grad_()  # fails: long is not a floating point dtype
> RuntimeError: only Tensors of floating point dtype can require gradients

Thank you for the reply.

Indeed, a long tensor cannot have a gradient. Then I wonder if we can make the indices differentiable for the function image.index_put_(indices, values, accumulate=True). The indices are pixel locations, which have requires_grad=True and are computed from the leaf tensor. image is a zero tensor. I want to write some values for specific pixels into image. Is it possible to allow differentiable indices in this case? A minimal sketch of my setup is below.
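
For reference (shapes and names are illustrative), the .long() conversion required by index_put_ is what breaks the graph:

import torch

params = torch.randn(4, 2, requires_grad=True)    # leaf tensor created by me
float_indices = (params * 10 + 32).clamp(0, 63)   # pixel locations, requires_grad=True
indices = float_indices.long()                    # requires_grad=False: graph is broken here
image = torch.zeros(64, 64)                       # zero tensor to write into
values = torch.ones(4)                            # values to write at the pixel locations
image.index_put_((indices[:, 0], indices[:, 1]), values, accumulate=True)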

I don't think this would be possible, as

> The indices are pixel locations, which have requires_grad=True

would be disallowed, wouldn't it?
You could probably try to come up with a custom autograd.Function, which would define (somehow) the backward pass and the gradients for the indices, although I don't have a good idea of if and how this could work.
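
For what it's worth, here is a minimal sketch of what such a custom autograd.Function could look like. The gradient for values is exact; the gradient for the indices uses a central finite-difference estimate of grad_output around each written pixel, which is purely an assumption on my part:

import torch

class ScatterAtIndices(torch.autograd.Function):
    # Scatters `values` into `image` at float `indices` of shape (N, 2),
    # rounding the indices to long for index_put.

    @staticmethod
    def forward(ctx, image, indices, values):
        idx = indices.round().long()
        rows, cols = idx[:, 0], idx[:, 1]
        out = image.index_put((rows, cols), values, accumulate=True)
        ctx.save_for_backward(idx, values)
        ctx.image_shape = image.shape
        return out

    @staticmethod
    def backward(ctx, grad_output):
        idx, values = ctx.saved_tensors
        rows, cols = idx[:, 0], idx[:, 1]
        H, W = ctx.image_shape
        # exact: d out / d values is a read of grad_output at the indices
        grad_values = grad_output[rows, cols]
        # assumption: approximate d out / d indices by central differences
        # of grad_output around each written pixel, scaled by the values
        d_row = (grad_output[(rows + 1).clamp(max=H - 1), cols]
                 - grad_output[(rows - 1).clamp(min=0), cols]) / 2
        d_col = (grad_output[rows, (cols + 1).clamp(max=W - 1)]
                 - grad_output[rows, (cols - 1).clamp(min=0)]) / 2
        grad_indices = torch.stack([d_row, d_col], dim=1) * values.unsqueeze(1)
        # d out / d image is the identity, since index_put accumulates into it
        return grad_output, grad_indices, grad_values

# usage: the indices stay float and keep requires_grad through the op
indices = (torch.rand(4, 2) * 63).requires_grad_()
values = torch.randn(4, requires_grad=True)
out = ScatterAtIndices.apply(torch.zeros(64, 64), indices, values)
out.pow(2).sum().backward()
print(indices.grad.shape)  # torch.Size([4, 2])

Whether a finite-difference estimate like this is a sensible gradient for the indices depends entirely on the application, so treat it as a starting point only.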

Yeah, I get it. Thank you very much.