How to convert to long while maintaining learnability

Hello,
the output of my net is a regression expressing pixel spatial offsets Delta X from the closest marker. I need to use this Delta X as an index into a matrix used in the customised loss function. The problem is that if I convert Delta X to long, the requires_grad of Delta X is automatically switched to False, I guess because the conversion is not a differentiable operation. As such, the net does not learn. On the other hand, if I don't convert Delta X to long, I obviously get an error, because the index of a matrix cannot be a float.
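A minimal sketch of the problem (the shapes and the loss_matrix name are hypothetical stand-ins, not the poster's actual code):

    import torch

    # Stand-in for the net's float offsets Delta X (hypothetical shape).
    delta_x = (torch.rand(4) * 10).requires_grad_()
    print(delta_x.long().requires_grad)  # False: integer tensors cannot carry gradients

    loss_matrix = torch.randn(10)        # hypothetical lookup matrix
    # loss_matrix[delta_x]               # IndexError: tensors used as indices must be long/int/bool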

Any suggestion on how I can bypass this problem?

Thanks for helping

This sounds related to quantization; have you tried cloning your “Delta X” into a separate tensor that you convert to long for your indexing? You can use detach to keep the clone separate from the original float tensor.
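A minimal sketch of this suggestion, again with hypothetical names and shapes: the float Delta X stays in the autograd graph, and only a detached long copy is used for indexing.

    import torch

    delta_x = (torch.rand(4) * 10).requires_grad_()  # stand-in for the net output
    loss_matrix = torch.randn(10)                    # hypothetical lookup matrix

    idx = delta_x.detach().long()   # detached copy; safe to use as an index
    vals = loss_matrix[idx]         # no gradient flows through this lookup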

If your "customized loss function" is piecewise constant, as integer indexing implies, you'll have zero loss derivatives almost everywhere, i.e. there is no "right" direction in which to minimize the loss once you discretize. I don't know what your lookup matrix is doing, but maybe you can switch to using loss multipliers, so that the float Delta_X doesn't disappear from the graph (akin to delta * table[delta.long()], combined with the answer above).
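A sketch of the multiplier idea, assuming a 1-D lookup table: the table entry scales the float delta, so the loss keeps a nonzero derivative with respect to delta even though the indexing itself is discrete.

    import torch

    delta = (torch.rand(4) * 10).requires_grad_()  # hypothetical float offsets
    table = torch.rand(10)                         # hypothetical lookup table

    loss = (delta * table[delta.long()]).sum()     # float delta stays in the graph
    loss.backward()
    print(delta.grad)  # equals table[delta.long()]: nonzero, so the net can learn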

Thank you guys, I will try and let you know.