This is a solution, but why does PyTorch force us to increase the memory footprint of an int/byte tensor when doing a simple nearest neighbour interpolation?
I guess the main reason is that this function is not (yet) implemented for int/byte types, since these types cannot be used for gradient calculation.
If you think this feature is not an edge case and would be used by a lot of users, we would be more than happy to accept contributions.
uint8 is the standard image format, which is why interpolation is supported for this integer type. My guess about the lack of long support would be that it might not have been requested yet (so wasn’t implemented) and would increase the binary size, as a new dtype would be added to the upsample methods (apparently for an edge case).
If your data isn’t overflowing in float, you could convert it to float before applying the interpolation and cast it back afterwards.
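A minimal sketch of that workaround, assuming a hypothetical integer label map stored as a long tensor: cast to float, interpolate with nearest-neighbour mode (which only copies existing values, so the result is exact as long as the integers fit in float32, i.e. stay below 2**24), then cast back.

```python
import torch
import torch.nn.functional as F

# Hypothetical example: a segmentation label map stored as int64 (long)
labels = torch.randint(0, 10, (1, 1, 4, 4), dtype=torch.long)

# F.interpolate does not accept long tensors, so round-trip through float.
# Nearest-neighbour never mixes values, so no precision is lost as long as
# the integers are exactly representable in float32 (< 2**24).
resized = F.interpolate(labels.float(), scale_factor=2, mode="nearest").long()

print(resized.shape)
```

Since nearest-neighbour only duplicates source pixels, every value in `resized` is guaranteed to appear in the original `labels`.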