How can I track down where I'm getting UserWarning: floor_divide is deprecated?

/opt/conda/lib/python3.7/site-packages/torch/_tensor.py:575: UserWarning: floor_divide is deprecated, and will be removed in a future version of pytorch. It currently rounds toward 0 (like the 'trunc' function NOT 'floor'). This results in incorrect rounding for negative values.
To keep the current behavior, use torch.div(a, b, rounding_mode='trunc'), or for actual floor division, use torch.div(a, b, rounding_mode='floor'). (Triggered internally at  /pytorch/aten/src/ATen/native/BinaryOps.cpp:467.)
  return torch.floor_divide(self, other)

I'm getting a lot of these warnings, but without a traceback pointing at my code.

The warning is triggered by a floor division, which could have been executed via torch.floor_divide or the floor division operator (a // b), so you could check your code for usages of those.
You could also try to convert warnings to errors via:

import warnings
warnings.filterwarnings("error")

to hopefully get a proper stack trace.
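
If raising every warning to an error is too aggressive, you could also escalate just this one message. The pattern below is taken from the warning text quoted above; filterwarnings matches it as a regex against the start of the message:

import warnings

# Escalate only the floor_divide deprecation warning to an error,
# so the traceback points at the line performing the division.
warnings.filterwarnings("error", message="floor_divide is deprecated")

Alternatively, running the script via python -W error::UserWarning turns all UserWarnings into errors without touching the code.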

I am also getting this warning. @ptrblck Shouldn’t this be fixed in PyTorch itself, as the warning suggests, instead of manually filtering warnings?

I don’t think the warning should be changed in PyTorch itself; the user should decide whether they want to fix or ignore it. My suggestion was to raise the warning to an error in order to figure out which line of code raises it, so that you can fix it in your code.

Well, I’m just using tensor // 2.
I don’t see what there is to fix.
This warning is kind of useless.

The warning provides two explicit ways to fix this, depending on which behavior you expect.
Sure, if you don’t find the warning useful, feel free to ignore it. 😉
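
For illustration, this minimal sketch with toy values shows how the two suggested rounding modes differ on negative inputs:

import torch

a = torch.tensor([-3, 3])

# rounding_mode='trunc' reproduces the old // behavior: round toward zero
print(torch.div(a, 2, rounding_mode='trunc'))  # tensor([-1,  1])

# rounding_mode='floor' is actual floor division, matching Python's //
print(torch.div(a, 2, rounding_mode='floor'))  # tensor([-2,  1])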

But I want to use plain Python // and I have no negative values in these tensors.
So in my case, this warning is just spam.
It is also triggered in every DataLoader worker process, so it spams a lot.

In that case, since you’ve made sure you won’t run into the issue behind this deprecation, filter out the warning.
Other users should still be notified that the div op changed its behavior, and should be careful about their code, as it could suddenly break in later releases and create a debugging hell (e.g. the model suddenly stops converging).
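
A targeted filter keeps all other warnings intact; as before, the message pattern is taken from the quoted warning text:

import warnings

# Silence only the floor_divide deprecation warning;
# every other warning still surfaces normally.
warnings.filterwarnings("ignore", message="floor_divide is deprecated")

Note that with num_workers > 0 each DataLoader worker may need the filter installed as well (e.g. via worker_init_fn), depending on the multiprocessing start method.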

I think filtering out all warnings might be a bad idea.
Adding a context manager or using the PyTorch function would make the code a bit less readable.
So for now I’ve decided the warning spam is the lesser evil.
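
For reference, the context-manager variant mentioned above could look roughly like this (a sketch; x stands in for whatever tensor is being divided):

import warnings
import torch

x = torch.tensor([6, 9])  # placeholder tensor for this sketch

# Suppress the warning only inside this block; the previous
# warning filters are restored when the block exits.
with warnings.catch_warnings():
    warnings.filterwarnings("ignore", message="floor_divide is deprecated")
    result = x // 2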