Is this requires_grad unpropagation possible?

x.requires_grad
| True
x.is_leaf
| True
y = x * 2
y.requires_grad
| False
x
| tensor([[ 0.2089, -0.0671],
| [ 0.5331, -0.9865],
| [-0.4298, 0.7363],
| …,
| [ 0.2710, -0.1222],
| [-0.4925, -0.3631],
| [ 0.2648, -0.0363]], device='cuda:0', requires_grad=True)

I don’t know if the posted code shows what you are seeing right now or what you would like to achieve. In the latter case, wrap y = x * 2 in a with torch.no_grad() block and y.requires_grad will be False.
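Something like this minimal sketch (shapes are placeholders, CPU instead of CUDA so it runs anywhere) illustrates the difference between the two cases:

import torch

# Stand-in tensor; the posted output shows a CUDA tensor, but the
# behavior is the same on CPU.
x = torch.randn(3, 2, requires_grad=True)

# Default: requires_grad propagates from x to y.
y = x * 2
print(y.requires_grad)  # True

# Inside no_grad the result is detached from the graph.
with torch.no_grad():
    y = x * 2
print(y.requires_grad)  # False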

Apologies, I made this post in a rush and it was unclear…
This is the code I am seeing, and it is not wrapped in a torch.no_grad() context.
Running the code itself works fine.
In the actual use case I have:

  1. the usual training loop with loss.backward()
  2. every x steps, testing happens: some computations are carried out inside a with torch.no_grad() scope
  3. after this scope exits, the code above is called (not wrapped in anything) and I get the unexpected result

I seem to have ruled out that 2. makes any difference, so I was wondering whether a prior backward pass (via tensor.backward() without any arguments, no retain_graph, …) could somehow result in the issue I am seeing.
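For what it’s worth, a minimal sketch of steps 1–3 with stand-in tensors (not my real code) still propagates requires_grad as expected, which makes me doubt the backward pass alone is the cause:

import torch

# Minimal sketch of steps 1-3 with stand-in tensors (not the real
# model/loss), just to check whether a prior backward pass or an
# exited no_grad scope could change the behavior.
x = torch.randn(8, 2, requires_grad=True)

# 1. plain backward pass, no arguments, no retain_graph
loss = (x * 2).sum()
loss.backward()

# 2. "testing" inside a no_grad scope
with torch.no_grad():
    _ = x * 2

# 3. after the scope exits, grad mode should be restored
print(torch.is_grad_enabled())  # True
y = x * 2
print(y.requires_grad)          # True here, unlike in my real code

Maybe printing torch.is_grad_enabled() right before the y = x * 2 line in the real code would show whether some grad-mode context is still active at that point.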

And of course the code I posted is only “representative”, so I guess I’ll have to reduce it and see exactly where the issue pops up…

For reference, the code actually being called is a copy-paste of:

up to cell [5]; calling the get_drift(…) function then results in log_p failing to have requires_grad=True.