Memory leaks from custom function

Gotcha. I'm in the process of converting that. Now I am getting the error “RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.cuda.FloatTensor [32, 10]] is at version 2; expected version 1 instead. Hint: enable anomaly detection to find the operation that failed to compute its gradient, with torch.autograd.set_detect_anomaly(True).
Uncaught exception. Entering post mortem debugging”, which I imagine is because that is exactly what I am doing. Is there a way to turn off this error or get around it? The value I need to use is not calculated until after the forward pass. It also does not require a gradient, if that helps.
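To make the situation concrete, here is a toy version of what I think is happening for the tensor that does not need a gradient (made-up op and shapes, not my actual code): saving the tensor itself in forward and then editing it in place afterwards trips the version check, whereas saving a clone does not.

```python
import torch

class Double(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x):
        out = 2 * x
        # Saving `out` itself would record its version; editing `out` in
        # place after the forward pass would then raise the version error
        # at backward. Saving a clone sidesteps that check.
        ctx.save_for_backward(out.clone())
        return out

    @staticmethod
    def backward(ctx, grad_out):
        (saved,) = ctx.saved_tensors  # still the forward-time value
        return 2 * grad_out

x = torch.randn(32, 10, requires_grad=True)
y = Double.apply(x)
y[:, 0] = 0.0        # post-forward in-place edit; the saved clone is untouched
y.sum().backward()   # no RuntimeError, since the clone was never modified
```

That only works when backward is fine with the forward-time value, though, hence the edit below.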
Edit: one of my other saved tensors does require a gradient, so that does not help. I do need to entirely replace one of these saved tensors after the fact, i.e. I need ctx.save_for_backward to effectively save a pointer rather than the actual tensor so I can change it later. The approach from the other thread (How to transition to functions not being allowed to have member variables) was working; it is just causing memory leaks that I am now seeing with a larger network.
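To illustrate what I mean by saving a pointer rather than the tensor itself, something along these lines, where a plain Python list is handed to the Function and filled in after the forward pass, and backward reads whatever is in it at that time (toy op and made-up names; I suspect the manual clearing in backward is the part that matters for the leaks):

```python
import torch

class Deferred(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, holder):
        # `holder` is a one-element list acting as a mutable slot; whatever
        # is in holder[0] when backward runs is what backward will use.
        ctx.holder = holder
        return 2 * x

    @staticmethod
    def backward(ctx, grad_out):
        scale = ctx.holder[0]  # value filled in after the forward pass
        ctx.holder = None      # drop the reference: tensors kept on ctx outside
                               # save_for_backward stay alive with the graph and
                               # are a common source of memory growth
        return 2 * grad_out * scale, None  # None for the non-tensor `holder` arg

holder = [None]
x = torch.randn(32, 10, requires_grad=True)
y = Deferred.apply(x, holder)
holder[0] = torch.tensor(0.5)  # computed only after the forward pass
y.sum().backward()
```

The downside is that this sidesteps both the version check and save_for_backward's bookkeeping, so anything left in the slot (or stored directly on ctx) stays alive as long as the graph does, which lines up with the leaks I am now seeing on the larger network.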