Dynamically Block Backprop through Variable

Hi PyTorchers, I promise this will be the last of my many questions today.

Is there a way to dynamically block backprop through a variable? By dynamically I mean after the variable has been created, make it forget its parent variables.

My application is that I want to implement truncated BPTT and stop backprop through the hidden state from T steps ago. Each hidden state has several backward passes running through it, one from the loss at each of several future time steps (I use loss.backward(retain_graph=True)).

Things I’ve tried:

  • Setting var.requires_grad = False - but PyTorch raises an error when you try that on a non-leaf variable.
  • Using detached_var = var.detach() - but that requires me to block backprop at the time the variable is created. That won't work for me, because I still want to backprop through the variable several times before cutting it off.

I need something like var.detach_inplace(). I would be happy with any hack that allows me to do this.


You could create your own autograd.Function that does this. It should be < 10 lines tops.
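A minimal sketch of what such a Function might look like. The original reply didn't include code, so the name BlockableGrad and the mutable-flag trick are my own: the Function is an identity in forward, and its backward consults a flag you can flip at any time, even after the graph has been built. (Note that newer PyTorch versions also provide an in-place `var.detach_()`, which may be all you need.)

```python
import torch


class BlockableGrad(torch.autograd.Function):
    """Identity in forward; backward can be cut off dynamically.

    `blocked` is a one-element list (a mutable container), so flipping
    blocked[0] = True after the graph is built stops gradients from
    flowing past this node on subsequent backward passes.
    """

    @staticmethod
    def forward(ctx, x, blocked):
        ctx.blocked = blocked  # keep a reference to the shared flag
        return x.view_as(x)    # identity

    @staticmethod
    def backward(ctx, grad_output):
        if ctx.blocked[0]:
            return None, None          # stop backprop here
        return grad_output, None       # pass the gradient through


# Usage: backprop through y while blocked[0] is False, then flip the flag.
blocked = [False]
x = torch.ones(3, requires_grad=True)
y = BlockableGrad.apply(x, blocked)

y.sum().backward(retain_graph=True)    # x.grad receives ones
blocked[0] = True
x.grad = None
y.sum().backward()                     # gradient is now blocked; x.grad stays None
```

For truncated BPTT you would wrap each hidden state in BlockableGrad with its own flag, and flip the flag once that state is T steps in the past.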


Did you succeed on this? It would be great to see your code.