Process hangs when calling backward inside a backward hook?

I tested whether it is allowed to update another module inside a backward hook with the following code:

import torch
import torch.nn as nn
from torch.autograd import Variable
from torch.optim import Adam

x = Variable(torch.randn(3, 4), requires_grad=True)
x2 = Variable(torch.randn(3, 4), requires_grad=True)
y = nn.Linear(4, 5)

a = (3 * x).sum()
b = y(x2)

oa = Adam([x])
ob = Adam(y.parameters())

print('Before update', y)

def update(grad):
    # Hook fired while a.backward() is running; it tries to run a second,
    # unrelated backward pass and an optimizer step for y.
    loss = b.sum()
    loss.backward()  # the process hangs here
    ob.step()
    print('After update', y)

a.register_hook(update)
a.backward(retain_variables=True)

As you can see, what the update function does is unrelated to Variable a and x, but I found that the process hangs when calling loss.backward(). Is it allowed to call backward in backward hooks?

Hi,

This is the same issue as this one: https://github.com/pytorch/pytorch/issues/1776
Unfortunately, it is not possible to use .backward() during another .backward() call (which is when your hook is called).
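
For what it's worth, a minimal sketch of one way to stay within this limitation: have the hook only record that an update is wanted, and run the second backward pass and optimizer step after the outer .backward() has returned. The pending list and the tensor-style API (no Variable) are my own illustration, roughly following the snippet above:

import torch
import torch.nn as nn
from torch.optim import Adam

x = torch.randn(3, 4, requires_grad=True)
x2 = torch.randn(3, 4, requires_grad=True)
y = nn.Linear(4, 5)

a = (3 * x).sum()
b = y(x2)
ob = Adam(y.parameters())

pending = []  # updates requested by the hook, to be run later

def update(grad):
    # Do not call backward() here; just record that an update was requested.
    pending.append(True)

a.register_hook(update)
a.backward()

# The outer backward() has returned, so this backward is not nested.
if pending:
    b.sum().backward()
    ob.step()
    print('After update', y.weight.sum().item())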

OK. I think it is a limitation of this library. Thanks for your help.

BTW, on master, autograd is now reentrant.
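
For anyone reading this later, a minimal sketch of what that enables, assuming a release with the reentrant engine and an inner backward that runs over a graph unrelated to the outer one (the y.weight printout is just illustrative):

import torch
import torch.nn as nn
from torch.optim import Adam

x = torch.randn(3, 4, requires_grad=True)
x2 = torch.randn(3, 4, requires_grad=True)
y = nn.Linear(4, 5)

a = (3 * x).sum()
b = y(x2)
ob = Adam(y.parameters())

def update(grad):
    # Inner backward over an unrelated graph, run from inside the hook.
    b.sum().backward()
    ob.step()
    print('After update', y.weight.sum().item())

a.register_hook(update)
a.backward()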


Hi there,
I use version 0.4, but I still cannot call backward() in a backward hook function. If I do, it goes into an endless loop. Does anyone know the state of this issue? An example script would be appreciated, thanks a lot!