"RuntimeError: Trying to backward through the graph a second time, but the buffers have already been freed. Specify retain_graph=True when calling backward the first time" while using custom loss function

I don’t think the error is in your loss function. Any loss function would cause the same error here.

Am I right in saying that your training loop doesn’t detach or repackage the hidden state in between batches? If so, then loss.backward() is trying to back-propagate all the way through to the start of time, which works for the first batch but not for the second because the graph for the first batch has been discarded.
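For concreteness, here is a minimal sketch of the pattern I'm assuming (the model, shapes, and toy data below are made up purely for illustration):

```python
import torch
import torch.nn as nn

rnn = nn.LSTM(input_size=10, hidden_size=20, batch_first=True)
criterion = nn.MSELoss()
optimizer = torch.optim.SGD(rnn.parameters(), lr=0.01)

# toy batches: (input, target) pairs of shape (batch, seq_len, features)
batches = [(torch.randn(4, 5, 10), torch.randn(4, 5, 20)) for _ in range(3)]

hidden = None  # carried across batches without detaching
for x, y in batches:
    out, hidden = rnn(x, hidden)  # hidden still points into the previous batch's graph
    loss = criterion(out, y)
    optimizer.zero_grad()
    loss.backward()   # raises the RuntimeError on the second batch
    optimizer.step()
```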

If I am right, then there are two possible solutions.

  1. detach/repackage the hidden state in between batches (see the sketch after this list). There are (at least) three ways to do this.

    1. hidden.detach_()
    2. hidden = hidden.detach()
    3. hidden = Variable(hidden.data, requires_grad=True) (this uses the old Variable API; on recent PyTorch versions the first two forms are the usual way)
  2. replace loss.backward() with loss.backward(retain_graph=True), but be aware that each successive batch will take more time than the previous one, because it has to back-propagate all the way through to the start of the first batch.
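Picking up the sketch above, the first fix amounts to one extra line in the loop. For an LSTM the hidden state is a tuple, so both tensors need to be detached (again, the names and shapes are only illustrative):

```python
hidden = None
for x, y in batches:
    out, hidden = rnn(x, hidden)
    # repackage: cut the graph so back-propagation stops at the current batch
    hidden = tuple(h.detach() for h in hidden)
    loss = criterion(out, y)
    optimizer.zero_grad()
    loss.backward()   # now only back-propagates through this batch's graph
    optimizer.step()
```

This is the usual truncated back-propagation-through-time setup: gradients flow within a batch but not across batch boundaries.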
