RuntimeError: Trying to backward through the graph a second time, but the buffers have already been freed. Specify retain_graph=True when calling backward the first time

Hi,

The problem in your case is that inside your training loop you do Yest = Yest + w*X, which modifies the "preallocated" buffer that you created. The fact that you reset its value does not remove its history, and so your computational graph keeps growing because it remembers everything.
You can solve that by changing this for loop to accumulate into another name, for example Yest_local = Yest_local + w*X, and setting Yest_local = Yest before the loop.
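For illustration, here is a minimal sketch of that fix under assumed names and shapes (a list of weights, a 100x1 input X, a dummy squared loss, no optimizer): the preallocated Yest stays untouched as a leaf, while a local name does the accumulation inside each training iteration, so each backward only sees that iteration's graph.

import torch
from torch.autograd import Variable

# Assumed shapes and names, for illustration only.
X = Variable(torch.randn(100, 1))
weights = [Variable(torch.randn(1), requires_grad=True) for _ in range(5)]

Yest = Variable(torch.zeros(100, 1))    # preallocated buffer, never modified below

for epoch in range(3):                  # training loop
    # Doing `Yest = Yest + w * X` here would extend the previous epoch's
    # (already freed) graph and trigger the error above.
    Yest_local = Yest                   # start from the buffer, leave Yest untouched
    for w in weights:
        Yest_local = Yest_local + w * X # graph only covers this epoch
    loss = (Yest_local - 1).pow(2).sum()
    loss.backward()
    for w in weights:
        w.grad.data.zero_()             # manual gradient reset, since there is no optimizer here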

That being said, a better way to solve your issue is to keep buffers as Tensors and wrap them in Variables only when you need them. That way, you are sure that your graph will not accidentally expand back into previous iterations.
In this case, you would define this as follows:
Before the training loop:
Yest_buffer = torch.zeros(100, 1).cuda()
For each iteration of the training loop:

Yest_buffer.zero_().add_(0.5)  # reset the plain Tensor buffer in place, no history is recorded
Yest = V(Yest_buffer)          # V is torch.autograd.Variable; wrap just before autograd is needed

Keep in mind that wrapping a Tensor in a Variable is completely free, and you should always wrap your Tensors as late as possible, ideally just before the moment where you actually need autograd.
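Putting the buffer pattern together, a minimal sketch could look like the following (V stands for torch.autograd.Variable; the 100x1 shape, the single weight w, and the dummy loss are assumptions, and the .cuda() calls carried over from the snippet above require a GPU):

import torch
from torch.autograd import Variable as V

# Assumed parameters and input, for illustration only.
w = V(torch.randn(1).cuda(), requires_grad=True)
X = V(torch.randn(100, 1).cuda())

# Plain Tensor buffer, allocated once before the training loop.
Yest_buffer = torch.zeros(100, 1).cuda()

for epoch in range(3):                  # training loop
    # Reset the buffer in place; since it is a plain Tensor, no history is recorded.
    Yest_buffer.zero_().add_(0.5)
    # Wrap as late as possible: the graph starts fresh from this leaf Variable.
    Yest = V(Yest_buffer)
    loss = ((Yest + w * X) - 1).pow(2).sum()
    loss.backward()                     # only this iteration's graph is traversed
    w.grad.data.zero_()                 # manual gradient reset, since there is no optimizer here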
