Loss backward throws RuntimeError

Hello,

I have a problem that might have a very simple solution, but it has been bugging me for a few hours now.

I have several models defined as:

import torch
import torch.nn as nn

class Model(nn.Module):
	def __init__(self, base):
		super(Model, self).__init__()
		self.base = base  # list of callable base functions
		self.coeff = nn.Parameter(torch.randn(len(base)), requires_grad=True)

	def forward(self, x):
		res = []
		for x_i in x:
			# weighted sum of the base functions evaluated at x_i
			res.append(torch.tensor([c * f(x_i) for c, f in zip(self.coeff, self.base)], requires_grad=True).sum())
		return torch.stack(res)

I use a custom loss function, defined as:

class RegressionLoss(nn.Module):
	def __init__(self):
		super(RegressionLoss, self).__init__()

	def forward(self, a, p, y_, y, q, w):
		# weighted mean of the q-th power of the absolute error
		return (w * a * p * torch.abs(y_ - y) ** q).mean()

All the models and loss modules sit in separate lists, i.e. I have two lists named models[] and loss_fn[]. I also have a list of LBFGS optimizers, named optimizers[].

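For context, setting these lists up looks roughly like this (the base functions, the number of models K, and the learning rate are placeholders, not my actual values):

base = [torch.sin, torch.cos, torch.exp]  # placeholder base functions
K = 3  # placeholder number of models
models = [Model(base) for _ in range(K)]
loss_fn = [RegressionLoss() for _ in range(K)]
optimizers = [torch.optim.LBFGS(m.parameters(), lr=0.1) for m in models]
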
I call the following function in a loop:

	def lbfs_KHM(self, k, a_values, p_values, x, y, w, q=2):
		def closure():
			# LBFGS may evaluate this closure several times per step
			self.optimizers[k].zero_grad()
			output = self.models[k](x)
			loss = self.loss_fn[k](a_values, p_values, output, y, q, w)
			loss.backward()
			return loss

		self.optimizers[k].step(closure)

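The outer loop itself is roughly this (heavily simplified; compute_weights is just a hypothetical placeholder for how I actually compute a_values and p_values):

for k in range(len(self.models)):
	a_values, p_values = self.compute_weights(x, y)  # hypothetical helper
	self.lbfs_KHM(k, a_values, p_values, x, y, w)
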
Although I assumed this would create separate graphs for the different models and losses, I get the following error when calling loss.backward():

RuntimeError: Trying to backward through the graph a second time, but the buffers have already been freed. Specify retain_graph=True when calling backward the first time.

Well, apparently the problem occurred because I used the models to calculate the tensors a_values and p_values, so those tensors still carried the models' computation graphs. Calling .detach() on both of them solved the problem!
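
In code, the change amounts to this (same hypothetical compute_weights placeholder as above; the only point is the two .detach() calls):

# a_values and p_values still carried the graphs of the models that produced them,
# so each new backward() tried to walk through graphs whose buffers were already freed.
# Detaching them cuts that link before they enter the next loss computation.
a_values, p_values = self.compute_weights(x, y)  # hypothetical helper
self.lbfs_KHM(k, a_values.detach(), p_values.detach(), x, y, w)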
