RuntimeError: One of the variables needed for gradient computation has been modified by an inplace operation

Hello everyone, I encountered the error above while working on a neural network project with PyTorch. Here is my Module class:

import torch
import torch.nn.functional as Functional

class MyNeural(torch.nn.Module):
    def __init__(self, columns):
        super(MyNeural, self).__init__()
        # input/hidden/output layers
        # We need to specify the number of hidden layers we are going to use
        self.f1 = torch.nn.Linear(columns, 32)
        self.f2 = torch.nn.Linear(32, 16)
        self.f3 = torch.nn.Linear(16, 1)

    def forward(self, x):
        x = self.f1(x)
        x = Functional.relu(x)
        x = self.f2(x)
        x = Functional.relu(x)
        x = self.f3(x)
        x = Functional.relu(x)
        return x

And here is my training method:

    def training(self):
        self.net = MyNeural(self.data.shape[1])
        self.entropyloss = torch.nn.BCELoss()
        self.optim = torch.optim.Adam(self.net.parameters(), lr=0.001)
        self.epochs = 5
        for i in range(self.epochs):
            self.net.train(mode=True)
            train_loss = 0.0
            loss = 0
            j = 0
            for data, targets in self.train_DL:
                cprint(f"iteration : {i}", 'blue')
                pred_targets = self.net(data)
                print("pred_targ: ", pred_targets)
                loss += self.entropyloss(pred_targets, targets)
                self.optim.zero_grad()
                loss.backward(retain_graph=True)
                self.optim.step()
                train_loss += loss.item()

You are accumulating the loss via loss += self.entropyloss(pred_targets, targets) in combination with retain_graph=True, which keeps all previous computation graphs alive. Updating the parameters in-place via self.optim.step() then makes the forward activations stored in those older graphs stale, as described in this post.
Could you explain your use case a bit more and especially why you are retaining the graph?
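If the goal is only to track the running loss per epoch, a common pattern is to compute a fresh loss for each batch, call backward() without retain_graph, and accumulate just the detached Python float via loss.item(). Below is a minimal sketch of such a loop, reusing the names from your training method and assuming the surrounding class provides self.data and self.train_DL as in your post:

    def training(self):
        self.net = MyNeural(self.data.shape[1])
        self.entropyloss = torch.nn.BCELoss()
        self.optim = torch.optim.Adam(self.net.parameters(), lr=0.001)
        self.epochs = 5
        for i in range(self.epochs):
            self.net.train(mode=True)
            train_loss = 0.0
            for data, targets in self.train_DL:
                pred_targets = self.net(data)
                # fresh loss for this batch only; not added to the previous batch's loss
                loss = self.entropyloss(pred_targets, targets)
                self.optim.zero_grad()
                # no retain_graph: this batch's graph is freed after backward()
                loss.backward()
                self.optim.step()
                # accumulate only the float value, which is detached from the graph
                train_loss += loss.item()

With this pattern each backward() only traverses the graph of the current batch, so the parameters updated in-place by self.optim.step() are never needed again by an older, retained graph, and the RuntimeError should disappear.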