Keep running into "Trying to backward through the graph a second time" error as a beginner

I am a beginner following tutorials and trying them out in my own Jupyter notebook. I make a lot of syntax mistakes, so I change my code a lot and rerun the cells frequently. Because of this, I constantly run into the "Trying to backward through the graph a second time" error. I have no idea how to fix this; the first good search result suggests passing retain_graph=True to .backward(), but I have tried adding that parameter and my notebook still shows the same error. How do I clear my graph so I can rerun my code block without encountering this issue?

My simple code block:

import torch
import torch.nn as nn
import torch.optim as optim
from torch.autograd import Variable

x1 = torch.Tensor([1, 2, 3, 4])
x1_var = Variable(x1, requires_grad=True)
target_y = Variable(torch.Tensor([0]), requires_grad=False)

linear_layer1 = nn.Linear(4, 1)
loss_function = nn.MSELoss()
optimizer = optim.SGD(linear_layer1.parameters(), lr=LEARNING_RATE) 
# Note we need to update linear_layer1 's weights

for epoch in range(TOTAL_EPOCHS):
    predicted_y = linear_layer1(x1_var)
    loss = loss_function(predict_y, target_y)
    optimizer.zero_grad()
    loss.backward(retain_graph=True)  # tried adding retain_graph=True, same error
    optimizer.step()

Setting .backward(retain_graph=True) does not work; I still get the same error. How do you fix this?

When creating the Variables (i.e. x1_var and target_y in your code), you do not need requires_grad=True, since you most likely want to train the parameters of the model rather than modify the input.
Also, this must be a typo: you call loss_function(predict_y, target_y) where it should be loss_function(predicted_y, target_y).
Those are the two things I see for now.
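For reference, here is a minimal sketch of how the corrected loop could look. The LEARNING_RATE and TOTAL_EPOCHS values are placeholders (they were not shown in the post), and plain tensors are used instead of Variable, which is deprecated in current PyTorch:

```python
import torch
import torch.nn as nn
import torch.optim as optim

LEARNING_RATE = 0.01  # placeholder value, not from the original post
TOTAL_EPOCHS = 100    # placeholder value

x1_var = torch.tensor([1.0, 2.0, 3.0, 4.0])  # input: no gradient needed
target_y = torch.tensor([0.0])               # target: no gradient needed

linear_layer1 = nn.Linear(4, 1)
loss_function = nn.MSELoss()
optimizer = optim.SGD(linear_layer1.parameters(), lr=LEARNING_RATE)

for epoch in range(TOTAL_EPOCHS):
    predicted_y = linear_layer1(x1_var)          # forward pass builds a fresh graph
    loss = loss_function(predicted_y, target_y)  # fixed name: predicted_y
    optimizer.zero_grad()                        # clear gradients from the last step
    loss.backward()                              # new graph each iteration, no retain_graph needed
    optimizer.step()
```

Because the forward pass runs inside the loop, a new graph is built on every iteration, so retain_graph=True is unnecessary.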

Thanks, I think it might have been the typo that kept causing the error.

Maybe try loss.backward(retain_graph=True).
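For context, retain_graph=True is usually a workaround rather than the fix: the error appears whenever .backward() is called a second time on the same graph, which is easy to do in a notebook by re-running a cell that calls backward without re-running the forward pass. A minimal sketch that reproduces it:

```python
import torch

x = torch.tensor([2.0], requires_grad=True)
y = x * x            # forward pass builds the autograd graph
y.backward()         # first backward frees the graph's intermediate buffers

try:
    y.backward()     # second backward on the same (freed) graph
except RuntimeError as e:
    print("second backward failed:", e)

# Re-running the forward pass builds a fresh graph, so backward works again:
y = x * x
y.backward()
```

In a notebook, re-running the cell that performs the forward pass before calling backward again avoids the error without retain_graph=True.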