I am a beginner and I have been following tutorials and trying them out in my own Jupyter notebook. I make a lot of syntax mistakes, so I edit my code and rerun the cells constantly. Because of this I keep running into the "Trying to backward through the graph a second time" error. I have no idea how to fix this. The first good search result suggests passing retain_graph=True to .backward(), but I have tried adding that parameter and my notebook still shows the same error. How do I clear my graph so I can rerun my code block without encountering this issue?
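From what I can tell, the error can be reproduced even outside my notebook by calling .backward() twice on the same graph. This is my own minimal example (the tensor and the arithmetic are made up, just for illustration):

    import torch

    w = torch.ones(3, requires_grad=True)  # made-up example tensor
    loss = (w * w).sum()                   # builds a small autograd graph

    loss.backward()  # first call succeeds and frees the graph's buffers
    loss.backward()  # RuntimeError: Trying to backward through the graph a second time

Is my situation the same thing, just spread across cell reruns instead of two lines?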
My simple code block:
    import torch
    import torch.nn as nn
    import torch.optim as optim
    from torch.autograd import Variable

    x1 = torch.Tensor([1, 2, 3, 4])
    x1_var = Variable(x1, requires_grad=True)
    target_y = Variable(torch.Tensor(), requires_grad=False)
    linear_layer1 = nn.Linear(4, 1)
    loss_function = nn.MSELoss()
    TOTAL_EPOCHS = 5
    LEARNING_RATE = 0.001
    # Note: we need to update linear_layer1's weights
    optimizer = optim.SGD(linear_layer1.parameters(), lr=LEARNING_RATE)

    for epoch in range(TOTAL_EPOCHS):
        linear_layer1.zero_grad()
        predicted_y = linear_layer1(x1_var)
        loss = loss_function(predicted_y, target_y)
        loss.backward()
        optimizer.step()

    print("------------------------------------------")
    print("x1_var")
Setting loss.backward(retain_graph=True) does not work; I still get the same error. How do you fix this?
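In case the placement matters, this is how I added the parameter (inside the training loop, the rest of the code unchanged):

    for epoch in range(TOTAL_EPOCHS):
        linear_layer1.zero_grad()
        predicted_y = linear_layer1(x1_var)
        loss = loss_function(predicted_y, target_y)
        loss.backward(retain_graph=True)  # keep the graph alive, but I still get the same error on rerun
        optimizer.step()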