Issue when running loss.backward()

I am clustering the target image into k clusters, and I want the pixels of my predicted image to learn those cluster values.
For that I run a loop over each cluster in range(k) and compute an MSE loss over all the pixels that belong to the same cluster. Since I am using a batch size of 8 images, the loss is computed across the whole batch for each cluster.
But then I run into the following issue:

```
RuntimeError: Trying to backward through the graph a second time (or directly access saved tensors after they have already been freed). Saved intermediate values of the graph are freed when you call .backward() or autograd.grad(). Specify retain_graph=True if you need to backward through the graph a second time or if you need to access saved tensors after calling backward.
```

Whereas I just want the losses obtained from each cluster to be accumulated so I can run backprop once.
Kindly help me with this issue.
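For context, a minimal sketch of the pattern described above; the names `target`, `cluster_labels`, and `k` are hypothetical (only `model`, `X_batch`, and `embeds` appear in the thread), and the shapes are assumed for illustration:

```python
import torch
import torch.nn.functional as F

# Assumed shapes: Y_pred and target are (B, H, W) with B = 8;
# cluster_labels is (B, H, W) holding an integer cluster id per pixel.
Y_pred = model(X_batch, embeds)   # one forward pass builds one graph

for c in range(k):
    mask = (cluster_labels == c)  # pixels of cluster c across the batch
    loss = F.mse_loss(Y_pred[mask], target[mask])
    # backward() frees the graph's saved tensors, so the second
    # iteration raises the RuntimeError quoted above.
    loss.backward()
```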

When calling loss.backward() you are implicitly erasing the computation graph that was built when you made a prediction with your model:

```python
Y_pred = model(X_batch, embeds)
```

Thus, inside the inner loop where you call backward() n_clusters times, you ask autograd to backward through a graph that was freed after the first inner-loop execution. As suggested, you can try setting `retain_graph=True` inside backward() to keep the graph, but I'm not sure this will work, as you are also repeatedly computing the loss. The simplest solution might be to make the prediction inside the inner loop.
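Since the question asks for all the per-cluster losses to be accumulated before backprop, here is a minimal sketch of that approach: sum the losses into one scalar and call backward() exactly once, so the graph is only consumed once. As above, `target`, `cluster_labels`, `k`, and `optimizer` are hypothetical names:

```python
import torch
import torch.nn.functional as F

Y_pred = model(X_batch, embeds)      # single forward pass, single graph

total_loss = Y_pred.new_zeros(())    # scalar accumulator on the right device/dtype
for c in range(k):
    mask = (cluster_labels == c)
    if mask.any():                   # skip clusters with no pixels in this batch
        total_loss = total_loss + F.mse_loss(Y_pred[mask], target[mask])

optimizer.zero_grad()
total_loss.backward()                # one backward through the summed loss
optimizer.step()
```

Making the prediction inside the inner loop would also work, since each forward pass rebuilds a fresh graph, but it costs n_clusters forward passes per batch.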

Hi @Ayush_Sarangi_4-Year,

Also, when sharing code, please copy and paste it and wrap it in three ``` symbols so that the code is properly formatted, like @ElLoboLoco did above:
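````
```python
# your code here
```
````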