Why, after adding a second loss and calling backward(), is the generated gradient the same as before?

You are detaching the computation graph by using numpy functions to compute the loss:

# np.histogram and .numpy() leave the autograd graph, so PGAUSSY, PGAN,
# and loss2 are plain numpy arrays/values with no grad_fn
hist_rGau = np.histogram(Gaussy.squeeze(1).view(-1).detach().numpy(), bins=FFBins, range=[0, 1])
count_r11 = hist_rGau[0]
PGAUSSY = count_r11 / count_r11.sum()

hist_rGAN = np.histogram(fake.squeeze(1).view(-1).detach().numpy(), bins=FFBins, range=[0, 1])
count_r22 = hist_rGAN[0]
PGAN = count_r22 / count_r22.sum()

loss2 = abs(PGAUSSY - PGAN).sum()
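
You can see the detachment directly: once a tensor passes through .numpy(), anything built from the result has no grad_fn, so backward() cannot reach the original parameters. A minimal check (the tensor names here are made up for illustration):

```python
import torch

x = torch.rand(4, requires_grad=True)
y = (x * 2).detach().numpy()        # leaves the autograd graph here
z = torch.tensor(y).sum()           # wrapping the numpy result does not reconnect it
print(z.requires_grad, z.grad_fn)   # False None -> gradients cannot flow back to x
```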

If you need to use numpy operations, you would have to implement the backward function manually via a custom autograd.Function, or use PyTorch operations instead so that autograd can track them.
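
As a rough sketch of the custom autograd.Function route, assuming the loss is the L1 distance between two probability vectors (the class name and shapes below are made up), the numpy computation goes in forward and the gradient is supplied by hand in backward:

```python
import numpy as np
import torch

class NumpyL1(torch.autograd.Function):
    """L1 distance computed in numpy, with a hand-written backward."""

    @staticmethod
    def forward(ctx, p, q):
        # p, q: 1-D tensors (e.g. normalized histogram counts)
        diff = p.detach().cpu().numpy() - q.detach().cpu().numpy()
        # sign(p - q) is the (sub)gradient of |p - q| w.r.t. p
        ctx.save_for_backward(torch.from_numpy(np.sign(diff)).to(p))
        return p.new_tensor(np.abs(diff).sum())

    @staticmethod
    def backward(ctx, grad_output):
        sign, = ctx.saved_tensors
        return grad_output * sign, -grad_output * sign

# usage: loss2 = NumpyL1.apply(pgaussy, pgan)  # both inputs must be torch tensors
```

Note that with pure PyTorch ops the same loss would simply be `(p - q).abs().sum()` and need no custom backward, and that the histogram binning itself is non-differentiable either way, so gradients can only flow from the normalized counts onward.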

Double post from: here, here, here, here, here.
Please don’t repost the same question in 6 different threads, as this will only waste the time of community members who might be looking into your issue.
