RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation for a loss function

Hey, so I have a custom loss involving a generator called Gen, some latent codes z (which are also learned), and the input data x. I wrote the custom loss function below and I am getting an in-place operation error. I followed the suggestions for enabling the traceback, and it points to loss.backward(). I am aware you are typically not supposed to use in-place operations on tensors that require gradients. I believe it is coming from the fact that z requires a grad, but there is an in-place operation on it via the dummy variable. Any help is appreciated.

import torch
import torch.nn as nn

class CustomLoss(nn.Module):
    def __init__(self):
        super(CustomLoss, self).__init__()

    def forward(self, Gen, z, x, device):
        loss_list = []
        z_tun = torch.zeros(1, z.size(1)).to(device)
        for i in range(z.size(0)):
            for j in range(z.size(1)):
                z = z.data.clone()
                z_dummy = z[i, :]
                z_tun[:, j] = z_dummy[j]
                loss = torch.norm(Gen(z_tun) - x, p=2)
                loss_list.append(loss)
        total_loss = sum(loss_list)
        total_loss /= z.numel()
        return total_loss

Could you remove the .data usage and check if you are still getting the error?
Using .data is not recommended and might yield unwanted side effects, since Autograd cannot track these operations.
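
For a quick look at what .data does here, a minimal sketch with toy tensors (not your actual z or model):

import torch

z = torch.randn(2, 3, requires_grad=True)

detached = z.data.clone()   # new tensor with no autograd history
connected = z.clone()       # clone that stays attached to the graph

print(detached.requires_grad)   # False -> nothing flows back into z
print(connected.requires_grad)  # True

Since z = z.data.clone() cuts the graph, z would never receive gradients from this loss even if the error were gone.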

Hey, thanks for the response. No, the error still exists; the message now points to output 0 of ViewBackward. I am not sure how to fix that. I believe I need to clone z correctly.

Check if the in-place division of total_loss causes this issue by replacing it with the out-of-place operation (total_loss = total_loss / z.numel()).
If not, use .clone() to narrow down the particular line of code.
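
If the culprit turns out to be the assignment into z_tun, something like this sketch might work. It assumes Gen accepts a (1, z.size(1)) input and that you want gradients to flow back into z: a fresh z_tun is created in every step, so the in-place write happens before Gen saves the tensor for backward, and the final division is done out-of-place:

import torch
import torch.nn as nn

class CustomLoss(nn.Module):
    def __init__(self):
        super().__init__()

    def forward(self, Gen, z, x, device):
        loss_list = []
        for i in range(z.size(0)):
            for j in range(z.size(1)):
                # fresh tensor per step: the in-place write happens before
                # Gen sees it, so no tensor saved for backward is modified
                z_tun = torch.zeros(1, z.size(1), device=device)
                z_tun[:, j] = z[i, j]  # no .data, so gradients can reach z
                loss_list.append(torch.norm(Gen(z_tun) - x, p=2))
        total_loss = sum(loss_list)
        return total_loss / z.numel()  # out-of-place division

If the error is still raised afterwards, adding .clone() to the tensors passed to Gen (e.g. Gen(z_tun.clone())) one at a time should point to the offending line.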