Hey, so I have a custom loss involving a generator Gen, latent codes z (which are also learned), and input data x. I wrote the custom loss function below, and I am getting an in-place operation error. I followed the suggestions for getting a better traceback, and the error is coming from loss.backward(). I am aware that you are typically not supposed to use in-place operations on tensors that require gradients, and I believe the problem is that z requires a grad but there is an in-place operation on it through the dummy variable. Any help is appreciated.
import torch
import torch.nn as nn

class CustomLoss(nn.Module):
    def __init__(self):
        super(CustomLoss, self).__init__()

    def forward(self, Gen, z, x, device):
        loss_list = []
        # buffer holding one latent vector, filled in coordinate by coordinate
        z_tun = torch.zeros(1, z.size(1)).to(device)
        for i in range(z.size(0)):
            for j in range(z.size(1)):
                z = z.data.clone()            # my attempt to detach z from the graph
                z_dummy = z[i, :]
                z_tun[:, j] = z_dummy[j]      # in-place write into z_tun
                loss = torch.norm(Gen(z_tun) - x, p=2)
                loss_list.append(loss)
        total_loss = sum(loss_list)
        total_loss /= z.numel()
        return total_loss
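
If it helps, here is a tiny standalone snippet of what I believe the failure mode is (buf and w are hypothetical names, not my actual model): a buffer is written into in place after an earlier forward pass has already saved it for backward.

import torch

buf = torch.zeros(1, 3)                    # reused buffer, like z_tun
w = torch.randn(3, requires_grad=True)     # stands in for the learned z
losses = []
for j in range(3):
    buf[:, j] = w[j]                       # in-place write bumps buf's version counter
    losses.append((buf ** 2).sum())        # this op saves buf for backward
sum(losses).backward()                     # RuntimeError: a variable needed for
                                           # gradient computation was modified in place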
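
And here is a sketch of what I think a graph-safe rewrite of forward might look like: instead of assigning into z_tun in place, rebuild it each step with torch.cat so no tensor saved for backward ever gets modified. I have not verified that this matches the intended semantics of the loss, so corrections welcome:

def forward(self, Gen, z, x, device):
    loss_list = []
    z_tun = torch.zeros(1, z.size(1), device=device)
    for i in range(z.size(0)):
        for j in range(z.size(1)):
            # out-of-place update: build a new z_tun with z[i, j] in slot j
            z_tun = torch.cat(
                [z_tun[:, :j], z[i, j].reshape(1, 1), z_tun[:, j + 1:]],
                dim=1,
            )
            loss = torch.norm(Gen(z_tun) - x, p=2)
            loss_list.append(loss)
    total_loss = sum(loss_list) / z.numel()
    return total_loss

(I also dropped the z = z.data.clone() line here, since I think it was detaching z and killing its gradient.) Is torch.cat the right idiom for updating one coordinate at a time without breaking autograd, or is there a cleaner way?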