Hi, I am trying to train with a customized embedding loss.
However, I keep getting the error below:

RuntimeError: element 0 of tensors does not require grad and does not have a grad_fn

Below is my code for the loss function:

def disc_loss(output, num, x=None, y=None, pred=False):
    if pred:
        # pdist = nn.PairwiseDistance(p=2)
        # loss = pdist(x, y).sum()
        num = x.size(1)
        loss = torch.sqrt(torch.sum((x - y) ** 2))  # Euclidean distance between embeddings
        return loss

x and y in the function are the embeddings I get from an intermediate conv layer.
I keep getting the error during the .backward() call, but I am not sure what is wrong with my implementation. Can somebody help me with this?
Thank you!
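For context, here is a self-contained sketch of the same Euclidean loss (shapes and values are made up) showing that it backpropagates cleanly when its inputs carry gradients, so the loss math itself is not the problem:

```python
import torch

def disc_loss(output, num, x=None, y=None, pred=False):
    """Euclidean distance between two embedding tensors x and y."""
    if pred:
        num = x.size(1)  # kept from the original snippet; unused below
        loss = torch.sqrt(torch.sum((x - y) ** 2))
        return loss

# Quick check: when x and y require grad, the loss carries a grad_fn
x = torch.randn(4, 8, requires_grad=True)
y = torch.randn(4, 8, requires_grad=True)
loss = disc_loss(None, num=1, x=x, y=y, pred=True)
print(loss.requires_grad)  # True when the inputs require grad
loss.backward()            # no RuntimeError here
```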

In that case, the problem is not related to this part. Could you please share how you feed x and y to the loss function, or, if possible, the part of your code corresponding to the training loop?

@omarfoq Thank you for the help!
Here is the relevant part of my code:

output, em1 = disc(real_image.float())  # real point cloud data
_, em2 = disc(inputs2.float())          # fake/predicted point cloud data
ls_fake = disc_loss(output, num=1, x=em1, y=em2, pred=True)
ls_fake.backward()

Here, disc is the model I am using, and em1 and em2 are embeddings (features) from an intermediate layer. I basically need to compute an MSE-style loss between em1 and em2, but I get the error at ls_fake.backward().
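That RuntimeError usually means the autograd graph was cut before the loss: for example, the forward passes ran under torch.no_grad(), the embeddings were .detach()-ed, or the model's parameters have requires_grad=False. A minimal sketch of how to check (the linear layer here is a hypothetical stand-in for disc):

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for `disc`: any module producing an embedding
disc = nn.Linear(8, 8)

real_image = torch.randn(2, 8)
em1 = disc(real_image)

# Inside torch.no_grad() the result has no grad_fn, so backward() on a
# loss built from it raises exactly the error in this thread
with torch.no_grad():
    em2 = disc(real_image)

print(em1.requires_grad)  # True: graph intact
print(em2.requires_grad)  # False: the situation behind the error
```

Printing em1.requires_grad and em2.requires_grad right before calling the loss is a quick way to locate where the graph breaks.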

That’s one possibility. A better implementation may take advantage of hooks: you can keep the original model unchanged and use register_forward_hook during the forward pass to capture the embedding.
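A minimal sketch of the hook approach (the toy Sequential model and layer index are assumptions; swap in the real intermediate layer of disc):

```python
import torch
import torch.nn as nn

# Toy discriminator standing in for `disc`
disc = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))

features = {}

def save_embedding(module, inputs, output):
    # Keep the graph attached: do NOT .detach() if the loss needs grads
    features["em"] = output

# Register on the layer whose activations you want as the embedding
handle = disc[1].register_forward_hook(save_embedding)

x = torch.randn(4, 8)
out = disc(x)
em = features["em"]      # intermediate activation, still in the graph
print(em.requires_grad)  # True, since disc's parameters require grad

handle.remove()  # remove the hook when you no longer need it
```

Because the captured tensor stays in the graph, a loss built from it (e.g. the Euclidean distance above) will backpropagate into disc as expected.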