Loss.backward - RuntimeError: element 0 of tensors does not require grad and does not have a grad_fn

Hi there, I know this is a common error with many existing threads, but I've read through them and can't find a fix. I'm trying to build what should be a simple Siamese network, and I've made them successfully before, but I can't get past this error. In my experience it usually means something was converted to a NumPy array in the middle of the network, which detaches it from the autograd graph, but I can't find where that's happening here.
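For reference, here is a minimal sketch of that failure mode (a toy example, independent of the code below): a NumPy round-trip silently discards the autograd graph, and backward() then fails with exactly this error.

import torch

x = torch.randn(4, requires_grad=True)
y = (x * 2).sum()
print(y.grad_fn)  # <SumBackward0 ...>: y is attached to the graph

# converting through NumPy discards the graph
z = torch.tensor(y.detach().numpy())
print(z.grad_fn, z.requires_grad)  # None False
# z.backward() would now raise:
# RuntimeError: element 0 of tensors does not require grad and does not have a grad_fn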

import torch
import torch.nn as nn
import torch.nn.functional as F

class ShearNet(nn.Module):
    def __init__(self):
        super(ShearNet, self).__init__()
        # stack of linear layers: 100 -> 10 -> 10 -> 2 -> 1
        self.fc = nn.Sequential(
            nn.Linear(100, 10),
            nn.Linear(10, 10),
            nn.Linear(10, 2),
            nn.Linear(2, 1)
        )

    def forward(self, x):
        output = self.fc(x)
        return output

Before, I had this return the two outputs and it worked just fine; it only broke once I asked the forward to compute the cosine similarity. As far as I know, using torch.nn.functional inside a forward function shouldn't cause any problems (see the sketch after the next class).

class ShearShell(nn.Module):
    def __init__(self):
        super(ShearShell, self).__init__()
        self.shear_net = shear_net  # the ShearNet instance defined above

    def forward(self, x1, x2):
        output1 = self.shear_net(x1)
        output2 = self.shear_net(x2)
        cosbt = F.cosine_similarity(x1, x2, dim=2)
        output = torch.acos(cosbt)
        return output
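
To back up the claim above, here is a small standalone sketch (shapes are assumed, chosen to match the dim=2 call) showing that F.cosine_similarity keeps the graph intact as long as its inputs are themselves on the graph:

import torch
import torch.nn.functional as F

a = torch.randn(4, 3, 10, requires_grad=True)
b = torch.randn(4, 3, 10, requires_grad=True)

sim = F.cosine_similarity(a, b, dim=2)
print(sim.grad_fn is not None)  # True: the functional op is recorded

# clamp before acos, whose gradient blows up at exactly +/-1
loss = torch.acos(sim.clamp(-1 + 1e-6, 1 - 1e-6)).mean()
loss.backward()       # works: gradients reach a and b
print(a.grad.shape)   # torch.Size([4, 3, 10])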
iteration_number = 0
for epoch in range(10):
    for i, data in enumerate(train_data_loader, 0):
        embed0, embed1 = data
        output = net(embed0, embed1)
        #loss = lossfunc(output1, output2, trainout)
        loss = criterion(output, trainout)

        #loss.requires_grad = True
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
RuntimeError                              Traceback (most recent call last)
<ipython-input-46-bfbbfb6b84c4> in <module>()
     10         #loss.requires_grad = True
     11         optimizer.zero_grad()
---> 12         loss.backward()
     13         optimizer.step()
     14         if (i %10 == 0 ):

1 frames
/usr/local/lib/python3.6/dist-packages/torch/autograd/__init__.py in backward(tensors, grad_tensors, retain_graph, create_graph, grad_variables)
     97     Variable._execution_engine.run_backward(
     98         tensors, grad_tensors, retain_graph, create_graph,
---> 99         allow_unreachable=True)  # allow_unreachable flag
    100 
    101 

RuntimeError: element 0 of tensors does not require grad and does not have a grad_fn

What am I missing??? Thank you!
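
One way to localize this kind of failure (a debugging sketch, assuming net is the ShearShell instance and embed0/embed1 come from the loop above): print grad_fn after each stage of the forward pass; the first None marks where the graph was broken.

out1 = net.shear_net(embed0)
print(out1.grad_fn)   # not None: this output depends on trainable weights
full = net(embed0, embed1)
print(full.grad_fn)   # None: the graph is broken inside ShearShell.forward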

F.cosine_similarity is applied to x1 and x2, the raw inputs, which (unless you explicitly set requires_grad on them) don't require gradients; everything computed from them, including the loss, therefore has no grad_fn, and backward() fails with exactly this error.
Based on your code I assume you wanted to use output1 and output2 instead?
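
If that is the intent, the fix is a one-line change in forward (a sketch against the class above, keeping dim=2 from the original):

def forward(self, x1, x2):
    output1 = self.shear_net(x1)
    output2 = self.shear_net(x2)
    # compare the network outputs, not the raw inputs: the outputs carry
    # a grad_fn because they depend on trainable parameters
    cosbt = F.cosine_similarity(output1, output2, dim=2)
    return torch.acos(cosbt)

After the change, net(embed0, embed1).grad_fn should no longer be None and loss.backward() will run. Note that uncommenting loss.requires_grad = True would only silence the error: the loss is a leaf at that point, so no gradients would ever reach the model's parameters.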