Can anyone help me with this loss error? Why am I getting the following error when I use CosineEmbeddingLoss?
Example:
import torch
import torch.nn.functional as F
from torch.autograd import Variable
a = torch.rand(1,1,10,10)
b = torch.rand(1,1,10,10)
c = torch.ones(1,1,10,10)
c.requires_grad = False
l = torch.nn.CosineEmbeddingLoss()
output = l(a, b, c)
print(output)
Variable containing:
1.00000e-07 *
7.7486
[torch.FloatTensor of size 1]
Calling output.backward() then fails with:
RuntimeError: there are no graph nodes that require computing gradients
I'm using PyTorch 0.3.0.
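From what I understand, autograd in 0.3.0 only tracks Variables, so plain tensors build no graph for backward to traverse. Here is a minimal sketch of what I think the fix looks like (assuming Variable wrapping is the intended pre-0.4 usage; note my snippet above imports Variable but never uses it):

import torch
from torch.autograd import Variable

# In 0.3.0 only Variables participate in autograd; plain tensors build no graph.
a = Variable(torch.rand(1, 1, 10, 10), requires_grad=True)
b = Variable(torch.rand(1, 1, 10, 10), requires_grad=True)
c = Variable(torch.ones(1, 1, 10, 10))  # targets need no gradient
l = torch.nn.CosineEmbeddingLoss()
output = l(a, b, c)
output.backward()  # now there is a graph node that requires gradients
print(a.grad.size())  # gradients reach the inputs

Is that the right way to think about it?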
I have two follow-up questions as well.
First, why can't I compute the loss if I have:
a = torch.rand(1,2,10,10)
b = torch.rand(1,2,10,10)
c = torch.ones(1,2,10,10)
c.requires_grad = False
l = torch.nn.CosineEmbeddingLoss()
output = l(a, b, c)
print(output)
RuntimeError: inconsistent tensor size, expected src [1 x 10 x 10] and mask [1 x 2 x 10 x 10] to have the same number of elements, but got 100 and 200 elements respectively at /opt/conda/conda-bld/pytorch_1512387374934/work/torch/lib/TH/generic/THTensorMath.c:197
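My guess from the error message (an assumption on my part, not something from the docs): the loss seems to compute the cosine similarity over dim 1, which removes that dimension, and then applies the target as a mask against the reduced result, so the element counts only line up when dim 1 has size 1:

import torch
import torch.nn.functional as F

# cosine_similarity reduces dim 1, so a (1, 2, 10, 10) pair yields (1, 10, 10)
sim = F.cosine_similarity(torch.rand(1, 2, 10, 10), torch.rand(1, 2, 10, 10), dim=1)
print(sim.size())  # (1, 10, 10) -> 100 elements vs. the 200-element target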
And second, why can I do it when I have:
a = torch.rand(2,1,10,10)
b = torch.rand(2,1,10,10)
c = torch.ones(2,1,10,10)
c.requires_grad = False
l = torch.nn.CosineEmbeddingLoss()
output = l(a, b, c)
print(output)
Variable containing:
1.00000e-07 *
8.9407
[torch.FloatTensor of size 1]
In the last case I still get the
RuntimeError: there are no graph nodes that require computing gradients
error when I do the backward pass, though.
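For reference, as far as I can tell the docs describe CosineEmbeddingLoss with 2-D inputs of shape (N, D) and a 1-D target of shape (N). Here is a sketch that follows those shapes and also wraps the inputs as Variables so that backward runs (whether flattening each sample into one vector is right for my data is part of my question):

import torch
from torch.autograd import Variable

# Flatten each (C, H, W) sample into a single D-dimensional vector: (N, D).
a = Variable(torch.rand(2, 1, 10, 10).view(2, -1), requires_grad=True)
b = Variable(torch.rand(2, 1, 10, 10).view(2, -1), requires_grad=True)
c = Variable(torch.ones(2))  # one +1/-1 label per pair: shape (N,)
l = torch.nn.CosineEmbeddingLoss()
output = l(a, b, c)
output.backward()  # succeeds: the inputs are graph nodes requiring gradients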