Suppose I have a 10x10 feature map where each element is a 2D embedding, and I want to compute the cosine embedding loss between two such tensors (each of shape 2x10x10).
I tried the following approach:
import numpy as np
import torch
import torch.nn as nn
from torch.autograd import Variable

loss_func = nn.CosineEmbeddingLoss()
a = Variable(torch.randn([1, 2, 10, 10]), requires_grad=True)
b = Variable(torch.randn([1, 2, 10, 10]), requires_grad=True)
# one target value per spatial location
c = Variable(torch.from_numpy(np.ones([1, 10, 10])), requires_grad=False)
loss = loss_func(a, b, c)
loss.backward()
However, this raises an error:
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/home/ryli/.local/lib/python2.7/site-packages/torch/autograd/variable.py", line 167, in backward
    torch.autograd.backward(self, gradient, retain_graph, create_graph, retain_variables)
  File "/home/ryli/.local/lib/python2.7/site-packages/torch/autograd/__init__.py", line 99, in backward
    variables, grad_variables, retain_graph)
  File "/home/ryli/.local/lib/python2.7/site-packages/torch/autograd/function.py", line 91, in apply
    return self._forward_cls.backward(self, *args)
  File "/home/ryli/.local/lib/python2.7/site-packages/torch/autograd/function.py", line 205, in wrapper
    outputs = fn(ctx, *tensor_args)
  File "/home/ryli/.local/lib/python2.7/site-packages/torch/nn/_functions/loss.py", line 74, in backward
    _idx = _idx.view(-1, 1).expand(gw1.size())
RuntimeError: The expanded size of the tensor (10) must match the existing size (100) at non-singleton dimension 2. at /pytorch/torch/lib/TH/generic/THTensor.c:309
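For comparison, the loss does run when the inputs are plain 2D batches of shape (N, D) with a 1D target of shape (N), which is the documented usage. A minimal sketch (written with the newer tensor API, without `Variable` wrappers):

```python
import torch
import torch.nn as nn

# CosineEmbeddingLoss expects inputs of shape (N, D) and a target of shape (N).
loss_func = nn.CosineEmbeddingLoss()
x1 = torch.randn(5, 2, requires_grad=True)
x2 = torch.randn(5, 2, requires_grad=True)
y = torch.ones(5)  # +1 means each pair should be similar

loss = loss_func(x1, x2, y)  # scalar: mean over the N pairs
loss.backward()
```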
If I did everything correctly, does this mean that CosineEmbeddingLoss currently does not support multidimensional embeddings? If so, is there a better workaround than explicitly computing the loss at every location and summing the results?
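One workaround I am considering is to flatten the spatial locations into the batch dimension, treating each of the 100 positions as an independent (N, D) sample. A sketch (again with the newer tensor API; I am assuming this layout is equivalent to the per-location loss I want):

```python
import torch
import torch.nn as nn

loss_func = nn.CosineEmbeddingLoss()
a = torch.randn(1, 2, 10, 10, requires_grad=True)
b = torch.randn(1, 2, 10, 10, requires_grad=True)

# Move the embedding dimension last, then merge batch and spatial
# dimensions into one sample dimension: (1, 2, 10, 10) -> (100, 2).
a2 = a.permute(0, 2, 3, 1).reshape(-1, 2)
b2 = b.permute(0, 2, 3, 1).reshape(-1, 2)
y = torch.ones(a2.size(0))  # one +1 target per spatial location

loss = loss_func(a2, b2, y)  # mean of the per-location cosine losses
loss.backward()
```

Since `permute` and `reshape` are differentiable views, the gradient flows back to the original (1, 2, 10, 10) tensors.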
Thanks in advance!
System information:
- OS: Linux
- PyTorch version: 0.3.0.post4
- Python version: 2.7.12