RuntimeError while using CosineEmbeddingLoss

There is a bug in `CosineEmbeddingLoss` in PyTorch 0.3.0. I checked, and the problem is fixed in newer versions, but I can't upgrade right now.
The following code raises an error:

import torch.nn as nn
import torch
from torch.autograd import Variable
import torch.nn.functional as F


input1 = Variable(torch.rand(1, 3, 10, 10)).cuda()
input2 = Variable(torch.rand(1, 3, 10, 10)).cuda()
net = nn.Conv2d(3, 3, 1).cuda()
input1 = net(input1)

y = Variable(torch.ones(input1.size(0), 1, input1.size(2), input1.size(3))).cuda()

loss = torch.nn.CosineEmbeddingLoss()
output = loss(input1, input2, y)
print(output)
output.backward()

RuntimeError: The expanded size of the tensor (10) must match the existing size (100) at non-singleton dimension 2. at /opt/conda/conda-bld/pytorch_1512387374934/work/torch/lib/THC/generic/THCTensor.c:323
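If I understand the issue correctly, the 0.3.0 implementation only handles 2D `(N, D)` inputs, so one workaround might be to flatten every spatial position into its own sample before calling the loss. This is just a sketch (plain CPU tensors for brevity; the `permute`/`view` reshape is my own assumption, not something from the docs):

```python
import torch
import torch.nn as nn

input1 = torch.rand(1, 3, 10, 10)
input2 = torch.rand(1, 3, 10, 10)

# Treat each of the 10x10 spatial positions as one sample with a
# 3-dimensional feature vector: (1, 3, 10, 10) -> (1, 10, 10, 3) -> (100, 3)
a = input1.permute(0, 2, 3, 1).contiguous().view(-1, 3)
b = input2.permute(0, 2, 3, 1).contiguous().view(-1, 3)
y = torch.ones(a.size(0))  # one target per spatial position

loss = nn.CosineEmbeddingLoss()
output = loss(a, b, y)
print(output.item())
```

With `y = 1` everywhere, this averages `1 - cos(a_i, b_i)` over the 100 positions, so the result lies between 0 and 2.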

So I'm trying to compute the similarity loss another way.
Does the following loss function make sense?


input1 = Variable(torch.rand(1, 3, 10, 10)).cuda()
input2 = Variable(torch.rand(1, 3, 10, 10)).cuda()
net = nn.Conv2d(3, 3, 1).cuda()
input1 = net(input1)

# Target must match the cosine_similarity output shape, (1, 10, 10)
y_L1 = Variable(torch.ones(input1.size(0), input1.size(2), input1.size(3))).cuda()

Cosine_Similarity = F.cosine_similarity(input1, input2)
loss = nn.L1Loss()
output = loss(Cosine_Similarity, y_L1)


print(output)
output.backward()
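For what it's worth, since cosine similarity is bounded above by 1, the L1 distance to a target of all ones is just `1 - cos`, so this loss should equal the mean of `1 - cosine_similarity`, which is exactly what `CosineEmbeddingLoss` computes per pair when `y = 1`. A quick sanity check of the math (plain CPU tensors, no `Variable`, just to verify the claim):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)
input1 = torch.rand(1, 3, 10, 10)
input2 = torch.rand(1, 3, 10, 10)

# Cosine similarity along the channel dimension -> shape (1, 10, 10)
cos = F.cosine_similarity(input1, input2, dim=1)
y = torch.ones_like(cos)

l1 = nn.L1Loss()(cos, y)     # mean of |cos - 1|
direct = (1 - cos).mean()    # since cos <= 1, |cos - 1| == 1 - cos

print(l1.item(), direct.item())
```

If the two values agree, the L1-against-ones formulation really is equivalent to the `y = 1` case of `CosineEmbeddingLoss` (with zero margin), averaged over spatial positions.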