"RuntimeError: expected a Variable argument, but got FloatTensor" for loss function input

Greetings,
I've run into a problem that I'm having trouble solving. I'm developing a CNN for text classification using PyTorch, but I hit this error when I try to train my model:

loss, optimizer = createLossAndOptimizer(net, lr)

for epoch in range(num_epoch):

    running_loss = 0.0
    for i, data in enumerate(trainloader, 0):

        inputs, labels = data

        optimizer.zero_grad()

        outputs = net(inputs)
        loss = loss(outputs, labels)
        loss.backward()
        optimizer.step()

RuntimeError                              Traceback (most recent call last)
<ipython-input> in ()
     14 print(outputs)
     15 print(labels)
---> 16 loss = loss(outputs, labels)
     17 loss.backward()
     18 optimizer.step()

/opt/conda/lib/python3.5/site-packages/torch/nn/functional.py in binary_cross_entropy(input, target, weight, size_average)
    469     for each minibatch.
    470     """
--> 471     return _functions.thnn.BCELoss(size_average, weight=weight)(input, target)
    472
    473

RuntimeError: expected a Variable argument, but got FloatTensor

Both my outputs and my labels are of type [torch.FloatTensor of size 64x1], which doesn't seem to work. I tried labels = Variable(labels), but that doesn't help either.
Can someone help me?
Thank you very much.
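For reference, here is a minimal sketch of what I believe the corrected loop should look like, based on two guesses about the cause: (1) in PyTorch 0.3 and earlier, loss functions expect Variable arguments, so both inputs and labels need to be wrapped before the forward pass; (2) the line loss = loss(outputs, labels) overwrites the loss function with its own output, so even with Variables the second iteration would fail. The net, criterion, and batch below are hypothetical stand-ins for createLossAndOptimizer, net, and one item from trainloader in the original post:

```python
import torch
import torch.nn as nn
import torch.optim as optim
from torch.autograd import Variable  # needed for autograd in PyTorch <= 0.3

# Hypothetical stand-ins for the objects in the original post.
net = nn.Sequential(nn.Linear(10, 1), nn.Sigmoid())  # BCELoss needs outputs in [0, 1]
criterion = nn.BCELoss()                 # renamed: do NOT call this `loss`
optimizer = optim.SGD(net.parameters(), lr=0.01)

# Fake batch standing in for one item yielded by `trainloader`.
inputs = torch.rand(64, 10)
labels = torch.rand(64, 1).round()       # binary targets of size 64x1

# Wrap BOTH tensors in Variable before the forward pass (pre-0.4 requirement).
inputs, labels = Variable(inputs), Variable(labels)

optimizer.zero_grad()
outputs = net(inputs)
loss = criterion(outputs, labels)        # `criterion` stays callable next iteration
loss.backward()
optimizer.step()
```

In PyTorch 0.4+, Variable is a no-op wrapper and plain tensors work directly, so only the renaming of the loss function would be needed there.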