RuntimeError: result type Double can't be cast to the desired output type Long

I want to compute BCEWithLogitsLoss(). I am using the following code:

for epoch in range(max_epochs):
    total_epoch_loss = 0
    for batch_idx, (inps, tgts) in enumerate(train_loader):
        tgts = tgts.reshape((-1)).to(device)
        tgts = tgts.to(torch.long)
        out = cln(inps)
        inpOut = torch.cat((inps, out), dim=1)
        fOut = util.continuous_xor_vectorized(inpOut.T, name)
        loss = criterion(fOut, tgts)
        total_epoch_loss += loss
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

where fOut and tgts are both 32 x 1 tensors, but I get the following error message:

Traceback (most recent call last):
  File "trainClassification2ndForm.py", line 52, in <module>
    cln, lossess = train_classifier(train_loader, loss_fn)
  File "trainClassification2ndForm.py", line 35, in train_classifier
    loss = criterion(fOut, tgts)
  File "/home/ravi/anaconda3/lib/python3.7/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/home/ravi/anaconda3/lib/python3.7/site-packages/torch/nn/modules/loss.py", line 632, in forward
    reduction=self.reduction)
  File "/home/ravi/anaconda3/lib/python3.7/site-packages/torch/nn/functional.py", line 2582, in binary_cross_entropy_with_logits
    return torch.binary_cross_entropy_with_logits(input, target, weight, pos_weight, reduction_enum)
RuntimeError: result type Double can't be cast to the desired output type Long

If I change the dtype of fOut to long, it throws the following error instead:

Traceback (most recent call last):
  File "trainClassification2ndForm.py", line 51, in <module>
    cln, lossess = train_classifier(train_loader, loss_fn)
  File "trainClassification2ndForm.py", line 35, in train_classifier
    loss = criterion(fOut, tgts)
  File "/home/ravi/anaconda3/lib/python3.7/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/home/ravi/anaconda3/lib/python3.7/site-packages/torch/nn/modules/loss.py", line 632, in forward
    reduction=self.reduction)
  File "/home/ravi/anaconda3/lib/python3.7/site-packages/torch/nn/functional.py", line 2582, in binary_cross_entropy_with_logits
    return torch.binary_cross_entropy_with_logits(input, target, weight, pos_weight, reduction_enum)
RuntimeError: exp_vml_cpu not implemented for 'Long'

I am unable to find the mistake. Can anyone please help?

Thanks!

@ptrblck can you help please?

Why are you using a threshold (producing a binary output) when the loss function is intended to be used with logits?

Sorry, that line should have been commented out.

I ran it again without thresholding and it gives the same error:

RuntimeError: result type Double can't be cast to the desired output type Long

Also, for clarity: the function util.continuous_xor_vectorized(inpOut.T, name) computes a continuous XOR over the input tensors using t-norms (fuzzy logic). With the "product" t-norm this is essentially a torch.multiply operation.

Below is the function definition for reference.

def continuous_xor_vectorized(self, inp_vars, name):
    # fold the fuzzy XOR over the rows of inp_vars
    op1 = inp_vars[0, :]
    for i in range(inp_vars.shape[0] - 1):
        op2 = inp_vars[i + 1, :]
        t = self.tnorm_vectorized(1 - op1, op2, name)
        u = self.tnorm_vectorized(op1, 1 - op2, name)
        res = 1 - self.tnorm_vectorized(1 - t, 1 - u, name)
        op1 = res
    return res

def tnorm_vectorized(self, t, u, name):
    if name == "luka":
        # Lukasiewicz t-norm, clamped elementwise (Python's max() does not work on tensors)
        return torch.clamp(t + u - 1, min=0)
    elif name == "godel":
        return torch.minimum(t, u)
    elif name == "product":
        return torch.multiply(t, u)
    else:
        print("Wrong Name!")

OK, I think the issue is that the targets are supposed to be of float dtype rather than long here. Does tgts.to(torch.float) work?
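For example, a minimal sketch with a made-up 32 x 1 batch (the target values here are random, just to show the dtype requirement):

import torch
import torch.nn as nn

criterion = nn.BCEWithLogitsLoss()

logits = torch.randn(32, 1, requires_grad=True)  # raw model outputs (float)
targets = torch.randint(0, 2, (32, 1))           # 0/1 labels, dtype long

# criterion(logits, targets)             # raises a RuntimeError about casting to Long
loss = criterion(logits, targets.float())  # works once the targets are float
loss.backward()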

Hey, it worked now! Thanks.

I don’t know why, but when I tried this earlier it broke the loss propagation.