nn.BCEWithLogitsLoss() can't accept one-hot target

Hi,
My training loop looks something like this:

loss_fn = nn.BCEWithLogitsLoss()
for epoch in range(1, num_epochs + 1):
    model.train()
    for X, y in train_loader:
        X, y = X.to(device), y.to(device)
        y_hot = F.one_hot(y, num_classes)
        output = model(X)
        optimizer.zero_grad()
        loss = loss_fn(output, y_hot)
        loss.backward()
        optimizer.step()

I am transforming the target into a one-hot vector, as expected by BCEWithLogitsLoss().
But I am still getting the following error, pointing at the loss function:

RuntimeError: result type Float can't be cast to the desired output type Long

I can’t really understand the error message. What am I doing wrong?

2 Likes

I think the problem is that y_hot is of type Long while BCEWithLogitsLoss expects a tensor of type Float.
Can you try converting y_hot with y_hot = y_hot.type_as(output)?
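
For reference, here is a minimal, self-contained sketch of that fix (the shapes, num_classes, and the random tensors are made up just to make it runnable):

import torch
import torch.nn as nn
import torch.nn.functional as F

num_classes = 4
output = torch.randn(8, num_classes)          # stand-in for model(X) logits
y = torch.randint(0, num_classes, (8,))       # integer class labels

loss_fn = nn.BCEWithLogitsLoss()
y_hot = F.one_hot(y, num_classes)             # int64 (Long) tensor
# loss_fn(output, y_hot)                      # raises the RuntimeError above
y_hot = y_hot.type_as(output)                 # cast targets to the logits' float dtype
loss = loss_fn(output, y_hot)                 # works: both tensors are now Float
print(loss.item())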

17 Likes

Thanks. It worked.

But since it’s pretty common for BCELoss to take a one-hot target, wouldn’t it be better if it could also handle a Long target?

Hi,

The general policy we have in PyTorch is to avoid doing any copies that are hidden from the user (CPU/GPU transfers or dtype conversions). Even though you could argue that this makes the user’s code more verbose, it usually gives the user better control over what happens in their code.
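
For what it’s worth, here is a small sketch (the tensors and num_classes are made up) of how that explicit conversion can be folded into the target construction itself:

import torch
import torch.nn as nn
import torch.nn.functional as F

num_classes = 3
output = torch.randn(5, num_classes)       # stand-in for model(X)
y = torch.randint(0, num_classes, (5,))

loss_fn = nn.BCEWithLogitsLoss()
# the dtype conversion is written out by the user instead of happening silently inside the loss
y_hot = F.one_hot(y, num_classes).float()  # or .to(output.dtype)
loss = loss_fn(output, y_hot)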

5 Likes