Which is the correct way to compute cross entropy loss for a 5D input?

I am training a 3D U-Net whose output is 2x4x32x64x64 (batch x channels x depth x height x width) and whose target is 2x32x64x64 (batch x depth x height x width). I need to compute the cross entropy loss on this output. I have found a few ways to do it.
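
For concreteness, a dummy setup matching these shapes (the names `input`, `target`, and `num_labels` are placeholders reused in the snippets below):

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    num_labels = 4
    input = torch.randn(2, num_labels, 32, 64, 64)           # (N, C, D, H, W) logits
    target = torch.randint(0, num_labels, (2, 32, 64, 64))   # (N, D, H, W) class indices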

First way:

    # NLLLoss expects log-probabilities, so take log_softmax over the channel dim,
    # then fold the three spatial dims into two so the 4D form of NLLLoss applies
    input_4d = F.log_softmax(input, dim=1).view(2, 4, 64, -1)
    target_3d = target.view(2, 64, -1)
    loss = nn.NLLLoss(reduction='none')   # reduce=False is the deprecated spelling
    out_3d = loss(input_4d, target_3d)
    out = out_3d.view(2, 32, 64, 64)      # per-voxel loss map
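
Since `view` preserves row-major order, this reshape trick should give the same per-voxel losses as one unreduced call on the full 5D tensor (supported in recent PyTorch versions); a quick check under the dummy setup above:

    out_direct = F.cross_entropy(input, target, reduction='none')   # (N, D, H, W)
    assert torch.allclose(out, out_direct)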

Second way:

    # Move channels last and flatten everything else to (N*D*H*W, C)
    out = input.permute(0, 2, 3, 4, 1).contiguous()
    out = out.view(-1, num_labels)
    m = nn.Softmax(dim=1)
    loss = lossF.simple_dice_loss3D(m(out), target)
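
Note that this snippet actually computes a Dice loss through the user-defined `lossF.simple_dice_loss3D`, not cross entropy. If the goal were cross entropy on the same flattened layout, a sketch (again under the dummy setup above) would be:

    # Hypothetical: plain cross entropy on the flattened (N*D*H*W, C) layout
    flat_logits = input.permute(0, 2, 3, 4, 1).contiguous().view(-1, num_labels)
    flat_target = target.view(-1)    # (N*D*H*W,) class indices
    loss = F.cross_entropy(flat_logits, flat_target)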

Third way:

    batch_len, channel, x, y, z = input.size()
    total_loss = 0
    for i in range(batch_len):
        for j in range(z):
            input_z = input[i:i + 1, :, :, :, j]   # (1, C, x, y)
            target_z = target[i:i + 1, :, :, j]    # (1, x, y)

            # log_softmax is more numerically stable than log(softmax(...))
            logsoftmax_input_z = F.log_softmax(input_z, dim=1)

            # NLLLoss2d is deprecated; plain NLLLoss handles 4D input
            loss = nn.NLLLoss()(logsoftmax_input_z, target_z)
            total_loss += loss
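
Since `F.cross_entropy` is documented as `log_softmax` followed by `nll_loss`, the two steps inside this loop can be collapsed into a single call; a quick check on one slice (reusing `input_z` and `target_z` from the loop above):

    # cross_entropy == log_softmax + nll_loss, so these agree per slice
    a = nn.NLLLoss()(F.log_softmax(input_z, dim=1), target_z)
    b = F.cross_entropy(input_z, target_z)
    assert torch.allclose(a, b)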

And a fourth one:

    # Slice along the depth dimension and apply 2D cross entropy per slice
    loss = 0
    for i in range(input.size(2)):
        loss += F.cross_entropy(input[:, :, i], target[:, i])
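
Averaged over slices, this loop should match a single call on the full 5D tensor, which recent PyTorch versions support directly; a quick sanity check under the dummy setup above:

    depth = input.size(2)
    loop_loss = sum(F.cross_entropy(input[:, :, i], target[:, i])
                    for i in range(depth)) / depth
    direct_loss = F.cross_entropy(input, target)
    assert torch.allclose(loop_loss, direct_loss)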

Which way is correct?

How did you solve it?

Hi, just use the normal cross entropy loss; it supports 5D input. `nn.CrossEntropyLoss` accepts K-dimensional input (N, C, d1, d2, ...) together with an (N, d1, d2, ...) target of class indices, so the 5D case works directly:

    criterion = nn.CrossEntropyLoss().cuda()
    input = torch.randn(2, 4, 32, 64, 64).cuda()           # (N, C, D, H, W) logits
    target = torch.randint(0, 4, (2, 32, 64, 64)).cuda()   # (N, D, H, W) class indices
    loss = criterion(input, target)