I am training a 3D U-Net whose output is 2x4x32x64x64 (batch x channels x depth x height x width) and whose target is 2x32x64x64 (batch x depth x height x width). I have to compute the cross-entropy loss for this output. I found a few ways to do it.
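For concreteness, dummy tensors matching these shapes can be built like this (the random data and the 4-class labels are assumptions for illustration):

```python
import torch

# logits: (batch=2, classes=4, depth=32, height=64, width=64)
logits = torch.randn(2, 4, 32, 64, 64)
# target: integer class indices in [0, 4), shape (batch, depth, height, width)
target = torch.randint(0, 4, (2, 32, 64, 64))
print(logits.shape, target.shape)
```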
First way:
input_4d = input.view(2, 4, 32, -1)    # (N, C, D, H*W)
target_3d = target.view(2, 32, -1)     # (N, D, H*W)
loss = nn.NLLLoss(reduction='none')    # input must be log-probabilities
out_3d = loss(input_4d, target_3d)
out = out_3d.view(2, 32, 64, 64)
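As a side note, nn.NLLLoss expects log-probabilities and, in current PyTorch, accepts K-dimensional input directly, so the view() reshaping in the first way can be dropped entirely. A minimal sketch with dummy tensors (shapes from the question, data is random):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

logits = torch.randn(2, 4, 32, 64, 64)         # dummy logits (assumption)
target = torch.randint(0, 4, (2, 32, 64, 64))  # dummy labels (assumption)

# NLLLoss accepts (N, C, d1, d2, d3) input with (N, d1, d2, d3) target
# directly, so no reshaping is needed:
loss_fn = nn.NLLLoss(reduction='none')
out = loss_fn(F.log_softmax(logits, dim=1), target)
print(out.shape)  # torch.Size([2, 32, 64, 64])
```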
Second way:
out = input.permute(0, 2, 3, 4, 1).contiguous()
out = out.view(-1, num_labels)
m = nn.Softmax(dim=1)
loss = lossF.simple_dice_loss3D(m(out), target)
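Note that simple_dice_loss3D is a dice loss, not cross entropy. If the goal of the permute-and-flatten route is cross entropy, a hedged sketch looks like this (num_labels and the random tensors are assumptions):

```python
import torch
import torch.nn.functional as F

num_labels = 4
logits = torch.randn(2, num_labels, 32, 64, 64)        # dummy data (assumption)
target = torch.randint(0, num_labels, (2, 32, 64, 64))

# Move the class dimension last, then flatten everything else so each
# voxel becomes one row of class scores.
flat_logits = logits.permute(0, 2, 3, 4, 1).contiguous().view(-1, num_labels)
flat_target = target.view(-1)
# cross_entropy applies log-softmax internally, so raw logits go in.
loss = F.cross_entropy(flat_logits, flat_target)
print(loss.item())
```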
Third way:
batch_len, channel, x, y, z = input.size()
total_loss = 0
for i in range(batch_len):
    for j in range(z):
        input_z = input[i:i + 1, :, :, :, j]
        target_z = target[i:i + 1, :, :, j]
        softmax_input_z = nn.Softmax2d()(input_z)
        logsoftmax_input_z = torch.log(softmax_input_z)
        loss = nn.NLLLoss2d()(logsoftmax_input_z, target_z)
        total_loss += loss
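For what it's worth, since every 2-D slice holds the same number of voxels, the mean of the per-slice losses from this loop should equal the loss computed in one call over the whole volume. A sketch with dummy data (the random tensors are assumptions) that checks this, using F.cross_entropy in place of the explicit Softmax2d/log/NLLLoss2d chain:

```python
import torch
import torch.nn.functional as F

logits = torch.randn(2, 4, 32, 64, 64)         # dummy data (assumption)
target = torch.randint(0, 4, (2, 32, 64, 64))

batch_len, channel, x, y, z = logits.size()
total_loss = 0.0
for i in range(batch_len):
    for j in range(z):
        # One (1, C, x, y) slice and its (1, x, y) label map.
        total_loss += F.cross_entropy(logits[i:i + 1, :, :, :, j],
                                      target[i:i + 1, :, :, j])

# Every slice has x*y voxels, so the mean of the per-slice means
# equals the loss computed directly over the whole volume.
direct = F.cross_entropy(logits, target)
print(total_loss / (batch_len * z), direct)
```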
And a fourth one:
loss = 0
for i in range(input.size(2)):
    print(input[:, :, i].shape)
    loss += F.cross_entropy(input[:, :, i], target[:, i])
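The slice-by-slice loop works, but F.cross_entropy already supports K-dimensional input, so the whole volume can go in one call. Since all depth slices are the same size, the loop above sums per-slice means, so it should come out to depth times this single-call value. A sketch with dummy random tensors (assumptions):

```python
import torch
import torch.nn.functional as F

logits = torch.randn(2, 4, 32, 64, 64)         # dummy data (assumption)
target = torch.randint(0, 4, (2, 32, 64, 64))

# cross_entropy handles (N, C, d1, d2, d3) logits with (N, d1, d2, d3)
# integer targets directly -- no slicing or reshaping required.
loss = F.cross_entropy(logits, target)
print(loss.shape)  # torch.Size([])
```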
Which way is correct?