I am using a U-Net with Conv3d and ConvTranspose3d layers on volumes of shape (3, 100, 100, 100). Both my input batch and my target batch have shape (32, 3, 100, 100, 100). I am having trouble comparing the predictions against the targets: I can't work out how to reshape them so that the comparison is valid. The code I am using is as follows:
Format: [N, C, D, H, W]
Input shape: [32, 3, 100, 100, 100]
Label shape: [32, 3, 100, 100, 100]
Output shape: [32, 3, 100, 100, 100]
Optimizer: Adam
Loss and Train:
def dice_loss(pred, target, smooth=1.):
    # Sum over all three spatial dims (D, H, W) of the [N, C, D, H, W]
    # tensors, leaving one Dice score per (sample, channel) pair.
    pred = pred.contiguous()
    target = target.contiguous()
    intersection = (pred * target).sum(dim=(2, 3, 4))
    loss = 1 - ((2. * intersection + smooth) /
                (pred.sum(dim=(2, 3, 4)) + target.sum(dim=(2, 3, 4)) + smooth))
    return loss.mean()
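For reference, here is a scaled-down NumPy sketch of the same Dice computation (shapes shrunk from (32, 3, 100, 100, 100) to (2, 3, 4, 4, 4) to keep it readable; the axis arguments match the [N, C, D, H, W] layout above):

```python
import numpy as np

def dice_loss_np(pred, target, smooth=1.0):
    # Sum over the spatial axes (D, H, W) of an (N, C, D, H, W) array,
    # leaving one Dice score per (sample, channel) pair, then average.
    intersection = (pred * target).sum(axis=(2, 3, 4))
    denom = pred.sum(axis=(2, 3, 4)) + target.sum(axis=(2, 3, 4))
    return (1 - (2.0 * intersection + smooth) / (denom + smooth)).mean()

# Tiny stand-ins for the real batches: N=2, C=3, D=H=W=4.
pred = np.ones((2, 3, 4, 4, 4))
target = np.zeros((2, 3, 4, 4, 4))

print(dice_loss_np(pred, pred))    # identical masks -> loss 0.0
print(dice_loss_np(pred, target))  # disjoint masks -> loss close to 1
```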
def train():
    unet.train(True)
    torch.set_grad_enabled(True)
    running_loss = 0.0
    running_corrects = 0
    num_samples = 0
    for data in train_dataloader:
        inputs, labels = data
        inputs = inputs.cuda(cuda3)
        labels = labels.cuda(cuda3)
        optimizer.zero_grad()
        outputs = unet(inputs)
        _, preds = torch.max(outputs, 1)  # preds: [32, 100, 100, 100]
        loss = dice_loss(outputs, labels)
        loss.backward()
        optimizer.step()
        num_samples += inputs.size(0)
        running_loss += loss.item() * inputs.size(0)
        # This is the comparison I can't get right: preds is
        # [32, 100, 100, 100] but labels is [32, 3, 100, 100, 100].
        running_corrects += torch.sum(preds == labels.long().data)
    epoch_loss = running_loss / num_samples
    epochs_acc = running_corrects / num_samples
    return epoch_loss, epochs_acc
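To make the mismatch concrete: torch.max(outputs, 1) collapses the channel dimension, so preds has shape [32, 100, 100, 100] while labels is still [32, 3, 100, 100, 100]. Assuming the labels are one-hot along the channel axis (which is what my target shape suggests, though I may be wrong), taking argmax over that axis on both sides lines the shapes up. A scaled-down NumPy sketch of that idea:

```python
import numpy as np

# Scaled-down stand-ins: N=2, C=3, D=H=W=4 instead of 32/3/100.
outputs = np.random.rand(2, 3, 4, 4, 4)                     # per-class scores
labels = np.eye(3)[np.random.randint(0, 3, (2, 4, 4, 4))]   # one-hot, (2, 4, 4, 4, 3)
labels = np.moveaxis(labels, -1, 1)                         # -> (2, 3, 4, 4, 4)

preds = outputs.argmax(axis=1)      # (2, 4, 4, 4)
label_idx = labels.argmax(axis=1)   # (2, 4, 4, 4) -- now comparable
correct = (preds == label_idx).sum()
total = label_idx.size              # number of voxels, 2 * 4 * 4 * 4
print(correct, total)
```

If this is right, the per-voxel accuracy would be correct / total rather than dividing the match count by the batch size alone, as my loop currently does.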