How can I apply L2/L1 loss with 3D voxels?


I have my own data for a super-resolution task with the following dimensions:
(Batch, channels, 128, 128, 128).
e.g. predict = (2, 1, 128, 128, 128) / target = (2, 1, 128, 128, 128)

Can torch.nn.MSELoss() or torch.nn.L1Loss() be applied directly in this voxel case?
For instance,

import torch

criterion = torch.nn.MSELoss()
pred = torch.randn(2, 1, 128, 128, 128)
y = torch.randn(2, 1, 128, 128, 128)
loss = criterion(pred, y)

Or should I write custom versions of these losses for 3D voxels?

+) Could anybody let me know if there is any strategy for using an SSIM loss with 3D voxels?

You can directly apply both mentioned losses, as they only expect the model output and target to have the same shape, which is the case for your use case.
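To illustrate: nn.MSELoss (with the default reduction='mean') is purely elementwise and averages over every dimension, so the 5D voxel case needs no special handling. A quick check against the manual computation:

```python
import torch

criterion = torch.nn.MSELoss()  # elementwise, shape-agnostic
pred = torch.randn(2, 1, 16, 16, 16)
y = torch.randn(2, 1, 16, 16, 16)

loss = criterion(pred, y)
# mean over batch, channel, and all three spatial dims
manual = ((pred - y) ** 2).mean()
print(torch.allclose(loss, manual))  # True
```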
Unfortunately, I'm not sure how SSIM can be used for your use case, but if I'm not mistaken the original implementation uses 2D convs internally, so you might change them to 3D ones.
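As a starting point, here is a minimal, simplified 3D SSIM sketch. It replaces the Gaussian window of the original 2D formulation with a uniform (box) window via F.avg_pool3d, so it is an approximation, not the reference implementation; window_size and data_range are assumptions you would tune for your data:

```python
import torch
import torch.nn.functional as F

def ssim3d(pred, target, window_size=7, data_range=1.0):
    # Simplified 3D SSIM: uniform box window via avg_pool3d instead of
    # the Gaussian window used in the original 2D formulation.
    C1 = (0.01 * data_range) ** 2
    C2 = (0.03 * data_range) ** 2
    pad = window_size // 2

    # local means
    mu_x = F.avg_pool3d(pred, window_size, stride=1, padding=pad)
    mu_y = F.avg_pool3d(target, window_size, stride=1, padding=pad)
    # local variances and covariance
    sigma_x = F.avg_pool3d(pred * pred, window_size, stride=1, padding=pad) - mu_x ** 2
    sigma_y = F.avg_pool3d(target * target, window_size, stride=1, padding=pad) - mu_y ** 2
    sigma_xy = F.avg_pool3d(pred * target, window_size, stride=1, padding=pad) - mu_x * mu_y

    ssim_map = ((2 * mu_x * mu_y + C1) * (2 * sigma_xy + C2)) / (
        (mu_x ** 2 + mu_y ** 2 + C1) * (sigma_x + sigma_y + C2)
    )
    return ssim_map.mean()

# Use 1 - SSIM as the loss, so identical volumes give a loss of ~0.
x = torch.rand(2, 1, 32, 32, 32)
loss = 1 - ssim3d(x, x)  # SSIM of a volume with itself is 1
```

You could also combine it with L1/MSE, e.g. loss = l1 + alpha * (1 - ssim3d(pred, y)), which is a common pattern in super-resolution work.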


Thank you so much:) it helped a lot!