I am trying to implement a U-Net and stumbled upon a problem with MaxUnpool2d: its output is not unique, so I get a different result each time I run it.
I reproduced the error here:
import torch
import torch.nn as nn

x = torch.load("x.pt")
indices = torch.load("indices.pt")

unpool = nn.MaxUnpool2d(kernel_size=(3, 3), stride=(1, 1), padding=(0, 0))

output_1 = unpool(x, indices)
output_2 = unpool(x, indices)

print("Sum of output_1:", output_1.sum())
print("---------------------")
print("Sum of output_2:", output_2.sum())
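For anyone without the saved tensors, here is a minimal self-contained sketch (random input, my own shapes, not the original `x.pt`/`indices.pt`) showing why an overlapping pool produces the ambiguity in the first place — the indices tensor contains duplicate source positions:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
x = torch.randn(1, 1, 8, 8)

# Overlapping pooling: 3x3 kernel with stride 1, keeping the argmax indices
pool = nn.MaxPool2d(kernel_size=3, stride=1, return_indices=True)
pooled, indices = pool(x)

# Because the windows overlap, the same input position can be the max of
# several windows, so `indices` contains duplicates.
flat = indices.flatten()
print("pooled outputs:", flat.numel())
print("unique source positions:", flat.unique().numel())
```

When the second number is smaller than the first, at least two unpooled values target the same input location, which is exactly the write conflict described below in the thread.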
In the case where the maxpool2d kernels in the forward pass overlap, the unpooling can be ambiguous because two values need to be written to the same place.
This function is non-deterministic in that case. You can read more about this here.
Yes, seeding won't change anything here; the non-determinism does not come from randomness.
Unfortunately, the only option I can see is to change the pooling to avoid overlapping kernels.
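A sketch of that workaround (my own example, not from the thread): with stride equal to the kernel size the windows don't overlap, every index appears at most once, and the unpooling becomes reproducible:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
x = torch.randn(1, 1, 8, 8)

# Non-overlapping pooling: stride equals kernel size
pool = nn.MaxPool2d(kernel_size=2, stride=2, return_indices=True)
unpool = nn.MaxUnpool2d(kernel_size=2, stride=2)

pooled, indices = pool(x)

# Each input position belongs to exactly one window, so the indices are
# unique and repeated unpooling gives identical results.
out_1 = unpool(pooled, indices)
out_2 = unpool(pooled, indices)
print(torch.equal(out_1, out_2))
```

The trade-off is that this changes the receptive-field geometry of the network, so it is a design change rather than a drop-in fix.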
Hi,
Thank you for your help.
So, if I understand you correctly: the output of maxpool2d has overlapping indices, and when I unpool, these lead to different results?
Yes, it can.
When the kernels overlap, a given input value can be selected twice during pooling.
In the unpooling layer, you then want to write two different values back to the same spot.
And for speed reasons, we don't check the indices tensor for such overlaps, so you might end up with one value or the other depending on the run.
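That write conflict can be shown in isolation with a toy tensor assignment (a hypothetical illustration, not the actual unpooling kernel): when two source values target the same destination index, which one survives is unspecified.

```python
import torch

dest = torch.zeros(4)
idx = torch.tensor([1, 1, 3])              # position 1 is written twice
src = torch.tensor([10.0, 20.0, 5.0])

# Duplicate indices with a plain (non-accumulating) indexed write: one of
# 10.0 / 20.0 ends up at dest[1], but which one is not guaranteed.
dest[idx] = src
print(dest)
```

This is the same "two values, one slot" situation the unpooling hits whenever the forward pooling windows overlapped.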