I run the same input through an nn.Sequential model and through its individual child modules, but I get different results.
import torch
import torch.nn as nn
from torch.autograd import Variable

net = nn.Sequential(
    nn.Conv2d(50, 30, kernel_size=1, padding=1, bias=False),
    nn.BatchNorm2d(30, momentum=0.95),
    nn.ReLU(inplace=True),
    nn.Dropout(0.1),
    nn.Conv2d(30, 19, kernel_size=1))
x = torch.rand(1, 50, 12, 12)
y = net(Variable(x))
modules = list(net.children())
w = Variable(x)
for m in modules:
    w = m(w)
torch.equal(w.data, y.data)  # False
Any ideas where it is going wrong?
I am using Python 2.7, and the torch version is 0.2.0_3.
nn.Dropout uses its probability p to drop activations randomly, so it is expected behavior to get different results from this layer while the model is in training mode.
Have a look at this small example, which samples a new dropout mask on every forward pass:
import torch
import torch.nn as nn

drop = nn.Dropout(p=0.5)
x = torch.randn(1, 10)
for _ in range(10):
    output = drop(x)
    print(output)
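As a side note: if you want both code paths in your original snippet to produce identical outputs, you can switch the model to evaluation mode before the forward passes. This disables dropout (and makes batch norm use its running statistics instead of batch statistics). A minimal sketch, reusing the net and x defined in your question:

net.eval()  # dropout becomes a no-op, batchnorm uses running stats
y = net(Variable(x))
w = Variable(x)
for m in net.children():
    w = m(w)
torch.equal(w.data, y.data)  # now True

Alternatively, calling torch.manual_seed with the same seed right before each forward pass would make the sampled dropout masks match, but net.eval() is the usual way to get deterministic outputs for inference.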