Dropout isn't zeroing out any of my data points (but it is scaling them)

When I call dropout, it is not zeroing out any of my data points. I have tried both the layer (nn.Dropout) and functional (F.dropout) forms. I am using PyTorch 1.3.0. Here is sample code and output:

import platform; print(f"Platform {platform.platform()}")
import sys; print(f"Python {sys.version}")
import torch; print(f"PyTorch {torch.__version__}")
import torch.nn as nn
class Net(nn.Module):
    def __init__(self):
        super().__init__()
        self.do = nn.Dropout(p=.5)
        
    def forward(self, x):
        return self.do(x)
net = Net()
data = torch.Tensor([1., 2., 3., 4., 5., 6.]).view(3, 1, 2)
print(data)
net.train()
print(net(data))

Here is sample output:

With six elements and p=0.5, each element survives independently with probability 0.5, so there is a (1/2)^6 = 1/64 chance that a single forward pass zeros nothing at all. You might just have gotten lucky, and when you re-run it, you should get zeros, too.
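A minimal sketch of that argument: run dropout many times on a six-element tensor and count how often no element is zeroed. The empirical rate should sit near 1/64 ≈ 0.0156, and the surviving elements should be scaled by 1/(1 - p) = 2. (The seed and trial count here are my own choices, not from the thread.)

```python
import torch
import torch.nn as nn

torch.manual_seed(0)  # reproducibility; arbitrary choice

do = nn.Dropout(p=0.5)
do.train()  # dropout is only active in training mode

x = torch.ones(6)
trials = 10_000

# Count forward passes in which nothing was zeroed.
lucky = sum(1 for _ in range(trials) if (do(x) != 0).all())
print(lucky / trials)  # should be close to (1/2) ** 6 ≈ 0.0156

# Survivors are scaled by 1 / (1 - p) = 2, so outputs are 0.0 or 2.0.
out = do(x)
print(out.tolist())
```

This also shows why the question's title notes the scaling: in train mode PyTorch uses "inverted dropout", multiplying kept activations by 1/(1 - p) so that no rescaling is needed at eval time.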

P.S.: it’s torch.tensor with a lower-case t.
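For context on that tip: torch.tensor (lower-case) is the recommended factory function; it copies the data and infers the dtype from the input, whereas torch.Tensor is the tensor class constructor. A small sketch of the suggested change to the sample code:

```python
import torch

# Preferred: lower-case torch.tensor, which infers dtype (float32 here).
data = torch.tensor([1., 2., 3., 4., 5., 6.]).view(3, 1, 2)
print(data.dtype)  # torch.float32
```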


Thanks for taking a look! Unfortunately, I didn't get lucky. Thanks for the tip about the lower case.

Hm. Is that on CUDA only, or on the CPU as well?