Dropout isn't zeroing out any of my data points (but it is scaling them)

When I call dropout, it is not zeroing out any of my data points. I have tried both the layer and the functional forms. I am using PyTorch 1.3.0. Here is sample code and output:

import platform; print(f"Platform {platform.platform()}")
import sys; print(f"Python {sys.version}")
import torch; print(f"PyTorch {torch.__version__}")
import torch.nn as nn
class Net(nn.Module):
    def __init__(self):
        super().__init__()
        self.do = nn.Dropout(p=.5)
        
    def forward(self, x):
        return self.do(x)
net = Net()
data = torch.Tensor([1., 2., 3., 4., 5., 6.]).view(3, 1, 2)
print(data)
net.train()  # keep the module in training mode so dropout is applied
print(net(data))

Here is sample output:

So under ideal conditions there is a 1/64 chance that a p=0.5 dropout does not set anything to 0: each of the 6 elements is kept with probability 0.5, so 0.5^6 = 1/64. You might just have gotten lucky, and when you re-run it you should get zeros, too.
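
As a quick sanity check (a rough sketch, not your exact setup), you can run dropout on a much larger tensor: roughly half of the elements should be zeroed and the survivors scaled by 1/(1-p) = 2:

import torch
import torch.nn as nn

do = nn.Dropout(p=0.5)
do.train()  # dropout is only active in training mode

x = torch.ones(10000)
y = do(x)

print((y == 0).float().mean())  # fraction of zeroed elements, should be ~0.5
print(y[y != 0].unique())       # surviving elements are scaled to 2.0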

P.S.: it’s torch.tensor with a lower-case t.
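
For reference, the input from the example above could be built with the factory function like this (same values, just the lower-case constructor):

data = torch.tensor([1., 2., 3., 4., 5., 6.]).view(3, 1, 2)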


Thanks for taking a look! Unfortunately, I didn't get lucky. Thanks for the tip about the lower-case t.

Hm. Is that on CUDA only, or on the CPU as well?
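
To check on both devices, a rough sketch like this (assuming a CUDA device is available for the second print) should show whether zeros appear:

import torch
import torch.nn as nn

do = nn.Dropout(p=0.5)
do.train()

x = torch.ones(8)
print(do(x))              # CPU
if torch.cuda.is_available():
    print(do(x.cuda()))   # GPU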

Did this issue get resolved? I seem to be having it as well, on PyTorch 1.2.

Yes, I was able to get this resolved.

For me, the issue went away when I spun up a new EC2 instance and used the default PyTorch (Python 3.6) virtual environment.

So I must have messed something up when I was doing package installation, perhaps including my installation of PyTorch.

FWIW here are the current details in my env (p3.2xlarge, Ubuntu 18.04, Deep Learning AMI):

Platform Linux-4.15.0-1057-aws-x86_64-with-debian-buster-sid
Python 3.6.5 |Anaconda, Inc.| (default, Apr 29 2018, 16:14:56)
[GCC 7.2.0]
PyTorch 1.3.1