torch.nn.functional.dropout doesn't do anything

I need to apply dropout to a tensor, but it appears that torch.nn.functional.dropout doesn’t actually do anything. For example, the code:

import torch

a = torch.randn(10)
b = torch.nn.functional.dropout(a, p=0.5, training=False, inplace=False)
print(a, b)

returns identical outputs for both a and b.

Is this a problem with PyTorch, or am I doing something completely wrong?

As a side note, using inplace=True gives the error:
RuntimeError: mark_dirty only accepts input tensors, but argument 0 isn't one
(not sure if that's at all relevant)

Dropout has no effect in evaluation mode, i.e. when training=False; pass training=True if you want it applied. Also, you should wrap the tensor in an autograd.Variable when invoking such autograd functions.
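A minimal sketch of the difference: with training=True and p=0.5, roughly half the entries are zeroed and the survivors are rescaled by 1 / (1 - p) to preserve the expected value (the exact entries dropped are random).

```python
import torch

a = torch.randn(10)

# Inactive: training=False returns the input unchanged.
b_eval = torch.nn.functional.dropout(a, p=0.5, training=False)

# Active: training=True zeroes entries at random and scales
# the surviving ones by 1 / (1 - p) = 2.
b_train = torch.nn.functional.dropout(a, p=0.5, training=True)

print(b_eval)   # identical to a
print(b_train)  # some zeros, survivors doubled
```

Note that the nn.Dropout module behaves the same way: it only drops values while the parent model is in train() mode, and becomes a no-op after model.eval().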

The error message is indeed confusing; I have submitted an issue on this at