Floating Point Exception in Dropout Layer

A Dropout layer crashes with a floating point exception (the process aborts) when it is passed a LongTensor.
Code to reproduce:

>>> import torch
>>> a = torch.Tensor([1, 2, 3])
>>> d = torch.nn.Dropout()
>>> d(a)
tensor([2., 4., 6.])
>>> a = a.long()
>>> d(a.float())
tensor([0., 0., 0.])
>>> d(a)
Floating point exception

I kept getting floating point exceptions even though I believed my input tensors were always of type float. I worked around the issue by changing every call to a dropout layer from dropout(x) to dropout(x.float()), but I don't understand why this was necessary.
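As a defensive version of the workaround above, a small wrapper can cast non-floating inputs before applying dropout. This is a sketch, not PyTorch's own API: safe_dropout is a hypothetical helper name, and the guard simply checks the dtype with is_floating_point before delegating to torch.nn.functional.dropout.

```python
import torch
import torch.nn.functional as F

def safe_dropout(x, p=0.5, training=True):
    # Hypothetical guard: dropout only makes sense for floating-point
    # tensors, so cast integer tensors (e.g. LongTensor) to float first
    # instead of letting the kernel crash.
    if not x.is_floating_point():
        x = x.float()
    return F.dropout(x, p=p, training=training)

a = torch.tensor([1, 2, 3])          # int64 (LongTensor)
out = safe_dropout(a, training=False)  # no crash; returns a float tensor
```

With training=False the call is an identity on the (cast) input, which makes the dtype change easy to verify; in training mode the surviving elements are scaled by 1/(1-p) as usual.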