PyTorch's dropout layer changes the values that are not set to zero. Using the example from PyTorch's documentation:
import torch
import torch.nn as nn
m = nn.Dropout(p=0.5)
input = torch.ones(5, 5)
print(input)
tensor([[1., 1., 1., 1., 1.],
[1., 1., 1., 1., 1.],
[1., 1., 1., 1., 1.],
[1., 1., 1., 1., 1.],
[1., 1., 1., 1., 1.]])
Then I pass it through a dropout layer:
output = m(input)
print(output)
tensor([[0., 0., 2., 2., 0.],
[2., 0., 2., 0., 0.],
[0., 0., 0., 0., 2.],
[2., 2., 2., 2., 2.],
[2., 0., 0., 0., 2.]])
The values that aren’t set to zero are now 2. Why?
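For context on what I've observed so far: it looks like the surviving values are scaled by 1/(1 - p), which for p=0.5 gives a factor of 2. A small sketch of that check (the variable names here are my own, not from the docs):

```python
import torch
import torch.nn as nn

p = 0.5
m = nn.Dropout(p=p)
input = torch.ones(5, 5)

# In training mode (the default), dropout zeroes elements with
# probability p and multiplies the survivors by 1 / (1 - p).
output = m(input)
surviving = output[output != 0]
print(surviving)  # every surviving element equals 1 / (1 - p) = 2.0

# In eval mode, dropout does nothing and the input passes through unchanged.
m.eval()
print(m(input).equal(input))
```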