Hi everyone,
I have a network architecture with Dropout layers.
From the official reference for nn.Dropout:
"During training, randomly zeroes some of the elements of the input tensor with probability p using samples from a Bernoulli distribution."
I need to keep the same dropout configuration fixed for N consecutive steps. By "fix the same configuration" I mean:

- At step 0, randomly zero some elements of the input tensor with probability p, using samples from a Bernoulli distribution.
- For the following N steps, zero exactly the same elements chosen at step 0, so the network seen by the system is identical at each of these steps.
- At step N+1, restart from the beginning: draw a fresh Bernoulli mask with probability p.
- Repeat.

How can I do this?
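One way to get this behavior is a custom dropout module that samples a Bernoulli mask once, reuses it on every forward pass, and discards it only when explicitly asked. This is a minimal sketch, not an existing PyTorch API: the class name FrozenDropout and the resample_mask() method are my own inventions.

```python
import torch
import torch.nn as nn


class FrozenDropout(nn.Module):
    """Dropout whose mask stays fixed until resample_mask() is called."""

    def __init__(self, p=0.5):
        super().__init__()
        self.p = p
        self.mask = None  # cached Bernoulli mask, created lazily on first forward

    def resample_mask(self):
        # Drop the cached mask; a fresh one is drawn on the next forward pass.
        self.mask = None

    def forward(self, x):
        if not self.training:
            return x  # behave like nn.Identity in evaluation mode
        # Resample lazily if no mask is cached (or the input shape changed).
        if self.mask is None or self.mask.shape != x.shape:
            keep = 1.0 - self.p
            # Inverted-dropout scaling, as nn.Dropout does.
            self.mask = (torch.rand_like(x) < keep).float() / keep
        return x * self.mask
```

Note that the mask is tied to the input shape, so a change in batch size also triggers a resample; if you need a per-feature mask shared across the batch, sample it over `x.shape[1:]` instead and rely on broadcasting.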
[EDIT]
There are some examples of similar problems (example_1, Example_2, Example_3).
However, I still have some doubts about how to proceed.
In my network I have something like this:
cfg = {
    # dropout p driven by a single parameter (useful for hyperparameter optimization)
    'VGG16': [64, 'Dp', 64, 'M', 128, 'Dp', 128, 'M', 256, 'Dp', 256, 'Dp', 256, 'M',
              512, 'Dp', 512, 'Dp', 512, 'M', 512, 'Dp', 512, 'Dp', 512, 'A', 'Dp'],
}

class VGG(nn.Module, NetVariables, OrthoInit):
    def __init__(self, params):
        self.params = params.copy()
        nn.Module.__init__(self)
        NetVariables.__init__(self, self.params)
        OrthoInit.__init__(self)
        self.features = self._make_layers(cfg['VGG16'])
        self.classifier = nn.Linear(512, self.num_classes)
        self.weights_init()  # apply the orthogonal initial condition

    def forward(self, x):
        outs = {}
        L2 = self.features(x)
        outs['l2'] = L2
        Out = L2.view(L2.size(0), -1)  # flatten to (batch, 512); view(L2.size(0), 1) would be a shape error here
        Out = self.classifier(Out)
        outs['out'] = Out
        return outs

    def _make_layers(self, cfg):
        layers = []
        in_channels = 3
        for x in cfg:
            if x == 'M':
                layers += [nn.MaxPool2d(kernel_size=2, stride=2)]
            elif x == 'A':
                layers += [nn.AvgPool2d(kernel_size=2, stride=2)]
            elif x == 'D3':
                layers += [nn.Dropout(0.3)]
            elif x == 'D4':
                layers += [nn.Dropout(0.4)]
            elif x == 'D5':
                layers += [nn.Dropout(0.5)]
            elif x == 'Dp':
                layers += [nn.Dropout(self.params['dropout_p'])]
            else:  # an integer: conv block with x output channels
                layers += [nn.Conv2d(in_channels, x, kernel_size=3, padding=1),
                           nn.Tanh(),
                           nn.GroupNorm(int(x / self.params['group_factor']), x)]
                in_channels = x
        layers += [nn.AvgPool2d(kernel_size=1, stride=1)]
        return nn.Sequential(*layers)
How can I adapt the code so that:

1. The Dropout layers in nn.Sequential are replaced by a layer that behaves like nn.Identity when model.training == False (evaluation mode) and applies a mask when model.training == True (training mode);
2. The mask mentioned in point 1 stays constant and changes only when triggered by a given external input flag?
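Both requirements could be met by a single custom module plus one helper that acts as the external trigger. The sketch below is self-contained but untested against your full pipeline; FrozenDropout and resample_all_masks are assumed names, not existing PyTorch API. In `_make_layers`, the `'Dp'` branch would build `FrozenDropout(self.params['dropout_p'])` instead of `nn.Dropout(...)`.

```python
import torch
import torch.nn as nn


class FrozenDropout(nn.Module):
    """Identity in eval mode; in train mode applies a cached Bernoulli mask."""

    def __init__(self, p=0.5):
        super().__init__()
        self.p = p
        self.mask = None

    def resample_mask(self):
        self.mask = None  # the next forward pass draws a fresh mask

    def forward(self, x):
        if not self.training:
            return x  # requirement 1: acts as nn.Identity in evaluation mode
        if self.mask is None or self.mask.shape != x.shape:
            keep = 1.0 - self.p
            self.mask = (torch.rand_like(x) < keep).float() / keep
        return x * self.mask  # requirement 2: same mask until externally reset


def resample_all_masks(model):
    """External trigger: refresh the mask of every FrozenDropout in the model."""
    for m in model.modules():
        if isinstance(m, FrozenDropout):
            m.resample_mask()


# Training loop: keep the same masks for N steps, then resample.
N = 5
model = nn.Sequential(nn.Linear(16, 16), FrozenDropout(0.3))
model.train()
for step in range(20):
    if step % N == 0:
        resample_all_masks(model)  # the "external input flag"
    out = model(torch.randn(2, 16))
```

Walking model.modules() keeps the trigger independent of where the dropout layers sit inside nn.Sequential, so the VGG code above would not need any extra bookkeeping beyond swapping the layer class.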