How does model.eval() affect densenet dropout

Hey Community,

Let's say I want to change the dropout rate in the last layers of a DenseNet, so I pick features 10 and 11 and update their dropout as shown below. As you can see, the dropout remains 0.2 even after model.eval() is called.

Why is that the case?
Is there a cleaner way to change the dropout in the respective layers and have it turn on/off automatically when switching between training and evaluation mode?
Or do I need to run the change_dropout function each time I switch between train and eval mode?

The output is pretty long, but it shows that layer.drop_rate remains 0.2 in layers 10 and 11 after model.eval() was called.

Output:

###### BEFORE CHANGES #######
train: 0.0
train: 0.0
... cut to the last two lines ...
train: 0.0
train: 0.0

eval: 0.0
eval: 0.0
... cut to the last two lines ...
eval: 0.0
eval: 0.0

###### AFTER CHANGES #######
train: 0.0
train: 0.0
... cut to the last two lines ...
train: 0.2
train: 0.2

eval: 0.0
eval: 0.0
... cut to the last two lines ...
eval: 0.2
eval: 0.2

Code: (should run on your machine)

from torchvision import models
def print_dropouts(model, phase):
    for feature in model.features:
        if isinstance(feature, models.densenet._DenseBlock):
            for layer in feature.modules():
                if isinstance(layer, models.densenet._DenseLayer):
                    print(f"{phase}: ", layer.drop_rate)

def change_dropout(model, layers_to_update, drop_rate):
    for idx, feature in enumerate(model.features):
        if (idx in layers_to_update) and isinstance(feature, models.densenet._DenseBlock):
            for layer in feature.modules():
                if isinstance(layer, models.densenet._DenseLayer):
                    layer.drop_rate = drop_rate
if __name__ == "__main__":
    model = models.densenet121(pretrained=True)
    
    print("###### BEFORE CHANGES #######")
    model.train()
    print_dropouts(model, "train")
    model.eval()
    print_dropouts(model, "eval")

    # change happens here
    change_dropout(model, [10, 11], 0.2)

    print("###### AFTER CHANGES #######")
    model.train()
    print_dropouts(model, "train")
    model.eval()
    print_dropouts(model, "eval")

Cheers

self.drop_rate is a plain attribute of the _DenseLayer module, as seen here.
Its value won't change by calling model.train() or model.eval().
Instead, the functional dropout call in this line of code uses the self.training attribute to decide whether dropout should be applied or not.
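To make that concrete, here is a minimal sketch (the TinyLayer module below is hypothetical, not part of torchvision) that mimics how _DenseLayer calls F.dropout with training=self.training. The drop_rate attribute stays at 0.2 in both modes, yet dropout is only active after train():

```python
import torch
import torch.nn.functional as F
from torch import nn

class TinyLayer(nn.Module):
    """Hypothetical module mirroring _DenseLayer's functional dropout call."""
    def __init__(self, drop_rate=0.2):
        super().__init__()
        # Plain Python attribute: train()/eval() never touch it.
        self.drop_rate = drop_rate

    def forward(self, x):
        # training=self.training is what actually switches dropout on/off;
        # train()/eval() flip self.training, not self.drop_rate.
        return F.dropout(x, p=self.drop_rate, training=self.training)

layer = TinyLayer(0.2)
x = torch.ones(1, 1000)

layer.train()
print("train drop_rate:", layer.drop_rate)          # stays 0.2
print("zeros in train? ", bool((layer(x) == 0).any()))

layer.eval()
print("eval drop_rate: ", layer.drop_rate)          # still 0.2
print("identity in eval?", torch.equal(layer(x), x))
```

So once change_dropout has set layer.drop_rate, there is no need to rerun it when switching modes: model.eval() disables the dropout via self.training even though the printed drop_rate stays at 0.2.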