Define some layers only for the training stage

Hi,

Is there a way to define some layers only for the training stage?

I know there is a self.training flag that can be used in the forward() function. However, when the layers are defined in __init__(), self.training is always True, so I can’t use self.training in __init__().

Hi,

self.training is available everywhere in your code because it is defined in nn.Module; when you extend that class to create your own model, the attribute is inherited too. So you can set it to False in __init__:

import torch
import torch.nn as nn

class MyModel(nn.Module):
    def __init__(self):
        super(MyModel, self).__init__()
        self.training = False  # overrides the default True set by nn.Module.__init__
        self.fc = nn.Linear(1, 1)

    def forward(self, x):
        x = self.fc(x)
        if self.training:
            x = x + 1000  # only applied in training mode
        return x

model = MyModel()
x = torch.ones(1, 1)

Testing:

model.training = False
model(x)  # a small number, e.g. tensor([[1.3203]])
model.training = True
model(x)  # a large number, around one thousand

Best

Thanks for your reply, but what I want is:

class MyModel(nn.Module):
    def __init__(self, in_channels, out_channels):
        super(MyModel, self).__init__()
        if self.training:
            self.auxiliary = nn.Conv2d(in_channels, out_channels, kernel_size=3)

The auxiliary layer should only be defined for the training stage.

Can you tell me why you aren’t satisfied with using .training in the forward() method? It is really simple to implement and clearly follows convention. I cannot think of any situation that it cannot handle.
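Also note that the check in your __init__ will always be True: nn.Module.__init__ sets self.training = True before your own code runs, and flipping the flag later cannot un-register a layer that was already created. For reference, here is a minimal sketch of the conventional pattern (the layer names and channel sizes are made up for illustration): the auxiliary head is always constructed in __init__, but it only runs while .training is True, and if the extra parameters bother you at deployment time you can remove the submodule afterwards with del.

import torch
import torch.nn as nn

class MyModel(nn.Module):
    def __init__(self, in_channels=3, out_channels=8):
        super(MyModel, self).__init__()
        self.backbone = nn.Conv2d(in_channels, out_channels, kernel_size=3, padding=1)
        # Always register the auxiliary head; whether it runs is decided in forward().
        self.auxiliary = nn.Conv2d(out_channels, out_channels, kernel_size=1)

    def forward(self, x):
        x = self.backbone(x)
        if self.training:
            aux = self.auxiliary(x)  # extra output, used only for the training loss
            return x, aux
        return x

model = MyModel()
model.train()
main_out, aux_out = model(torch.randn(1, 3, 32, 32))

model.eval()
main_out = model(torch.randn(1, 3, 32, 32))
del model.auxiliary  # optional: drop the training-only parameters before saving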