Define some layers only for training stage


Is there a way to define some layers only for the training stage?

I know there is a self.training flag which can be used in the forward() function. However, when the layers are defined in __init__(), self.training is always True, so I can't use self.training in the __init__() function.

Hi, self.training is available everywhere in your code, as it is defined in nn.Module; when you extend that class to create your own model, the attribute is inherited by your model too. So you can set it to False in __init__.

import torch
import torch.nn as nn

class MyModel(nn.Module):
    def __init__(self):
        super(MyModel, self).__init__()
        self.training = False
        self.fc = nn.Linear(1, 1)

    def forward(self, x):
        x = self.fc(x)
        if self.training:
            x = x + 1000
        return x

model = MyModel()
x = torch.ones(1, 1)

model.training = False
model(x)  # a small number such as 1.3203
model.training = True
model(x)  # a large number around a thousand


Thanks for your reply. But what I want is:

class MyModel(nn.Module):
    def __init__(self):
        super(MyModel, self).__init__()
        self.auxiliary = nn.Conv2d(in_channels, out_channels, kernel_size)

The auxiliary layer should only be defined for the training stage.
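One way to get a layer that exists only in the training build is to decide at construction time. This is a minimal sketch, assuming a hypothetical `use_auxiliary` constructor flag (it is not part of nn.Module), with nn.Linear layers standing in for the real ones:

```python
import torch
import torch.nn as nn

class MyModel(nn.Module):
    # `use_auxiliary` is a hypothetical flag: it decides at construction
    # time whether the extra layer exists at all.
    def __init__(self, use_auxiliary=True):
        super(MyModel, self).__init__()
        self.fc = nn.Linear(4, 4)
        self.auxiliary = nn.Linear(4, 4) if use_auxiliary else None

    def forward(self, x):
        x = self.fc(x)
        # Use the auxiliary branch only when it exists and we are training
        if self.auxiliary is not None and self.training:
            x = x + self.auxiliary(x)
        return x

train_model = MyModel(use_auxiliary=True)    # carries the extra parameters
deploy_model = MyModel(use_auxiliary=False)  # no auxiliary parameters at all
```

Note the trade-off: the two variants have different state_dicts, so loading a training checkpoint into the deployment model needs `load_state_dict(..., strict=False)` or manual filtering.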

Can you tell me why you aren't satisfied with using .training in the forward method? It is really simple to implement and follows convention. I cannot think of a situation that it cannot handle.
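For reference, the conventional pattern looks like this: the auxiliary layer is always defined in __init__, and self.training only gates whether forward uses it. This is a minimal sketch with a hypothetical nn.Linear auxiliary branch:

```python
import torch
import torch.nn as nn

class MyModel(nn.Module):
    def __init__(self):
        super(MyModel, self).__init__()
        self.fc = nn.Linear(1, 1)
        # Always defined, but only used while training
        self.auxiliary = nn.Linear(1, 1)

    def forward(self, x):
        x = self.fc(x)
        if self.training:  # True after model.train(), False after model.eval()
            x = x + self.auxiliary(x)
        return x

model = MyModel()
x = torch.ones(1, 1)

model.train()        # auxiliary branch active
y_train = model(x)
model.eval()         # auxiliary branch skipped
y_eval = model(x)
```

Prefer model.train() / model.eval() over assigning .training directly: they set the flag recursively on all submodules, which is what layers like Dropout and BatchNorm expect.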