How to implement nn.Module.forward for both train and eval mode?

I am trying to embed the loss calculation in the model itself, rather than attaching the module at every iteration.
This requires the forward function to take an extra argument in its signature,
forward(self, x, y), so that the targets are available to compute the loss.

What is the correct way to handle this for both train and eval mode?

Is defaulting y=None the recommended way of doing so?

I wasn’t able to find any documentation on it, any reference would be appreciated!

The .train() and .eval() modes are related to certain modules like dropout, batch norm, etc., which behave differently in the two modes. However, they do not change the call signature of the model.
To embed the loss calculation in the model itself, defaulting to y=None should work. There is no single recommended way to do this. You could follow the template in this repository.
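A minimal sketch of this pattern, assuming a toy linear model and cross-entropy loss (the module name and layers are hypothetical): when targets are passed, forward returns the loss; otherwise it returns the raw predictions.

```python
import torch
import torch.nn as nn

class ModelWithLoss(nn.Module):
    """Hypothetical example: loss computation embedded via y=None."""
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(4, 2)
        self.criterion = nn.CrossEntropyLoss()

    def forward(self, x, y=None):
        logits = self.fc(x)
        if y is not None:
            # Targets provided (training/validation): return the loss
            return self.criterion(logits, y)
        # No targets (inference): return the raw predictions
        return logits

model = ModelWithLoss()
x = torch.randn(3, 4)
y = torch.tensor([0, 1, 0])
loss = model(x, y)   # scalar loss tensor
preds = model(x)     # logits of shape (3, 2)
```

Note that this dispatches on whether y was passed, independently of .train()/.eval(); you can combine it with a check on self.training if you want stricter behavior.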

Thanks Mazhar_Shaikh, I chose that way, as it was the simplest implementation.

Just to give another example, the torchvision.models.detection.GeneralizedRCNN model uses the same pattern, and complements it with a further check on model.training at line 44.
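The GeneralizedRCNN-style variant of the pattern can be sketched as follows (the module below is a hypothetical stand-in, not the torchvision code): targets default to None, but forward additionally validates against self.training and raises if targets are missing in training mode.

```python
import torch
import torch.nn as nn

class DetectorLike(nn.Module):
    """Hypothetical sketch of the check used in GeneralizedRCNN.forward."""
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(4, 2)
        self.criterion = nn.CrossEntropyLoss()

    def forward(self, x, targets=None):
        if self.training and targets is None:
            # Training mode requires targets to compute losses
            raise ValueError("In training mode, targets should be passed")
        logits = self.fc(x)
        if self.training:
            # Training: return a dict of losses
            return {"loss": self.criterion(logits, targets)}
        # Eval: return predictions
        return logits

model = DetectorLike()
model.train()
out = model(torch.randn(2, 4), torch.tensor([0, 1]))  # dict of losses
model.eval()
preds = model(torch.randn(2, 4))                      # predictions
```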
