Evaluation-mode behavior of training-specific parameters in a PyTorch model

Hi All,

I have a clarification question about PyTorch's module.eval():

The document below briefly describes the evaluation-mode behavior of the Dropout and BatchNorm operators, and then advises readers to consult the documentation of the respective modules for the evaluation-mode behavior of all other operators.

https://pytorch.org/docs/stable/generated/torch.nn.Module.html#torch.nn.Module.eval

I am looking at the EmbeddingBag module's behavior in evaluation mode, and some of this module's properties appear to be training-specific. For instance, the sparse parameter seems training-specific, since it only affects how gradients for the weight matrix are computed. What is the expected behavior for this property, and for training-specific properties in general, after module.eval()? Do we expect training-specific properties to get flipped to their evaluation-mode values? For instance, do we expect sparse to be set to False even if the user set it to True in training mode?

https://pytorch.org/docs/stable/generated/torch.nn.EmbeddingBag.html#embeddingbag
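To make the question concrete, here is a minimal check of what I see locally: eval() flips the module's training flag, but a constructor option such as sparse is left as the user set it.

```python
import torch.nn as nn

# EmbeddingBag constructed with sparse gradients enabled
bag = nn.EmbeddingBag(num_embeddings=10, embedding_dim=4, sparse=True)
bag.eval()

print(bag.training)  # False: eval() clears the training flag
print(bag.sparse)    # True: the constructor option is untouched
```

So after eval() the module still advertises sparse=True, which is what prompted this question.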

Some context for asking this question:

We are trying to use the torch-mlir repo, and for some of the PyTorch models in the repo, the MLIR legalization (torch-to-linalg) pass fails if operators/modules still carry training-only properties after model.eval(). I noticed this behavior while running the torch-to-linalg pass on the dlrm model, since the EmbeddingBag module in that model retains some training-specific parameters even after module.eval(). We can fix the respective lowering on our end to handle this situation; however, I wanted to understand the expected behavior after model.eval(). Do we expect all the training-specific parameters to be flipped to evaluation mode? Our use case is to generate code for evaluation mode only, so for now we don't care about the training-specific parameters in the model.
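For illustration, a toy stand-in for a model containing an EmbeddingBag (the structure here is made up, not the actual dlrm model) reproduces the situation: calling eval() on the top-level module recursively clears the training flag on every submodule, yet the training-specific constructor options persist.

```python
import torch.nn as nn

# Toy stand-in for a model with an embedding table (illustrative only)
model = nn.Sequential(
    nn.EmbeddingBag(num_embeddings=100, embedding_dim=8, sparse=True),
    nn.Linear(8, 1),
)
model.eval()

# eval() recursively clears the training flag on every submodule...
assert all(not m.training for m in model.modules())
# ...but a training-specific option such as `sparse` persists,
# which is the state our lowering then encounters
assert model[0].sparse is True
```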

Best,
Hanumanth