Does PyTorch discard nodes while exporting to ONNX format?

I created a simple model using PyTorch:

import torch
import torch.nn as nn

class MyModel(nn.Module):
    def __init__(self):
        super(MyModel, self).__init__()

    def forward(self, x, y):
        # MatMul branch in training mode, Add branch in eval mode
        if self.training:
            return torch.matmul(x, y)
        return torch.add(x, y)

tensor1 = torch.Tensor([4])
tensor2 = torch.Tensor([3])
model = MyModel()
torch.onnx.export(model, (tensor1, tensor2), 'train_model.onnx', input_names=["x", "y"], output_names=["z"], training=2)

The exported ONNX file contains only an Add node and does not include the MatMul node. I need both nodes in the ONNX file so that I can switch between them during training and inference.

Thanks.

Based on the docs for torch.onnx.export, it seems you have the following choices for the training argument:

training (enum, default TrainingMode.EVAL) –

  • TrainingMode.EVAL: export the model in inference mode.
  • TrainingMode.PRESERVE: export the model in inference mode if model.training is False and in training mode if model.training is True.
  • TrainingMode.TRAINING: export the model in training mode. Disables optimizations which might interfere with training.

Based on this, I don’t think the exported model can switch between training and evaluation; the export captures only one of the two branches, so you would need to pick one option.
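
If you need both behaviors, a possible workaround (not something the docs describe, just a sketch) is to export the model twice, once per mode, and load whichever graph matches the current phase at run time. The snippet below assumes a PyTorch version that still accepts the training argument and uses ONNX Runtime to run the exported files:

import torch
import onnxruntime as ort  # assumption: ONNX Runtime is used to run the exported graphs

model = MyModel()
tensor1 = torch.Tensor([4])
tensor2 = torch.Tensor([3])

# Export the training-mode trace (MatMul branch) and the eval-mode trace (Add branch)
# as two separate files; a single export only captures the branch that was taken.
torch.onnx.export(model, (tensor1, tensor2), "model_train.onnx",
                  input_names=["x", "y"], output_names=["z"],
                  training=torch.onnx.TrainingMode.TRAINING)
torch.onnx.export(model, (tensor1, tensor2), "model_eval.onnx",
                  input_names=["x", "y"], output_names=["z"],
                  training=torch.onnx.TrainingMode.EVAL)

# At run time, load whichever graph matches the current phase.
sess = ort.InferenceSession("model_eval.onnx", providers=["CPUExecutionProvider"])
(z,) = sess.run(["z"], {"x": tensor1.numpy(), "y": tensor2.numpy()})
print(z)  # Add branch: [7.]

After exporting, it is worth opening the generated files (e.g. in Netron) to confirm that the training-mode graph really contains the MatMul node for your exporter version.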

Thank you. Helped me a lot.