Convert a PyTorch model to ONNX

import torch
from transformers import BertModel, BertForMaskedLM

def convert_onnx():
    model_path = '…/bertLM_model_pytorch/pytorch_model.pt'
    dummy_input0 = torch.LongTensor(16, 128).to(torch.device("cuda"))
    dummy_input1 = torch.LongTensor(16, 128).to(torch.device("cuda"))
    dummy_input2 = torch.LongTensor(16, 128).to(torch.device("cuda"))
    dynamic_axes = {
        'dummy_input0': {0: 'batch_size', 1: 'seq_length'},
        'dummy_input1': {0: 'batch_size', 1: 'seq_length'},
        'dummy_input2': {0: 'batch_size', 1: 'seq_length'},
        'outputs': {0: 'batch_size', 1: 'seq_length'},
    }
    model = torch.load(model_path)
    model.load_state_dict(loaded_model['state_dict'])
    dummy_input = (dummy_input0, dummy_input1, dummy_input2)
    onnx_path = './pytorch_model.onnx'
    torch.onnx.export(model, dummy_input, onnx_path,
                      input_names=['dummy_input0', 'dummy_input1', 'dummy_input2'],
                      output_names=['outputs'],
                      dynamic_axes=dynamic_axes)
    print('convert retinaface to onnx finish!!!')

I am trying to convert the PyTorch model to ONNX, and the code above raises the following error:
AttributeError: 'collections.OrderedDict' object has no attribute 'load_state_dict'

I would like to know what causes this error, thank you! :grin:

This line of code loads a state_dict, not a model object:

model = torch.load(model_path)

which is why the following call fails:

model.load_state_dict(loaded_model['state_dict'])
> AttributeError: 'collections.OrderedDict' object has no attribute 'load_state_dict'

To properly load the model, create the model instance first and load the state_dict afterwards:

model = MyModel()
model.load_state_dict(torch.load(model_path))
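
Applied to your script, a minimal sketch could look like this (assuming the .pt file holds a plain state_dict for BertForMaskedLM and that you can construct a BertConfig that matches the checkpoint; adjust both to your setup):

from transformers import BertConfig, BertForMaskedLM

# Assumption: this config must describe the same architecture the checkpoint was trained with
config = BertConfig()
model = BertForMaskedLM(config)

# Load the raw state_dict from disk and copy the weights into the model instance
state_dict = torch.load(model_path, map_location='cpu')
model.load_state_dict(state_dict)
model.eval()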

Thank you for your reply, I rewrote the code following your suggestion:

model = BertForMaskedLM.from_pretrained(model_path)
model.load_state_dict(torch.load(model_path))

Now it raises a new error:
UnicodeDecodeError: 'utf-8' codec can't decode byte 0x80 in position 0: invalid start byte
What is the reason for this problem? I am a beginner in deep learning, thank you for your reply! :smiley:

I guess the error is caused by:

model = BertForMaskedLM.from_pretrained(model_path)

which most likely does not expect a path to a state_dict file, so check the docs for this method and make sure to pass the right arguments to it.
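
For reference, from_pretrained expects either a model directory saved with save_pretrained() (containing config.json and the weight file) or a model id from the Hugging Face hub, not a single .pt state_dict file. A rough sketch of the two loading paths, reusing the folder from your script as a placeholder:

from transformers import BertForMaskedLM

# Either a directory saved with save_pretrained() (config.json + weight file) ...
model = BertForMaskedLM.from_pretrained('…/bertLM_model_pytorch')  # placeholder directory

# ... or a model id from the hub, e.g. the stock BERT checkpoint:
# model = BertForMaskedLM.from_pretrained('bert-base-uncased')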

Thank you, it was exactly as you said.
What a coincidence: I see from your profile that you are from NVIDIA. My purpose in converting to ONNX is to optimize the model's inference speed via ONNX -> TensorRT :grin:
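
For that ONNX -> TensorRT step, a rough sketch of building an engine with the TensorRT Python API might look like the following (API names as in TensorRT 8.x; the optimization-profile shapes are illustrative and the input names must match those passed to torch.onnx.export):

import tensorrt as trt

# Parse the exported ONNX graph into a TensorRT network
logger = trt.Logger(trt.Logger.WARNING)
builder = trt.Builder(logger)
network = builder.create_network(1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
parser = trt.OnnxParser(network, logger)

with open('./pytorch_model.onnx', 'rb') as f:
    if not parser.parse(f.read()):
        for i in range(parser.num_errors):
            print(parser.get_error(i))
        raise RuntimeError('Failed to parse the ONNX model')

config = builder.create_builder_config()

# Dynamic axes in the ONNX graph need an optimization profile: min / opt / max shape per input
profile = builder.create_optimization_profile()
for name in ('dummy_input0', 'dummy_input1', 'dummy_input2'):
    profile.set_shape(name, (1, 1), (16, 128), (32, 512))  # example shapes only
config.add_optimization_profile(profile)

# Build and save the serialized engine
serialized_engine = builder.build_serialized_network(network, config)
with open('./pytorch_model.plan', 'wb') as f:
    f.write(serialized_engine)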