Inception v3 inference problems

Hello,
I am having a problem using my own re-trained inception_v3 for real-time inference.

First off, I have a question. When saving my model after training, is it enough to just save its state dictionary with

torch.save(model.state_dict(), 'model.pth')

or do I also have to save the whole model alongside the state dictionary?

torch.save(model, 'model.pth')
torch.save(model.state_dict(), 'model.pth')

Second,
my issue with inference is that when I try to use the re-trained inception_v3 (for which I only saved and loaded the state dictionary) with my webcam and OpenCV, it gives errors.
This is the code:

import cv2
import torch

cam = cv2.VideoCapture(0)
while True:
    ret, frame = cam.read()
    frame = torch.Tensor(frame)
    new_frame = frame.unsqueeze(0)
    incept.eval()
    out = incept(new_frame.permute(0, 3, 1, 2))  # NHWC -> NCHW for proper channel placement
    cv2.imshow('frame', frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break
cam.release()
cv2.destroyAllWindows()

I am now getting an error stating:
TypeError: Expected Ptr&lt;cv::UMat&gt; for argument 'mat'. Is there a better way to run a PyTorch model for inference over a webcam feed from OpenCV?

Yes, saving the state_dict is sufficient to restore the model for your deployment use case.
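To spell that out, here is a minimal sketch of the state_dict round trip. It uses a small nn.Linear stand-in purely for illustration; the pattern is identical for your re-trained inception_v3 — you recreate the architecture first, then load the weights into it:

```python
import torch
import torch.nn as nn

# Stand-in module for illustration; substitute your re-trained inception_v3.
model = nn.Linear(4, 2)
torch.save(model.state_dict(), 'model.pth')

# At inference time, recreate the same architecture, then load the weights:
restored = nn.Linear(4, 2)
restored.load_state_dict(torch.load('model.pth'))
restored.eval()  # switch to eval mode once, before the inference loop
```

Saving only the state_dict is the recommended approach, since torch.save(model, ...) pickles the whole class and breaks if the surrounding code changes.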

The error message seems to be raised by OpenCV and I don’t know which method is raising it.
Are you able to read frames from your cam object using OpenCV?

Hello,
so yes, I was able to read frames. I think there was an issue with my model to begin with.
Either way, I managed to fix my inference problems by following the ideas from Htut Lynn Aung on this issue: