Fastai problem/Input tensor issue

I created a model using fastai and saved it to TorchScript.
When I run the TorchScript model, it rejects my input.
My model: resnet50
The error:
Expected 4-dimensional input for 4-dimensional weight 64 3 7 7, but got 3-dimensional input of size [39, 28, 3] instead

import torch
import cv2

example = torch.rand(1, 3, 64, 64)
traced_script_module = torch.jit.trace(torch_model, example)
img = cv2.imread('image.jpg')

Can anybody help me?

Is the result of torch.from_numpy(img) the same shape as your example input? The error message suggests it is not: cv2.imread returns a height x width x channels array ([39, 28, 3]), while the traced model expects a batched channels-first tensor of shape [1, 3, 64, 64]. When using tracing, you must make sure that the example inputs you trace with are representative of the inputs you will actually pass at inference time.
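The shape fix can be sketched like this, assuming a 64x64 RGB input (matching the tracing example above); the random array stands in for the cv2.imread result, and the resize step you would do with cv2.resize is omitted here:

```python
import numpy as np
import torch

# Stand-in for cv2.imread output after resizing: an HWC uint8 image.
# In real code this would be cv2.resize(cv2.imread('image.jpg'), (64, 64)).
img = np.random.randint(0, 256, size=(64, 64, 3), dtype=np.uint8)

x = torch.from_numpy(img).float() / 255.0  # HWC float tensor in [0, 1]
x = x.permute(2, 0, 1)                     # HWC -> CHW (channels first)
x = x.unsqueeze(0)                         # CHW -> NCHW (add batch dim)

print(tuple(x.shape))  # (1, 3, 64, 64), matching the traced example input
```

Note that cv2.imread also returns channels in BGR order, so depending on how the model was trained you may additionally need to reverse the channel axis (e.g. `img[:, :, ::-1]`) and apply the same normalization used during training.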