Unable to classify image with trained torch model

I have trained a model with AutoKeras and saved it as shown below.
I loaded the model and passed it an image (64x64x3), but it seems I have to convert this 3D image into the 4D tensor the model requires for inference, and I'm not sure how.

model = clf.cnn.best_model.produce_model()
model = torch.load("final.h5")
model.eval()
img = cv2.imread("test.png")
model.forward(torch.tensor(img))


RuntimeError                              Traceback (most recent call last)
in ()
----> 1 model.forward(torch.tensor(img))

/usr/local/lib/python3.6/dist-packages/autokeras/backend/torch/model.py in forward(self, input_tensor)
     49             else:
     50                 edge_input_tensor = node_list[u]
---> 51             temp_tensor = torch_layer(edge_input_tensor)
     52             node_list[v] = temp_tensor
     53         return node_list[output_id]

/usr/local/lib/python3.6/dist-packages/torch-1.0.1.post2-py3.6-linux-x86_64.egg/torch/nn/modules/module.py in __call__(self, *input, **kwargs)
    487             result = self._slow_forward(*input, **kwargs)
    488         else:
--> 489             result = self.forward(*input, **kwargs)
    490         for hook in self._forward_hooks.values():
    491             hook_result = hook(self, input, result)

/usr/local/lib/python3.6/dist-packages/torch-1.0.1.post2-py3.6-linux-x86_64.egg/torch/nn/modules/conv.py in forward(self, input)
    318     def forward(self, input):
    319         return F.conv2d(input, self.weight, self.bias, self.stride,
--> 320                         self.padding, self.dilation, self.groups)
    321
    322

RuntimeError: Expected 4-dimensional input for 4-dimensional weight [64, 3, 3, 3], but got 3-dimensional input of size [64, 64, 3] instead
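
The error says the first `Conv2d` expects a 4-dimensional NCHW batch (the weight shape `[64, 3, 3, 3]` implies 3 input channels), while `cv2.imread` returns a 3-dimensional HWC array. A minimal sketch of the conversion, with a random tensor standing in for the image (the scaling and exact preprocessing depend on how the model was trained, so treat this as an assumption):

```python
import torch

# Stand-in for cv2.imread("test.png"): an HWC uint8 array, 64x64x3, BGR order.
img = torch.randint(0, 256, (64, 64, 3), dtype=torch.uint8)

x = img.float() / 255.0   # conv weights are float, so cast (the /255 scaling is an assumption)
x = x.permute(2, 0, 1)    # HWC -> CHW: (64, 64, 3) -> (3, 64, 64)
x = x.unsqueeze(0)        # add a batch dimension: (3, 64, 64) -> (1, 3, 64, 64)

print(x.shape)  # torch.Size([1, 3, 64, 64])

# with torch.no_grad():
#     out = model(x)  # call the model directly rather than model.forward(x)
```

Note also that OpenCV loads images in BGR order; if the model was trained on RGB input, reverse the channel axis before permuting.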