The test inputs have a different shape from what the model was trained on: I used a batch size of 32 for training, but only a single sample for testing.

This is the model config:
CNN(
(hidden_layers): Sequential(
(0): Conv2d(1, 32, kernel_size=(9, 9), stride=(1, 1), padding=(4, 4))
(1): ReLU()
(2): Conv2d(32, 32, kernel_size=(9, 9), stride=(1, 1), padding=(4, 4))
(3): ReLU()
(4): Conv2d(32, 32, kernel_size=(9, 9), stride=(1, 1), padding=(4, 4))
(5): ReLU()
(6): Conv2d(32, 32, kernel_size=(9, 9), stride=(1, 1), padding=(4, 4))
(7): ReLU()
)
(output_layer): Conv2d(32, 1, kernel_size=(9, 9), stride=(1, 1), padding=(4, 4))
)

My test input's shape is [1, 90, 90].
How can I fit the test input to evaluate the model?

PyTorch models don’t have a shape dependency on the batch dimension, so you can pass the test inputs with a batch size of 1 to the model.
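
A minimal sketch, assuming your [1, 90, 90] tensor is a single sample with 1 channel (the `nn.Sequential` below only mirrors your printed config to make the example runnable; use your own trained `CNN` instance in practice):

```python
import torch
import torch.nn as nn

# Stand-in for the trained CNN, rebuilt from the printed config for illustration.
model = nn.Sequential(
    nn.Conv2d(1, 32, kernel_size=9, padding=4), nn.ReLU(),
    nn.Conv2d(32, 32, kernel_size=9, padding=4), nn.ReLU(),
    nn.Conv2d(32, 32, kernel_size=9, padding=4), nn.ReLU(),
    nn.Conv2d(32, 32, kernel_size=9, padding=4), nn.ReLU(),
    nn.Conv2d(32, 1, kernel_size=9, padding=4),
)
model.eval()  # disable training-specific behavior for evaluation

test_input = torch.randn(1, 90, 90)  # stand-in for your real [1, 90, 90] sample

with torch.no_grad():  # no gradients needed during evaluation
    # Conv2d expects [batch, channels, height, width], so add a batch
    # dimension of 1: [1, 90, 90] -> [1, 1, 90, 90].
    output = model(test_input.unsqueeze(0))

print(output.shape)  # torch.Size([1, 1, 90, 90])
```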
