I was trying to implement NVIDIA's paper on self-driving cars and got the following error:
RuntimeError: Calculated padded input size per channel: (2 x 18). Kernel size: (3 x 3). Kernel size can’t be greater than actual input size
The input.size() to this layer is torch.Size([32, 3, 70, 320]).
I am trying to understand how an input of size [32, 3, 70, 320] is supposed to map onto the 3@66x200 input layer from the paper.
Help me out
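For context on where that (2 x 18) comes from: each unpadded conv layer with kernel k and stride s shrinks a spatial dimension n to floor((n + 2p - k)/s) + 1, and the error fires as soon as a feature map gets smaller than the next kernel. Here is a minimal pure-Python sketch tracing both input sizes through the (kernel, stride) stack from the NVIDIA PilotNet paper (three 5x5/stride-2 convs, then two 3x3/stride-1 convs); this is the paper's stack, not necessarily your actual code:

```python
def conv_out(n, k, s=1, p=0):
    """Output size of one conv dimension: floor((n + 2p - k) / s) + 1."""
    return (n + 2 * p - k) // s + 1

def trace(hw, layers):
    """Trace (height, width) through unpadded (kernel, stride) conv layers."""
    h, w = hw
    sizes = [(h, w)]
    for k, s in layers:
        if h < k or w < k:
            raise ValueError(f"kernel {k}x{k} can't be greater than input {h}x{w}")
        h, w = conv_out(h, k, s), conv_out(w, k, s)
        sizes.append((h, w))
    return sizes

# PilotNet conv stack: three 5x5/stride-2 layers, then two 3x3/stride-1 layers
pilotnet = [(5, 2)] * 3 + [(3, 1)] * 2

print(trace((66, 200), pilotnet))  # the paper's 66x200 input ends at (1, 18)
print(trace((70, 320), pilotnet))  # a 70x320 input ends at (2, 33)
```

With one more 3x3 conv (or any deeper stack), the 70-pixel-high trace bottoms out at height 2 and the next layer raises, which is the shape of error you are seeing; cropping/resizing the frames to the paper's 66x200 before the network avoids it.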
This was happening to me with a pretrained model as well. As above, I had used too small an input size for inception_v3.
After I resized the image to (229, 229) as mentioned in the docs, it still didn't work. Help!!
It's still throwing the same error: RuntimeError: Calculated padded input size per channel: (3 x 3). Kernel size: (5 x 5). Kernel size can't be greater than actual input size
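Note that the torchvision docs specify 299x299 for inception_v3 (299, not 229), which may be part of the problem. Either way, the mechanism is the same as above: a Conv2d fails once its incoming feature map is smaller than its kernel, and upsampling the input past the required size makes the layer applicable again. A minimal reproduction sketch (the layer and tensor shapes here are illustrative, not your model):

```python
import torch
from torch import nn

# A 5x5 conv, as in the reported error.
conv = nn.Conv2d(3, 8, kernel_size=5)

# A 3x3 feature map is smaller than the 5x5 kernel -> RuntimeError,
# matching "Calculated padded input size per channel: (3 x 3). Kernel size: (5 x 5)".
too_small = torch.randn(1, 3, 3, 3)
try:
    conv(too_small)
except RuntimeError as e:
    print("fails:", e)

# Upsampling past the kernel size (here to the 299x299 that the torchvision
# inception_v3 docs specify) makes the conv applicable again.
resized = torch.nn.functional.interpolate(
    too_small, size=(299, 299), mode="bilinear", align_corners=False
)
print(conv(resized).shape)  # 299 - 5 + 1 = 295 per spatial dim
```

In practice the resize belongs in the data pipeline (e.g. a transforms.Resize((299, 299)) before ToTensor), so every batch reaches the pretrained network at the size it was trained for.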