Hi all!
I tried training the Inception architecture on ImageNet using the code given here: Imagenet training code, but it throws an error. The problem is probably in the Inception model code, because main.py works as expected for the other architectures.
Here is the traceback:
Traceback (most recent call last):
  File "main.py", line 316, in <module>
    main()
  File "main.py", line 158, in main
    train(train_loader, model, criterion, optimizer, epoch)
  File "main.py", line 195, in train
    output = model(input_var)
  File "/home/abhishs8/Research/Misc_Experiments/Mystuff/.torch-env/lib/python3.6/site-packages/torch/nn/modules/module.py", line 325, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/abhishs8/Research/Misc_Experiments/Mystuff/.torch-env/lib/python3.6/site-packages/torch/nn/modules/container.py", line 67, in forward
    input = module(input)
  File "/home/abhishs8/Research/Misc_Experiments/Mystuff/.torch-env/lib/python3.6/site-packages/torch/nn/modules/module.py", line 325, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/abhishs8/Research/Misc_Experiments/Mystuff/.torch-env/lib/python3.6/site-packages/torchvision/models/inception.py", line 109, in forward
    aux = self.AuxLogits(x)
  File "/home/abhishs8/Research/Misc_Experiments/Mystuff/.torch-env/lib/python3.6/site-packages/torch/nn/modules/module.py", line 325, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/abhishs8/Research/Misc_Experiments/Mystuff/.torch-env/lib/python3.6/site-packages/torchvision/models/inception.py", line 308, in forward
    x = self.conv1(x)
  File "/home/abhishs8/Research/Misc_Experiments/Mystuff/.torch-env/lib/python3.6/site-packages/torch/nn/modules/module.py", line 325, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/abhishs8/Research/Misc_Experiments/Mystuff/.torch-env/lib/python3.6/site-packages/torchvision/models/inception.py", line 327, in forward
    x = self.conv(x)
  File "/home/abhishs8/Research/Misc_Experiments/Mystuff/.torch-env/lib/python3.6/site-packages/torch/nn/modules/module.py", line 325, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/abhishs8/Research/Misc_Experiments/Mystuff/.torch-env/lib/python3.6/site-packages/torch/nn/modules/conv.py", line 277, in forward
    self.padding, self.dilation, self.groups)
  File "/home/abhishs8/Research/Misc_Experiments/Mystuff/.torch-env/lib/python3.6/site-packages/torch/nn/functional.py", line 90, in conv2d
    return f(input, weight, bias)
RuntimeError: Given input size: (128 x 3 x 3). Calculated output size: (768 x -1 x -1). Output size is too small at /pytorch/torch/lib/THNN/generic/SpatialConvolutionMM.c:45
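To narrow down where the sizes go wrong, I traced the spatial dimensions by hand from the input image up to the AuxLogits head. This is only a sketch based on my reading of torchvision's inception.py (the layer names in the comments come from that file), but it suggests the aux classifier's 5x5 conv needs a 17x17 feature map, which you only get from 299x299 inputs; main.py's default 224x224 crops leave 12x12, and the conv output size goes negative, matching the `(768 x -1 x -1)` in the error:

```python
# Hand-trace of spatial sizes through the Inception v3 stem and aux head.
# Kernel/stride/padding values copied from torchvision's inception.py --
# treat this as my reading of the source, not an authoritative spec.

def conv_out(n, k, s=1, p=0):
    """Side length after a square conv/pool: floor((n + 2p - k) / s) + 1."""
    return (n + 2 * p - k) // s + 1

def aux_input_size(n):
    """Spatial size of the feature map reaching AuxLogits for an n x n image."""
    n = conv_out(n, 3, 2)      # Conv2d_1a_3x3, stride 2
    n = conv_out(n, 3, 1)      # Conv2d_2a_3x3
    n = conv_out(n, 3, 1, 1)   # Conv2d_2b_3x3, padding 1
    n = conv_out(n, 3, 2)      # max_pool2d 3x3, stride 2
    n = conv_out(n, 1, 1)      # Conv2d_3b_1x1
    n = conv_out(n, 3, 1)      # Conv2d_4a_3x3
    n = conv_out(n, 3, 2)      # max_pool2d 3x3, stride 2
    n = conv_out(n, 3, 2)      # Mixed_6a reduction, stride 2
    return n                   # Mixed_5b-5d and 6b-6e preserve the size

for size in (299, 224):
    n = aux_input_size(size)   # 17 for 299x299, 12 for 224x224
    n = conv_out(n, 5, 3)      # AuxLogits avg_pool2d 5x5, stride 3
    n = conv_out(n, 1, 1)      # AuxLogits conv0, 1x1
    n = conv_out(n, 5, 1)      # AuxLogits conv1, 5x5  <- the conv that crashes
    print(size, '->', n)       # 299 -> 1, but 224 -> -1 (impossible size)
```

So the fix might be as simple as changing the resize/crop size in main.py's transforms from 224 to 299 when the Inception model is selected, though I haven't confirmed that yet.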