I got the following error while doing transfer learning with inception_v3.
IndexError: index 1 is out of bounds for dimension 1 with size 1
Here’s part of my code:
for epoch in range(num_epochs):
    for i, (images, labels) in enumerate(train_loader):
        outputs = model(images)  # error on this line
The shape of my input images is [10, 1, 299, 299] (batch size 10, grayscale images).
Am I doing something wrong with the input shape, or is it a version issue? My torchvision version is 0.2.1 and my PyTorch version is 0.4.1.
inception_v3 expects a color image (3 channels).
The error message is a bit strange; it should rather be something like:
RuntimeError: Given groups=1, weight of size [32, 3, 3, 3], expected input[10, 1, 299, 299] to have 3 channels, but got 1 channels instead
Did you modify the architecture somehow?
Anyway, try repeating your single-channel image along the channel dimension, or convert it directly to RGB after loading:
image = Image.open(PATH).convert('RGB')
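If your data is already loaded as tensors, the channel-repeat approach could look like this minimal sketch (the tensor here is random dummy data standing in for the [10, 1, 299, 299] batch from the question):

```python
import torch

# Dummy grayscale batch with the shape from the question: [10, 1, 299, 299]
images = torch.randn(10, 1, 299, 299)

# Repeat the single channel 3 times along dim 1 -> [10, 3, 299, 299],
# which matches the 3-channel input inception_v3 expects
images_rgb = images.repeat(1, 3, 1, 1)
print(images_rgb.shape)  # torch.Size([10, 3, 299, 299])
```

Both approaches produce the same pixel values in all three channels; the `Image.convert('RGB')` route just does it at load time instead of on the tensor.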
Thank you! I just changed my images from grayscale to 3 channels and it worked!
The error message was really misleading…