UPDATED: Clarifying the question and providing my code.
I have a dataset of RGB images along with the corresponding alpha-map images (PIL image mode P). I would like to load the alpha maps and concatenate them with the RGB images to create a 4-channel RGBA input for the network. Currently, my code produces a 6-channel input instead, because trimap_alpha ends up with 3 channels rather than the intended 1. How can I do this properly?
I am creating and loading datasets:
data_dir = 'data'

image_datasets = {}
for label in ['input_training_lowres', 'gt_training_lowres', 'trimap_training_lowres']:
    image_datasets[label] = datasets.ImageFolder(
        os.path.join(data_dir, label),
        data_transforms[label]
    )

dataloaders = {}
for label in ['input_training_lowres', 'gt_training_lowres', 'trimap_training_lowres']:
    dataloaders[label] = torch.utils.data.DataLoader(
        image_datasets[label], batch_size=4, shuffle=True, num_workers=4
    )

dataset_sizes = {}
for label in ['input_training_lowres', 'gt_training_lowres', 'trimap_training_lowres']:
    dataset_sizes[label] = len(image_datasets[label])
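For context on where the extra channels likely come from: torchvision's ImageFolder uses a default loader that calls Image.open(path).convert("RGB"), so a palette-mode (P) trimap is expanded to 3 channels before the transforms even run. A minimal sketch of the behavior, using a synthetic P-mode image as a stand-in for the files on disk:

```python
from PIL import Image

# A small palette-mode ("P") image standing in for one trimap on disk.
trimap = Image.new("P", (8, 8))

# ImageFolder's default loader does Image.open(path).convert("RGB"),
# which expands the palette into 3 channels -- the source of the
# unexpected 6-channel input after concatenation.
as_rgb = trimap.convert("RGB")
print(len(as_rgb.getbands()))  # 3

# Converting to "L" instead keeps a single channel. In the pipeline this
# could be done by adding transforms.Grayscale(num_output_channels=1) to
# data_transforms['trimap_training_lowres'], or by passing a custom
# loader=lambda p: Image.open(p).convert("L") to ImageFolder.
as_gray = trimap.convert("L")
print(len(as_gray.getbands()))  # 1
```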
and iterating over the data:
for input_training, truth_training, trimap_training in zip(dataloaders['input_training_lowres'], dataloaders['gt_training_lowres'], dataloaders['trimap_training_lowres']):
    rgb_data, target = input_training
    gt_data, gt_target = truth_training
    trimap_data, trimap_target = trimap_training
    trimap_alpha = trimap_data
    ground_truth = gt_data
    inputs = torch.cat((rgb_data, trimap_alpha), 1)
Traceback:
Traceback (most recent call last):
File "D:/MachineLearning/Automatter/Automatter/smallEDNetwork6_val.py", line 258, in <module>
model_ft = train_model(model_ft, optimizer_ft, 1)
File "D:/MachineLearning/Automatter/Automatter/smallEDNetwork6_val.py", line 175, in train_model
predicted_truth = model(inputs)
File "C:\Users\Nic\Anaconda3\envs\pytorch\lib\site-packages\torch\nn\modules\module.py", line 325, in __call__
result = self.forward(*input, **kwargs)
File "D:/MachineLearning/Automatter/Automatter/smallEDNetwork6_val.py", line 46, in forward
out1 = F.relu(self.conv1(x))
File "C:\Users\Nic\Anaconda3\envs\pytorch\lib\site-packages\torch\nn\modules\module.py", line 325, in __call__
result = self.forward(*input, **kwargs)
File "C:\Users\Nic\Anaconda3\envs\pytorch\lib\site-packages\torch\nn\modules\conv.py", line 277, in forward
self.padding, self.dilation, self.groups)
File "C:\Users\Nic\Anaconda3\envs\pytorch\lib\site-packages\torch\nn\functional.py", line 90, in conv2d
return f(input, weight, bias)
RuntimeError: Given groups=1, weight[64, 4, 11, 11], so expected input[4, 6, 224, 224] to have 4 channels, but got 6 channels instead
Process finished with exit code 1