Something wrong in C3D net with PyTorch grayscale channel

I built a C3D network:
self.conv1 = nn.Conv3d(3, 64, kernel_size=(3, 3, 3), padding=(1, 1, 1))
self.pool1 = nn.MaxPool3d(kernel_size=(1, 2, 2), stride=(1, 2, 2))
self.conv2 = nn.Conv3d(64, 128, kernel_size=(3, 3, 3), padding=(1, 1, 1))
self.pool2 = nn.MaxPool3d(kernel_size=(2, 2, 2), stride=(2, 2, 2))
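
For context, nn.Conv3d expects a 5-dimensional input of shape (N, C, D, H, W). A quick shape check through the first block (the 16-frame clip size here is an illustrative assumption):

import torch
import torch.nn as nn

conv1 = nn.Conv3d(3, 64, kernel_size=(3, 3, 3), padding=(1, 1, 1))
pool1 = nn.MaxPool3d(kernel_size=(1, 2, 2), stride=(1, 2, 2))

x = torch.randn(1, 3, 16, 112, 112)  # (N, C, D, H, W): hypothetical 16-frame RGB clip
x = pool1(conv1(x))
print(x.shape)  # torch.Size([1, 64, 16, 56, 56]); pool1 halves H and W but keeps D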

When I feed in a color image sequence, the clip stacks normally:
import numpy as np
from skimage import io
from skimage.transform import resize

clip = np.array([resize(io.imread(frame), output_shape=(112, 200), preserve_range=True) for frame in clip])
clip = clip[:, :, 44:44+112, :]
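
Since the stacked color clip comes out as (D, H, W, C), one way to bring it into Conv3d's (N, C, D, H, W) layout is a permute plus a batch dimension; a minimal sketch (clip_t is an illustrative name):

import torch

clip_t = torch.from_numpy(clip).float()  # (D, H, W, C) after the crop above
clip_t = clip_t.permute(3, 0, 1, 2)      # reorder to (C, D, H, W)
clip_t = clip_t.unsqueeze(0)             # add batch dim -> (1, C, D, H, W)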
But when I feed in a grayscale image sequence, it raises:
IndexError: too many indices for array
So I changed it to:
clip = clip[:, :, 44:44+112]
and then the stacking works.

Meanwhile, I changed the network:
self.conv1 = nn.Conv3d(1, 64, kernel_size=(3, 3, 3), padding=(1, 1, 1))
But it still raises an error:
expected stride to be a single integer value or a list of 2 values to match the convolution dimensions, but got stride=[1, 1, 1]
Should I expand the channel dimension, and if so, how do I do it?

Your last error message might be misleading, caused by a missing batch dimension in your input.
It has been fixed in master.
Could you unsqueeze your input at dim 0 and try again?
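
For example, a minimal sketch, assuming your grayscale clip tensor is already (C, D, H, W) with C = 1 (the 16-frame shape is hypothetical):

import torch

clip_t = torch.randn(1, 16, 112, 112)  # (C, D, H, W): hypothetical grayscale clip
clip_t = clip_t.unsqueeze(0)           # insert batch dim at dim 0 -> (N, C, D, H, W)
print(clip_t.shape)                    # torch.Size([1, 1, 16, 112, 112])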


Yes, I printed the input shape and found you're right.
unsqueeze can add the missing dim: (1, …).
Thanks!

I have another question.
I want to build a network combining Siamese and C3D. Can you give me some advice about the loss for the two outputs?

Here you can find an implementation of a Siamese network.
I'm not sure what C3D means.

It's Conv3d.
The demo uses Conv2d.
I would like to know if the loss function can be used in the same way.

The model in the repo flattens the output and applies the criterion to it, so it should work if you do the same with Conv3d.
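
Not the repo's exact model, but a minimal sketch of how the same contrastive-loss pattern could carry over to Conv3d. The SiameseC3D class, its layer sizes, the margin value, and the 16-frame clip shape are all illustrative assumptions:

import torch
import torch.nn as nn
import torch.nn.functional as F

class SiameseC3D(nn.Module):
    # Hypothetical truncated C3D embedding net, not the repo's architecture.
    def __init__(self):
        super().__init__()
        self.conv1 = nn.Conv3d(1, 64, kernel_size=(3, 3, 3), padding=(1, 1, 1))
        self.pool1 = nn.MaxPool3d(kernel_size=(1, 2, 2), stride=(1, 2, 2))
        self.gap = nn.AdaptiveAvgPool3d(1)  # global pooling keeps the sketch small
        self.fc = nn.Linear(64, 128)

    def forward_once(self, x):
        x = self.pool1(F.relu(self.conv1(x)))
        x = self.gap(x).flatten(1)  # flatten to (N, 64) before the head/criterion
        return self.fc(x)

    def forward(self, x1, x2):
        return self.forward_once(x1), self.forward_once(x2)

def contrastive_loss(out1, out2, label, margin=2.0):
    # label: 1 for dissimilar pairs, 0 for similar pairs
    dist = F.pairwise_distance(out1, out2)
    return torch.mean((1 - label) * dist.pow(2) +
                      label * F.relu(margin - dist).pow(2))

# usage with hypothetical 16-frame grayscale clip pairs
net = SiameseC3D()
a = torch.randn(4, 1, 16, 112, 112)
b = torch.randn(4, 1, 16, 112, 112)
label = torch.randint(0, 2, (4,)).float()
loss = contrastive_loss(*net(a, b), label)

The key point, matching the advice above, is to flatten each embedding to (N, features) before the criterion; the contrastive loss itself doesn't care whether the backbone uses Conv2d or Conv3d.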