Basic CNN question

Hi!
I'd like to use a CNN for reinforcement learning, and I have a question about the size of the first dense layer after the feature maps.

I have a setup that seems to work for now: the input is a (1, 1, 50, 50) tensor, i.e. one image with a single channel of size 50x50. It goes through a conv2d layer and a non-linear activation. Afterwards, it is flattened and sent to the dense layer to be used for action selection.
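For reference, the shapes are roughly like this (just a sketch, the filter count, kernel size and number of actions below are placeholders, not my actual values):

import torch
import torch.nn as nn

conv = nn.Conv2d(1, 8, kernel_size=5)    # 1 input channel -> 8 feature maps (placeholder sizes)
dense = nn.Linear(8 * 46 * 46, 4)        # 46 = 50 - 5 + 1 after the conv; 4 actions as an example
state = torch.randn(1, 1, 50, 50)        # a single 50x50 single-channel image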
I'd like to send multiple images so I can train the network offline (i.e. after finishing an episode, I want to pack all the states, all the images, and send them through at once to get the state estimates). When I do so, I always get an error saying that the first dense layer after the filters received input of the wrong dimensionality.
Does that mean I have to keep sending them one by one?

Thanks!

You should be able to make batches of images of shape (n_images, 1, 50, 50) and train with those.

My guess is that the code you use to flatten the CNN output is incorrect, but just happens to work when sending images one by one: it probably turns the 4D output of the CNN into a 1D tensor, absorbing the batch dimension.
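Concretely, something like this would reproduce the problem (made-up sizes, assuming 8 feature maps of 46x46):

import torch

cnn_output = torch.randn(16, 8, 46, 46)  # a batch of 16 images after the conv + activation
print(cnn_output.view(-1).shape)         # torch.Size([270848]) -- the batch dimension is absorbed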

This should work…

dense_input = cnn_output.view(cnn_output.size(0), -1)
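For example, in a module's forward it could look something like this (QNet and the layer sizes below are made up for illustration):

import torch
import torch.nn as nn

class QNet(nn.Module):
    def __init__(self, n_actions=4):
        super().__init__()
        self.conv = nn.Conv2d(1, 8, kernel_size=5)      # placeholder filter count / kernel size
        self.dense = nn.Linear(8 * 46 * 46, n_actions)  # 46 = 50 - 5 + 1 after the conv

    def forward(self, x):                    # x: (batch, 1, 50, 50)
        out = torch.relu(self.conv(x))       # (batch, 8, 46, 46)
        out = out.view(out.size(0), -1)      # (batch, 8*46*46), batch dimension preserved
        return self.dense(out)               # (batch, n_actions)

net = QNet()
print(net(torch.randn(1, 1, 50, 50)).shape)   # torch.Size([1, 4])  -- a single image still works
print(net(torch.randn(16, 1, 50, 50)).shape)  # torch.Size([16, 4]) -- a whole episode at once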

Yep, indeed, that was exactly it!
Thanks!