DQN with bitmap as input

I am trying to implement the DQN algorithm from the example; however, the environment I use has bitmaps as states. When I pass my state into the training loop, I get an error in:
return policy_net(state).max(1)[1].view(1, 1), which chooses the next action when it isn't an exploration (random) action. The error is:
RuntimeError: Expected 4-dimensional input for 4-dimensional weight [16, 3, 5, 5], but got 2-dimensional input of size [300, 300] instead.
My input is indeed [300, 300], but the model expects a 4-dimensional input. I believe this is because the get_screen function from the cart game produces a tensor in BCHW format after transposing and unsqueezing the rgb_array from the screen. Since I don't have RGB values but a bitmap, I can't build the BCHW tensor the same way. So I wanted to ask how to best go about implementing a DQN algorithm here. Is there perhaps another "skeleton" algorithm that I can use?

It looks like you would like to pass a grayscale image into the model.
Based on the error message it also sounds like the first layer is an nn.Conv2d layer with in_channels=3 and out_channels=16.
If that's the case, you should unsqueeze dim0 twice for your input (conv layers expect an input of [batch_size, channels, height, width]), and change the number of input channels of the first conv layer to 1.

input = input[None, None, :, :]  # [300, 300] -> [1, 1, 300, 300]

Let me know if that helps.
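Here is a minimal sketch of the idea, assuming the first layer's hyperparameters from the error message (weight shape [16, 3, 5, 5], i.e. out_channels=16, kernel_size=5) with in_channels changed to 1 for the single-channel bitmap:

```python
import torch
import torch.nn as nn

# Hypothetical first layer: in_channels changed from 3 to 1 so it accepts
# a single-channel bitmap; out_channels=16 and kernel_size=5 are assumed
# from the weight shape [16, 3, 5, 5] in the error message.
conv = nn.Conv2d(in_channels=1, out_channels=16, kernel_size=5)

state = torch.rand(300, 300)     # stand-in for the 2D bitmap state
state = state[None, None, :, :]  # add batch and channel dims -> [1, 1, 300, 300]

out = conv(state)
print(out.shape)  # torch.Size([1, 16, 296, 296])
```

The rest of the DQN example should then work unchanged, as long as every state you feed into policy_net carries these two extra dimensions.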