CNN in deep Q-learning

I have a network like this:

c, h, w = input_dim

self.online = nn.Sequential(
    nn.Conv2d(in_channels=c, out_channels=32, kernel_size=8, stride=4),
    nn.ReLU(),
    nn.Conv2d(in_channels=32, out_channels=64, kernel_size=4, stride=2),
    nn.ReLU(),
    nn.Conv2d(in_channels=64, out_channels=64, kernel_size=3, stride=1),
    nn.ReLU(),
    nn.Flatten(),
    nn.Linear(352764, 512),
    nn.ReLU(),
    nn.Linear(512, output_dim)
)

My input shape is (3, 250, 315), and my actual inputs are images, which I convert with transforms.ToTensor before passing them to the network.

But when I run it, I get this error:

RuntimeError: Given groups=1, weight of size [32, 315, 8, 8], expected input[8, 3, 250, 315] to have 315 channels, but got 3 channels instead.


Based on the error message, the first nn.Conv2d layer raises the error, since c=315 is being used. Since your images have 3 channels, set in_channels=3 and it should work. If you still run into this issue, your images might be in channels-last (HWC) memory layout; in that case you would need to permute them to the expected channels-first (CHW) layout.
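Note also that the in_features of the first nn.Linear must match the flattened conv output, which depends on the input size. A minimal sketch of that calculation (the helper name conv_out is mine), using the standard formula floor((n - kernel) / stride) + 1 for unpadded, undilated convolutions:

```python
def conv_out(size, kernel, stride):
    """Output length of one spatial dimension after an unpadded Conv2d."""
    return (size - kernel) // stride + 1

h, w = 250, 315
# Apply the three conv layers' (kernel_size, stride) pairs in order.
for kernel, stride in [(8, 4), (4, 2), (3, 1)]:
    h = conv_out(h, kernel, stride)
    w = conv_out(w, kernel, stride)

flat_features = 64 * h * w  # 64 channels after the last conv
print(flat_features)        # -> 60480; use this as nn.Linear's in_features
```

For a (3, 250, 315) input this gives 60480 rather than 352764, so the linear layer would need updating as well once the channel issue is fixed. An alternative is to run a dummy tensor through the conv stack once and read the flattened size off its shape.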

Thank you, the problem was indeed c. I think the root cause was that torchvision.transforms.ToTensor already reorders the input to CHW, so I did not need to permute it myself, but I did.
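For reference, a small sketch (using NumPy arrays as stand-ins for the image tensors) of why a second HWC-to-CHW transpose on top of ToTensor's own conversion puts the width dimension where the channel count is expected:

```python
import numpy as np

img = np.zeros((250, 315, 3))     # HWC layout, as PIL/NumPy store images

chw = img.transpose(2, 0, 1)      # the reordering ToTensor already performs
print(chw.shape)                  # (3, 250, 315) -- correct CHW

double = chw.transpose(2, 0, 1)   # transposing again, as in the bug
print(double.shape)               # (315, 3, 250) -- 315 lands in the channel slot
```

So applying the conversion once, via ToTensor alone, is enough.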