CNN training on multiple images

Hi, I am quite new to PyTorch.
I am trying to set up a simple Convolutional Neural Network (CNN) but am having trouble training on multiple images. The images are ones I create myself and have size (50, 50); I can create as many as I want.

Currently I have been training on a single image by transforming to

X = torch.tensor(image, dtype=torch.float)
X = X.view(1,1,image.shape[0],image.shape[1])

before training. So the input is of size (1,1,50,50).
My question is: how do I expand my code to train on multiple images?

If possible, could you give a general example of what I need to do? I have searched the forums but to no avail.


What do you mean by training on “different images”? If you mean more images in a single batch, you can concatenate/stack multiple images together. PyTorch’s 2d conv expects an input of shape (B, C, H, W), where B is the batch size. If each image is of shape (1, 1, H, W), you can do torch.cat(IMAGES); if each is of shape (H, W), you can do torch.stack(IMAGES).unsqueeze_(1).
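A minimal sketch of both approaches, assuming eight hypothetical random 50×50 images in place of your generated ones:

```python
import torch

# Hypothetical stand-ins for your generated images: 2D tensors of shape (50, 50).
images = [torch.rand(50, 50) for _ in range(8)]

# Case 1: each image is (H, W). Stack along a new batch dimension,
# then insert the channel dimension: (8, 50, 50) -> (8, 1, 50, 50).
X = torch.stack(images).unsqueeze(1)
print(X.shape)  # torch.Size([8, 1, 50, 50])

# Case 2: each image is already (1, 1, H, W), as in your single-image code.
# Concatenate along the existing batch dimension (dim 0).
images_4d = [img.view(1, 1, 50, 50) for img in images]
X2 = torch.cat(images_4d, dim=0)
print(X2.shape)  # torch.Size([8, 1, 50, 50])
```

Either way the result has the (B, C, H, W) layout that nn.Conv2d expects, so you can pass the whole batch through your network in one forward call.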