Dimension mismatch between training and testing

Hi,

I have created an image data generator for my project, and I use it to train my model. It looks something like this:

import torch
from torch.utils.data import DataLoader

# ImageDataGenerator and Processing are my own modules
import ImageDataGenerator
import Processing

dim = 256
generator = ImageDataGenerator.Generator(ChannelFirst=True)
generator.setFlip(True)
generator.setRotate90x(True)
generator.LoadInputs("./myInputs/")
generator.LoadOutputs("./myOutputs/")
generator.setInputsDimensions(dim, dim)
generator.setOutputsDimensions(dim, dim)
BatchSize = 8
gen = generator.PyTorch(BatchSize, InputsNormalizer=Processing.Normalize, OutputsNormalizer=Processing.NormalizeBasic)
dl = DataLoader(gen, batch_size=BatchSize, shuffle=True, num_workers=4)

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
myModel = myModel.to(device)
loss_fn = torch.nn.MSELoss(reduction='sum')  # sum over all elements (size_average=False is deprecated)
learning_rate = 1e-4

for epoch in range(13):
    lossvalue = 0.0
    for b, batch in enumerate(dl):
        X, Y = batch['input'].to(device), batch['output'].to(device)
        Ypred = myModel(X)

        loss = loss_fn(Ypred, Y)  # compute the loss
        lossvalue += loss.item()

        myModel.zero_grad()

        loss.backward()

        # Update the weights using gradient descent.
        with torch.no_grad():
            for param in myModel.parameters():  # was unet.parameters(); the model here is named myModel
                param -= learning_rate * param.grad
    print("Epoch %d - loss = %f" % (epoch, lossvalue))
print("Training Done!")

So once the image generator is created, the rest is a classical training loop.
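As a side note, the manual no_grad update above is just plain SGD, so that part of the loop could equivalently use torch.optim (a minimal sketch, assuming the same myModel, loss_fn, learning_rate and dl as above):

optimizer = torch.optim.SGD(myModel.parameters(), lr=learning_rate)

for epoch in range(13):
    lossvalue = 0.0
    for b, batch in enumerate(dl):
        X, Y = batch['input'].to(device), batch['output'].to(device)
        loss = loss_fn(myModel(X), Y)
        lossvalue += loss.item()

        optimizer.zero_grad()  # replaces myModel.zero_grad()
        loss.backward()
        optimizer.step()       # replaces the manual parameter update
    print("Epoch %d - loss = %f" % (epoch, lossvalue))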

Then I create the exact same generator, just without the data augmentation, so it produces the same type of images; I want to use it to test my model's predictions after training.

generator = ImageDataGenerator.Generator(ChannelFirst=True)
generator.LoadInputs("./myInputs/")
generator.LoadOutputs("./myOutputs/")
generator.setInputsDimensions(dim, dim)
generator.setOutputsDimensions(dim, dim)
gen = generator.PyTorch(BatchSize, InputsNormalizer=Processing.Normalize, OutputsNormalizer=Processing.NormalizeBasic)
dl = DataLoader(gen, batch_size=BatchSize, shuffle=False, num_workers=4)

myModel.eval()
with torch.no_grad():  # no gradients needed at test time
    for b, batch in enumerate(dl):
        X, Y = batch['input'].to(device), batch['output'].to(device)
        Z = myModel(X)

But I get the following error:

Traceback (most recent call last):
  File "Cluster.py", line 125, in <module>
    Z = myModel(X)
  ....
RuntimeError: Expected 4-dimensional input for 4-dimensional weight [8, 3, 3, 3], but got input of size [8, 2, 3, 256, 256] instead

So I use the same batch size and image dimensions, but they no longer work with my model. And the required dimensions are not even the ones I was using before.
What am I doing wrong?
Thanks in advance.

Could you print the shape of X before passing it to the model?
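The weight [8, 3, 3, 3] in the error is [out_channels, in_channels, kH, kW], so the first conv layer expects a 4-D input of shape [N, 3, H, W]. A quick check, reusing your test loop:

for b, batch in enumerate(dl):
    print(batch['input'].shape)  # should be torch.Size([8, 3, 256, 256]); 5-D means an extra dimension crept in upstream
    break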

My bad. I don't know why, but every time I instantiate a new ImageDataGenerator, the inner variables remain in memory. So each time I create a new ImageDataGenerator, the new list of images is appended to the previous one.
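In case someone hits the same thing: this looks like the classic Python pitfall of a mutable attribute defined at class level, which is shared by every instance. A minimal runnable sketch of the pattern (the Generator below is hypothetical, not the actual ImageDataGenerator code):

class Generator:
    inputs = []  # class-level list: shared by ALL instances

    def LoadInputs(self, path):
        # stand-in for actually reading image files from path
        self.inputs.extend([path + str(i) for i in range(3)])

g1 = Generator()
g1.LoadInputs("./myInputs/")
print(len(g1.inputs))  # 3

g2 = Generator()
g2.LoadInputs("./myInputs/")
print(len(g2.inputs))  # 6, not 3: g2 sees g1's images too
print(len(g1.inputs))  # 6 as well, it is the same list

The fix is to create the list in __init__ (self.inputs = []) so each instance gets its own.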

So this post can be closed.