RuntimeError: Expected 4-dimensional input for 4-dimensional weight [500, 1, 5, 5], but got 3-dimensional input of size [1, 250, 250] instead

I've seen this kind of topic a lot on the forum, but in those cases it is generally the batch dimension that is missing from the tensors. In my case the channel dimension is missing, and I don't know how to add it. Here is my code:

import os
import os.path as osp

import scipy.io
import torch
import torch.nn as nn
import torch.optim as optim
from torch.utils.data import Dataset

class MyMatDataset(Dataset):

    def __init__(self, distFunc_paths, flowfield_paths):
        self.distFunc_paths = distFunc_paths
        self.flowfield_paths = flowfield_paths

    def __getitem__(self, index):
        x = scipy.io.loadmat(self.distFunc_paths[index])
        x = torch.from_numpy(x['MD'])
        y = scipy.io.loadmat(self.flowfield_paths[index])
        y = torch.from_numpy(y['z1'])
        return x, y

    def __len__(self):
        return len(self.distFunc_paths)

root = "./Data/"
distFunc_paths = []
flowfield_paths = []
for r, d, f in os.walk(root):
    for files in f:
        if "_distFunc.mat" in files:
            distFunc_paths.append(osp.join(r, files))
        if "_flowfield.mat" in files:
            flowfield_paths.append(osp.join(r, files))

dataset = MyMatDataset(distFunc_paths=distFunc_paths, flowfield_paths=flowfield_paths)

train_loader = torch.utils.data.DataLoader(
    dataset, batch_size=1, shuffle=True
)

class CFD_CNN(nn.Module):
    def __init__(self, out_ch=500):
        super(CFD_CNN, self).__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, out_ch, kernel_size=5, stride=5, padding=0),
            nn.ReLU(True),
            nn.Conv2d(out_ch, out_ch, kernel_size=5, stride=5, padding=0),
            nn.ReLU(True),
            nn.Conv2d(out_ch, out_ch, kernel_size=2, stride=2, padding=0),
            nn.ReLU(True),
        )

        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(out_ch, out_ch, kernel_size=2, stride=2, padding=0),
            nn.ReLU(True),
            nn.ConvTranspose2d(out_ch, out_ch, kernel_size=5, stride=5, padding=0),
            nn.ReLU(True),
            nn.ConvTranspose2d(out_ch, 1, kernel_size=5, stride=5, padding=0),
            nn.ReLU(True)
        )

    def forward(self, x):
        x = self.encoder(x)
        print(x.size())
        x = self.decoder(x)
        print(x.size())
        return x

net = CFD_CNN()

criterion = nn.CrossEntropyLoss()
optimizer = optim.SGD(net.parameters(), lr=0.001, momentum=0.9)

for epoch in range(2):

    running_loss = 0.0
    for i, data in enumerate(train_loader, 0):

        distfunc, flowfields = data

        optimizer.zero_grad()

        outputs = net(distfunc)

        loss = criterion(outputs, flowfields)

        loss.backward()

        optimizer.step()

        running_loss += loss.item()

print("Finished Training")

Thanks in advance.

Your first convolution has in_channels=1,
so the tensor given to the model should have shape [batch_size, 1, height, width].
It seems it instead receives a tensor of shape [1, height, width].

If the 1 already represents your batch_size=1, then you need to add another channel dimension before the height (so at position 1):

outputs = net(distfunc.unsqueeze(dim=1))
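As a quick sanity check, here is a minimal sketch of what that unsqueeze does to the shape (the sizes are just taken from your error message, and the tensor contents are dummy values):

import torch

distfunc = torch.randn(1, 250, 250)     # [batch_size, height, width], channel dim missing
x = distfunc.unsqueeze(dim=1)           # insert a channel dimension at position 1
print(x.shape)                          # torch.Size([1, 1, 250, 250])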

Thanks, but I already solved it. I have another question: why do my tensors have extra brackets, like the one below? Am I doing something wrong, or is this OK? I am new to PyTorch and still learning, as you can see. Thanks.

In [35]: distfunc
Out[35]:
tensor([[[[0.7022, 0.6965, 0.6908,  ..., 0.6956, 0.7012, 0.7069],
          [0.6995, 0.6937, 0.6879,  ..., 0.6927, 0.6984, 0.7040],
          [0.6967, 0.6909, 0.6851,  ..., 0.6899, 0.6955, 0.7012],
          ...,
          [0.6967, 0.6909, 0.6851,  ..., 0.6899, 0.6955, 0.7012],
          [0.6995, 0.6937, 0.6879,  ..., 0.6927, 0.6984, 0.7040],
          [0.7022, 0.6965, 0.6908,  ..., 0.6956, 0.7012, 0.7069]]]],
       dtype=torch.float64)

Extra brackets mean extra dimensions.
A tensor with 1 dimension is printed with 1 opening bracket, kind of like a list.
A tensor with 2 dimensions is printed with 2 opening brackets, kind of like a list of lists.
A tensor with 4 dimensions is printed with 4 opening brackets, kind of like a list of lists of lists of lists.

Like in the movie Inception they go 4 levels deep, so we see them start the dream machine 4 times.
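You can verify this yourself with a minimal sketch (the shapes here are arbitrary, chosen only to show how the printing changes):

import torch

print(torch.zeros(2))            # 1 dimension  -> 1 opening bracket
print(torch.zeros(2, 2))         # 2 dimensions -> 2 opening brackets
print(torch.zeros(1, 1, 2, 2))   # 4 dimensions -> 4 opening brackets
print(torch.zeros(1, 1, 2, 2).dim())  # 4

In your case the four brackets correspond to [batch_size, channels, height, width], which is exactly the shape a Conv2d expects.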

Simply add a batch_size dimension to your input: PyTorch expects the batch dimension even if you only have a single image.
For instance, if your input has shape (3, 227, 227), you can easily add the extra batch dimension as follows:

inputs = inputs.unsqueeze(0)

In this case the shape of your input will be (1, 3, 227, 227).
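A minimal sketch of that (the tensor contents are dummy values, only the shape matters):

import torch

inputs = torch.randn(3, 227, 227)   # single image: [channels, height, width]
inputs = inputs.unsqueeze(0)        # add the batch dimension at position 0
print(inputs.shape)                 # torch.Size([1, 3, 227, 227])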