Size of input tensor in a Linear Layer

I started working with PyTorch a few weeks ago (no prior knowledge of ML).

I want to build an image classifier that detects whether an image is a cat or a dog.

I have data in the form of:

tensor([[1.6886e-06, 4.2819e-06, 1.3871e-06,  ..., 1.4353e-05, 1.4173e-05,
         1.3931e-05],
        [1.5680e-06, 4.9453e-06, 1.2062e-06,  ..., 1.5017e-05, 1.4896e-05,
         1.4173e-05],
        [1.3871e-06, 5.3072e-06, 2.5330e-06,  ..., 1.4052e-05, 1.3992e-05,
         1.3992e-05],
        ...,
        [5.8499e-06, 5.9706e-06, 5.2469e-06,  ..., 2.4727e-06, 1.8093e-06,
         2.5933e-06],
        [5.0056e-06, 5.1262e-06, 5.3072e-06,  ..., 3.6185e-06, 1.4474e-06,
         1.8696e-06],
        [4.8247e-06, 5.1865e-06, 5.2469e-06,  ..., 4.1613e-06, 3.2567e-06,
         1.8093e-06]])
tensor([[0., 1.],
        [0., 1.],
        [1., 0.],
        ...,
        [0., 1.],
        [0., 1.],
        [0., 1.]])

The images are 50 by 50 grayscale, and each label is a 1D one-hot tensor ([0., 1.] or [1., 0.]).

This is my Network Class:

import torch
import torch.nn as nn
import torch.nn.functional as F


class Net(nn.Module):
    def __init__(self):
        super().__init__()
        # fully connected layers: 50*50 = 2500 input features -> 2 classes
        self.fc1 = nn.Linear(50*50, 64)
        self.fc2 = nn.Linear(64, 64)
        self.fc3 = nn.Linear(64, 64)
        self.fc4 = nn.Linear(64, 2)

    def forward(self, x):
        x = F.relu(self.fc1(x))
        x = F.relu(self.fc2(x))
        x = F.relu(self.fc3(x))
        x = self.fc4(x)
        return F.log_softmax(x, dim=1)


net = Net()

This is my training loop using optim.Adam:

EPOCHS=6

for epoch in range(EPOCHS):
    for key in range(24946):
        X = data[key]
        y = out[key]
        net.zero_grad()
        output = net(X)
        loss = F.nll_loss(output, y)
        loss.backward()
        optimizer.step()
    print(loss)

However, I get the following error:
RuntimeError: size mismatch, m1: [1 x 50], m2: [2500 x 64]

I believe this is due to the shape of the input to my first layer.
Can someone please explain how I should feed each image into my network?

Hello Shell!

I am guessing that this tensor is the X you input to your
network in net(X) and that it has shape [50, 50].

The problem is that pytorch networks (models) always
work with batches of input samples (even if you don’t
want them to – if you want to pass a single sample to a
network, you have to wrap it in a batch of batch-size 1).

So (according to my theory) pytorch is interpreting your
shape [50, 50] input tensor as a batch of 50 input samples,
where each sample has shape [50]. But your first Linear
layer is expecting an input of shape [2500], hence the
“size mismatch” error (50 != 2500).
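
To make this concrete, here is a minimal sketch (using a random
tensor as a stand-in for one of your images) that reproduces the
mismatch with a bare Linear layer:

import torch
import torch.nn as nn

lin = nn.Linear(50 * 50, 64)   # expects 2500 input features per sample
img = torch.randn(50, 50)      # stand-in for one of your 50 x 50 images

# The next line, uncommented, raises the same kind of size-mismatch
# RuntimeError, because the layer sees 50 samples of 50 features each:
# lin(img)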

You can print out X.shape (and y.shape, for that matter)
to see if this is what is going on.
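
For example (assuming data and out hold the tensors you posted above):

X = data[key]
y = out[key]
print(X.shape, y.shape)   # if my guess is right: torch.Size([50, 50]) torch.Size([2])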

(Note, Linear(50*50, 64) is exactly the same as
Linear(2500, 64). The fact that you write the number
2500 as 50*50 does not somehow tell the layer to accept
an input of shape [50, 50].)
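
For instance:

print(nn.Linear(50*50, 64).in_features)   # 2500 -- identical to nn.Linear(2500, 64)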

You would first need to flatten() your input tensor X to
give it shape [2500], and then unsqueeze() it to turn it
into a batch (of batch-size 1 containing only one sample)
with shape [1, 2500].

Thus:
X = torch.unsqueeze(torch.flatten(X), dim=0)
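
(The same reshape can also be spelled in one step; for your case these
should be equivalent, with reshape() copying only if the memory layout
requires it:)

X = data[key]                        # presumably shape [50, 50]
X = torch.flatten(X).unsqueeze(0)    # shape [1, 2500]
# or, equivalently:
X = data[key].reshape(1, -1)         # also shape [1, 2500]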

Good luck.

K. Frank