# Why doesn't my convnet accept my 2D input?

Hi,
I’m quite confused while building my first convolutional network for classification. My input is a [13, 13] tensor, but the convnet expects a 4-D input. How come, and how can I convert my input to fit the convnet?
Below is my implementation of the convnet, followed by an example of the input tensors I use and the error I get.

```python
import torch.nn as nn


class Network(nn.Module):
    def __init__(self):
        super(Network, self).__init__()
        # Convolution 1
        self.cnn1 = nn.Conv2d(in_channels=1, out_channels=16, kernel_size=4, stride=1, padding=3)
        self.relu1 = nn.ReLU()

        # Max pool 1
        self.maxpool1 = nn.MaxPool2d(kernel_size=2)

        # Convolution 2
        self.cnn2 = nn.Conv2d(in_channels=16, out_channels=32, kernel_size=5, stride=1, padding=2)
        self.relu2 = nn.ReLU()

        # Max pool 2
        self.maxpool2 = nn.MaxPool2d(kernel_size=2)

        self.fc1 = nn.Linear(32 * 4 * 4, 13 * 13)

    def forward(self, x):
        # Convolution 1
        out = self.cnn1(x)
        out = self.relu1(out)

        # Max pool 1
        out = self.maxpool1(out)

        # Convolution 2
        out = self.cnn2(out)
        out = self.relu2(out)

        # Max pool 2
        out = self.maxpool2(out)

        out = self.fc1(out)

        return out
```

Example of input:

```
tensor([[0., 0., 1., 0., 0., 1., 1., 0., 0., 0., 0., 0., 0.],
[0., 0., 2., 0., 2., 2., 1., 0., 0., 0., 0., 2., 0.],
[0., 0., 0., 1., 2., 2., 2., 1., 0., 0., 0., 1., 0.],
[0., 1., 0., 2., 1., 1., 2., 1., 1., 1., 0., 0., 2.],
[0., 0., 0., 2., 2., 1., 0., 2., 2., 1., 0., 2., 0.],
[0., 2., 0., 2., 2., 1., 1., 1., 2., 1., 1., 1., 2.],
[0., 0., 0., 1., 2., 0., 0., 1., 2., 2., 0., 0., 0.],
[0., 0., 0., 2., 0., 1., 1., 1., 0., 2., 0., 2., 0.],
[0., 0., 1., 2., 0., 0., 1., 0., 1., 0., 0., 0., 2.],
[0., 1., 2., 2., 0., 1., 2., 0., 1., 2., 0., 0., 0.],
[0., 2., 2., 2., 2., 2., 1., 0., 0., 2., 0., 0., 0.],
[0., 1., 1., 1., 0., 2., 2., 2., 0., 0., 1., 0., 0.],
[0., 0., 0., 0., 1., 1., 2., 0., 1., 0., 0., 0., 0.]])
```

This is the error I get when applying the convnet to tensors like the one above (one at a time):

```
RuntimeError: Expected 4-dimensional input for 4-dimensional weight [16, 1, 4, 4], but got input of size [13, 13] instead
```

The input to the convolution layer must be a 4-dimensional tensor with the following dimensions: `[batch_size, channels, height, width]`.

So, if you have a single example and a single channel, your input should be of size `[1, 1, 13, 13]`. All you need to do is reshape your input to fit that dimensionality requirement in one of the following ways:

```python
# using reshape():
>>> x = x.reshape(1, 1, 13, 13)

# or using view():
>>> x = x.view(1, 1, 13, 13)

>>> x.shape
torch.Size([1, 1, 13, 13])
```

Then, pass this input to your model.

Thanks for your reply. But with a tensor of size (1, 1, 13, 13) I still get a dimension error:
`RuntimeError: size mismatch, m1: [128 x 4], m2: [512 x 169] at /Users/soumith/code/builder/wheel/pytorch-src/aten/src/TH/generic/THTensorMath.cpp:2070`

I don’t understand why (1, 1, 13, 13), which I thought should work, still gives a dimension error. I calculated the padding, kernel size, and stride so that the dimensions would work out, using the expression: `out_width = (in_width - kernel_width + 2 * padding) / stride + 1` (with stride = 1 here).
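
For example, plugging the numbers for the two convolutions into that formula (a quick worked check, stride 1):

```python
# Conv1: 13x13 input, kernel_size=4, padding=3, stride=1
conv1_out = (13 - 4 + 2 * 3) // 1 + 1   # = 16
# Conv2: 8x8 input (after the first 2x2 max pool), kernel_size=5, padding=2, stride=1
conv2_out = (8 - 5 + 2 * 2) // 1 + 1    # = 8
```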

So what I understand is that:

- input dim should be: (1, 1, 13, 13)
- after cnn1: (16, 16, 16) (because kernel=4, padding=3, out_channels=16)
- after maxpool1: (16, 8, 8) (because kernel=2)
- after cnn2: (32, 8, 8) (because kernel=5, padding=2, out_channels=32)
- after maxpool2: (32, 4, 4) (because kernel=2)
- after fc1: (13 * 13)
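
Here is a quick way to double-check those intermediate shapes with a dummy input (just a sketch that runs each block of the model by hand):

```python
import torch

model = Network()
x = torch.zeros(1, 1, 13, 13)   # dummy input: batch=1, channel=1, 13x13

out = model.maxpool1(model.relu1(model.cnn1(x)))
print(out.shape)                # torch.Size([1, 16, 8, 8])

out = model.maxpool2(model.relu2(model.cnn2(out)))
print(out.shape)                # torch.Size([1, 32, 4, 4])

# model.fc1(out) fails at this point: `out` is still 4-D, while the
# linear layer expects its last dimension to be 32 * 4 * 4 = 512 features.
```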

The error is thrown because you are not flattening `out` before passing it to the linear layer.
Add this line before `self.fc1` is called: `out = out.view(out.size(0), -1)`, and it should work.
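
With that change, the forward pass would look something like this (a sketch of the fix, keeping the rest of your model as posted):

```python
def forward(self, x):
    out = self.maxpool1(self.relu1(self.cnn1(x)))    # -> [batch, 16, 8, 8]
    out = self.maxpool2(self.relu2(self.cnn2(out)))  # -> [batch, 32, 4, 4]

    # Flatten to [batch, 32 * 4 * 4] so fc1 sees the 512 features it expects
    out = out.view(out.size(0), -1)
    out = self.fc1(out)                              # -> [batch, 13 * 13]
    return out
```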