RuntimeError: Given input size: (1 x 3 x 300). Calculated output size: (100 x 0 x 1). Output size is too small at /pytorch/torch/lib/THNN/generic/SpatialConvolutionMM.c:45

Hi all. I am hitting the above error while training the model below. Can anyone explain what exactly the error means and how to resolve it?

class CNN_Text(nn.Module):

    def __init__(self):
        super(CNN_Text, self).__init__()
        V = 17419       # vocabulary size
        D = 300         # embedding dimension
        C = 4           # number of classes
        Ci = 1          # input channels
        Co = 100        # output channels per kernel size
        Ks = [3, 4, 5]  # kernel heights
        self.embed = nn.Embedding(V, D)
        print self.embed.shape
        self.convs1 = nn.ModuleList([nn.Conv2d(Ci, Co, (K, D)) for K in Ks])
        self.dropout = nn.Dropout(0.5)
        self.fc1 = nn.Linear(len(Ks) * Co, C)

    def conv_and_pool(self, x, conv):
        x = F.relu(conv(x)).squeeze(3)             # (N, Co, W)
        x = F.max_pool1d(x, x.size(2)).squeeze(2)  # (N, Co)
        return x

    def forward(self, x):
        x = self.embed(x)   # (N, W, D)
        x = x.unsqueeze(1)  # (N, Ci, W, D)
        x = [F.relu(conv(x)).squeeze(3) for conv in self.convs1]  # [(N, Co, W), ...] * len(Ks)
        x = [F.max_pool1d(i, i.size(2)).squeeze(2) for i in x]    # [(N, Co), ...] * len(Ks)
        x =, 1)       # (N, len(Ks) * Co)
        x = self.dropout(x)
        logit = self.fc1(x)       # (N, C)
        return logit

What version of pytorch are you on? I’m having trouble running the code due to some other errors.

The error generally means that one of the modules you used computed an output size that is not possible, i.e., (100, 0, 1). This could mean that the input you passed in was too small, or that there is a bug somewhere.
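To make that concrete, here is a small sketch (the helper name is mine, not part of the original code) of the standard output-size formula that nn.Conv2d uses. With no padding and stride 1, a kernel of height 4 on a sequence of length 3 produces the 0 in the (100 x 0 x 1) from the error, and a kernel width of 300 on an embedding dimension of 300 produces the 1:

```python
# Standard convolution output-size formula (see the nn.Conv2d docs);
# helper name is hypothetical, used only to illustrate the error.
def conv_out_size(in_size, kernel_size, stride=1, padding=0):
    # Output length along one spatial dimension.
    return (in_size + 2 * padding - kernel_size) // stride + 1

print(conv_out_size(3, 4))      # 0  -> "Output size is too small"
print(conv_out_size(300, 300))  # 1  -> the last dim in (100 x 0 x 1)
print(conv_out_size(18, 4))     # 15 -> a length-18 sequence is fine
```

So any input sequence shorter than the largest kernel height will trigger this error.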

I am on version 0.3.0.

Two quick questions:

What is the shape of the input you’re passing to CNN_Text? (So I can reproduce the error message; it’s not 1x3x300, is it?)

nit: The nn.Embedding module doesn’t have a shape attribute, does it? Why is there a print self.embed.shape line?

I thought of printing the shape of the embedding, but that seems to be wrong. The shape of my input is (64L, 18L). I am new to this field and I don’t know what sizes fit and what sizes don’t.
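For what it’s worth, a sketch of how you could inspect the sizes instead (the module itself has no shape attribute, but its weight does, and you can print the size of the output tensor):

```python
# Assumes the V=17419, D=300 values from the model above.
import torch
import torch.nn as nn

embed = nn.Embedding(17419, 300)
print(embed.weight.size())  # torch.Size([17419, 300]) -- the (V, D) weight matrix

x = torch.zeros(64, 18).long()  # integer indices, same shape as your (64L, 18L) input
print(embed(x).size())          # torch.Size([64, 18, 300]) -- (N, W, D)
```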

I’m having trouble running your model with CNN_Text(torch.randn(64, 18)). In general, though, with some print statements like you’re doing right now, you should be able to find out which layer you’re passing the wrong input to. If it’s one of the conv layers, the docs have a formula for what size the output ends up being.
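Here is a minimal reproduction I put together (my own sketch, not your training code) using just one of the conv layers. It shows that an 18-token batch like yours goes through fine, while a 3-token batch fails the same way as the error at the top of the thread:

```python
# One conv layer from the model: Ci=1, Co=100, kernel (K=5, D=300).
import torch
import torch.nn as nn

D = 300
conv = nn.Conv2d(1, 100, (5, D))

ok = torch.randn(64, 1, 18, D)  # (N, Ci, W, D) with W=18 >= 5
print(conv(ok).size())          # torch.Size([64, 100, 14, 1])

too_short = torch.randn(64, 1, 3, D)  # W=3 < 5: kernel taller than the input
try:
    conv(too_short)
except RuntimeError as e:
    print("conv failed on the short input:", e)
```

So with Ks = [3, 4, 5], every sequence in a batch needs to be padded to at least 5 tokens before it reaches the conv layers. Your (64, 18) input is long enough, so it may be worth checking whether some other batch in your data ends up shorter after preprocessing.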