RuntimeError: Kernel size can't greater than actual input size

I’m getting the following error:

RuntimeError: Calculated padded input size per channel: (1 x 4). Kernel size: (1 x 5). Kernel size can't greater than actual input size at /pytorch/aten/src/THNN/generic/SpatialConvolutionMM.c:48

Here is my encoder:

class ConvSentEncoder(nn.Module):
    """Convolutional word-level sentence encoder
    w/ max-over-time pooling, [3, 4, 5] kernel sizes, ReLU activation"""
    def __init__(self, vocab_size, emb_dim, n_hidden, dropout):
        super().__init__()
        self._embedding = nn.Embedding(vocab_size, emb_dim, padding_idx=0)
        self._convs = nn.ModuleList([nn.Conv1d(emb_dim, n_hidden, i, stride=2)
                                     for i in range(3, 6)])
        self._dropout = dropout
        self._grad_handle = None

    def forward(self, input_):
        emb_input = self._embedding(input_)
        # (batch, seq_len, emb_dim) -> (batch, emb_dim, seq_len) for Conv1d
        conv_in = F.dropout(emb_input.transpose(1, 2),
                            self._dropout, training=self.training)
        output = torch.cat([F.relu(conv(conv_in)).max(dim=2)[0]
                            for conv in self._convs], dim=1)
        return output

    def set_embedding(self, embedding):
        """embedding is the weight matrix"""
        assert self._embedding.weight.size() == embedding.size()
        self._embedding.weight.data.copy_(embedding)

I’m using PyTorch 0.4.0 with CUDA 9.0 and these settings:

batch size: 64
word embedding dimension: 128
number of Conv hidden units: 100
maximum words per sentence: 100
maximum sentences: 60

I tried changing the parameters but still get the error. I’d appreciate any pointers to a solution.
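For what it’s worth, here is a minimal sketch that reproduces the error in isolation, assuming the dimensions above (emb_dim=128, n_hidden=100): a kernel of width 5 cannot slide over a sequence whose padded length is only 4, which happens whenever a batch contains only very short sentences. Adding `padding` to the Conv1d so the kernel always fits is one workaround; the hypothetical `padding=4` value below is just an illustration, not part of my actual model.

    import torch
    import torch.nn as nn

    conv = nn.Conv1d(in_channels=128, out_channels=100, kernel_size=5, stride=2)
    short = torch.randn(1, 128, 4)  # (batch, emb_dim, seq_len); seq_len 4 < kernel 5

    try:
        conv(short)
    except RuntimeError as e:
        # Raises: kernel size can't be greater than actual input size
        print(e)

    # Padding the convolution lets the kernel fit even on length-4 inputs
    padded_conv = nn.Conv1d(128, 100, kernel_size=5, stride=2, padding=4)
    print(padded_conv(short).shape)  # output length = (4 + 2*4 - 5)//2 + 1 = 4

Alternatively, padding each sentence up to at least the largest kernel size (5 tokens here) before embedding would also avoid the crash without changing the convolutions.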