Conv1D on word embeddings

Let’s say I have a mini-batch of 4 sentences with 9, 4, 10, and 7 words, respectively, and I want to apply a Conv1d filter over them. I use an embedding dimension of 300 for each word, and since the sentences have different lengths, I pad with index 0 to form the mini-batch. My input is thus shaped [4, 10, 300], and to apply the Conv1d filter it is permuted to [4, 300, 10]. With this layout, the Conv1d filter slides over the time dimension, so it does not ignore the embeddings at the padded positions.
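A minimal sketch of this setup (the vocabulary size of 1000, the 64 output channels, and the kernel size of 3 are just placeholder choices):

```python
import torch
import torch.nn as nn

# 4 sentences with lengths 9, 4, 10, 7, padded with index 0 up to length 10
lengths = torch.tensor([9, 4, 10, 7])
batch = torch.zeros(4, 10, dtype=torch.long)
for i, n in enumerate(lengths):
    batch[i, :n] = torch.randint(1, 1000, (int(n),))  # dummy word indices

# padding_idx=0 keeps the padding embedding at zero and blocks its gradient
embed = nn.Embedding(num_embeddings=1000, embedding_dim=300, padding_idx=0)
conv = nn.Conv1d(in_channels=300, out_channels=64, kernel_size=3, padding=1)

x = embed(batch)         # [4, 10, 300]
x = x.permute(0, 2, 1)   # [4, 300, 10] -- Conv1d expects (batch, channels, length)
out = conv(x)            # [4, 64, 10]
```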

How do I make sure that the effect of the embeddings at the padded positions is not taken into account? With a GRU, for example, we can use pack_padded_sequence to explicitly exclude the padded steps from its effect.
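For concreteness, the only workaround I can think of is masking the convolution output with the true sequence lengths, something like:

```python
# Zero out the conv outputs at padded time steps using the true lengths
mask = torch.arange(10).unsqueeze(0) < lengths.unsqueeze(1)  # [4, 10], True at real tokens
out = out * mask.unsqueeze(1)  # broadcast over the 64 output channels

# Note: with kernel_size=3, outputs just inside the boundary still see some
# padding through the receptive field; this only zeroes the padded positions.
```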

Do you mean how to ignore backpropagation through the padded positions?
Actually, you don’t need to do anything about that. If your neural net is trained well enough (with sufficient training data and enough epochs), it will learn to ignore the extra padding on its own.
A good way to test this is to increase the sequence length to 100: you should still get results similar to those you’d get with a sequence length of 10.
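For example, with the toy setup from the question (the max-pool over time here is just one illustrative way to compare the two):

```python
# Pad the same batch (from the question's sketch) to length 100 instead of 10
batch_100 = torch.zeros(4, 100, dtype=torch.long)
batch_100[:, :10] = batch

out_10 = conv(embed(batch).permute(0, 2, 1))       # [4, 64, 10]
out_100 = conv(embed(batch_100).permute(0, 2, 1))  # [4, 64, 100]

# Compare a max-pool over time; after sufficient training the two
# feature vectors should come out nearly identical
feat_10 = out_10.max(dim=2).values    # [4, 64]
feat_100 = out_100.max(dim=2).values  # [4, 64]
print((feat_10 - feat_100).abs().max())
```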