Shape of Conv1D followed by pooling layer

I have a batch of variable-length sequences. I pad them to a fixed length and then apply a Conv1D layer with dilation and stride > 1, followed by a MaxPool layer. I want to figure out the extent of the padding signal in the output sequence, i.e. which part of the output is made up of contributions only from the padding tokens. Right now I am computing output length = math.ceil(non_zero_seq_len / strides) for both the convolution layer and the pooling layer. This gives an answer in the ballpark, but I am not completely sure it's right. I would eventually like to mask these new tensors so I can pass them into a transformer later.
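For reference, here is a minimal sketch of what I mean, using the standard output-length formula that frameworks like PyTorch document for Conv1d/MaxPool1d (the kernel sizes, strides, and sequence length below are made up for illustration) compared against my ceil(len / stride) approximation:

```python
import math

def conv_out_len(length, kernel_size, stride=1, padding=0, dilation=1):
    # Exact output length of a 1D conv/pool layer (floor convention,
    # as documented for PyTorch's Conv1d and MaxPool1d)
    return (length + 2 * padding - dilation * (kernel_size - 1) - 1) // stride + 1

# Hypothetical example: true (non-padding) length 37 inside a padded batch,
# passed through Conv1d(kernel_size=3, stride=2, dilation=2)
# and then MaxPool1d(kernel_size=2, stride=2)
true_len = 37
after_conv = conv_out_len(true_len, kernel_size=3, stride=2, dilation=2)
after_pool = conv_out_len(after_conv, kernel_size=2, stride=2)

# My current approximation: ceil(len / stride) per layer
approx = math.ceil(math.ceil(true_len / 2) / 2)

print(after_conv, after_pool, approx)  # the two disagree unless padding is "same"
```

My understanding is that ceil(len / stride) is only exact under "same" padding, since it ignores kernel size and dilation, but I may be missing something.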

It would be great if I could get some input on whether I am going about this the right way.