'Fixing' the output shape of model layers in a PyTorch model

I have a PyTorch model for a specific computer vision task. When I print the model summary using:

from torchsummary import summary

model = Model()  # instantiate the model
summary(model, (3, 256, 256))

I get the following output:

----------------------------------------------------------------
        Layer (type)               Output Shape         Param #
================================================================
            Conv2d-1         [-1, 48, 256, 256]           1,296
 OverlapPatchEmbed-2         [-1, 48, 256, 256]               0
BiasFree_LayerNorm-3            [-1, 65536, 48]              48
         LayerNorm-4         [-1, 48, 256, 256]               0
            Conv2d-5        [-1, 144, 256, 256]           6,912
            Conv2d-6        [-1, 144, 256, 256]           1,296
            Conv2d-7         [-1, 48, 256, 256]           2,304
         Attention-8         [-1, 48, 256, 256]               0
BiasFree_LayerNorm-9            [-1, 65536, 48]              48
        LayerNorm-10         [-1, 48, 256, 256]               0
                                     .
                                     .

This means the printed tensor shapes are dynamic in the batch dimension. I want to do away with this and make the output shape of every layer static (with batch size = 1). For example, in the summary above, Conv2d-1 should show an output shape of [1, 48, 256, 256] instead of [-1, 48, 256, 256] (and torchsummary should display it that way).

How can I achieve this? I need it because when I convert my PyTorch model to TFLite, the converter complains about the dynamic tensor shapes.

Any help/suggestions would be helpful.
PS: I tried hardcoding the output of each layer to a static shape with batch size 1 inside the forward function of each layer, using

out = model_layer(x)
out = out.view(1, out.shape[0], out.shape[1], out.shape[2])

But this did not work; torchsummary still prints the same output.

Regards!

This issue stops my TFLite model from running correctly on the GPU: it complains about dynamic-sized tensors in the computational graph. I converted my PyTorch model → ONNX → TFLite and then ran it on a smartphone GPU. Investigating the issue, I noticed that the PyTorch summary shows batch size = -1, which reflects a dynamic-sized tensor.

I am speculating this might be the reason why I am getting the aforementioned error. Even though I explicitly set the batch size to 1 inside the forward function of each layer, I still see -1 as the batch size in the PyTorch summary.

Any suggestions to resolve this issue would be really helpful.
Regards!