What exactly are you trying to achieve? The only relation between batch size and the number of parameters is the memory they occupy: for a fixed amount of GPU memory, a larger model leaves less room for activations, so the maximum batch size that fits will be lower.
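A back-of-envelope sketch of that trade-off (all numbers here are illustrative assumptions, not measurements — the GPU budget, optimizer-state multiplier, and per-sample activation cost are made up):

```python
# Rough model: a fixed GPU budget is split between a fixed cost
# (weights + gradients + optimizer state, all scaling with the
# parameter count) and a per-sample cost (activations, scaling
# with the batch size). fp32 assumed throughout.

BYTES_PER_FLOAT = 4            # fp32
GPU_BUDGET = 8 * 1024**3       # assume an 8 GiB card

def max_batch_size(num_params, act_bytes_per_sample):
    """Largest batch that fits, assuming memory =
    weights + grads + 2 Adam moments (~4x params) + activations."""
    fixed = 4 * num_params * BYTES_PER_FLOAT
    free = GPU_BUDGET - fixed
    return free // act_bytes_per_sample

# Assume each sample's activations take ~50 MiB (made-up number).
act = 50 * 1024**2
print(max_batch_size(100_000, act))
print(max_batch_size(80_000, act))
```

Note that at 100k vs. 80k parameters the model itself occupies only a couple of MiB either way, so the maximum batch size barely moves — the relation only becomes noticeable once the parameter memory is comparable to the activation memory.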
I am just trying to figure out the relation between these two parameters.
So, for example:
a model with 100k parameters trains with a batch size of 16.
If an optimized model has only 80k parameters, can I feed it a batch size greater than 16? Is that correct?