Slow batch inference

Hi all,

I’m using a trained LinkNet34 model. I was expecting that batch inference on n inputs would be faster than running n single inferences. However, most of the time the batched computation shows only a slight advantage over n single inferences.

Am I missing something?

[Screenshot: inference timings at different batch sizes]

I’m only using CPU. I called model.eval() beforehand. I also tried calling model.forward directly, and running under a torch.no_grad() context; neither affected the results.
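For reference, this is roughly how I time it (a minimal sketch; the small stand-in model and the 3×256×256 input shape are assumptions, not my exact setup):

```python
import time
import torch
import torch.nn as nn

# Stand-in model (assumption: not the actual LinkNet34; any
# eval-mode CNN shows the same pattern)
model = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1),
    nn.ReLU(),
    nn.Conv2d(16, 3, 3, padding=1),
).eval()

n = 8
single = torch.randn(1, 3, 256, 256)  # one image (assumed shape)
batch = torch.randn(n, 3, 256, 256)   # the same n images in one batch

with torch.no_grad():
    # n separate single-image forward passes
    t0 = time.perf_counter()
    for _ in range(n):
        model(single)
    t_single = time.perf_counter() - t0

    # one batched forward pass of size n
    t0 = time.perf_counter()
    model(batch)
    t_batch = time.perf_counter() - t0

print(f"{n} x single: {t_single:.3f}s | batch of {n}: {t_batch:.3f}s")
```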

Thanks in advance,

Hi,

I guess your model is big enough that even a relatively small batch size already uses your CPU completely.
That is why you see better-than-linear scaling going from batch size 1 to 4, but almost exactly linear scaling above that.
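One way to check is to sweep batch sizes and look at the time per sample: it drops while there are still idle cores to recruit, then flattens once they are all busy. Something like this rough sketch (with a small stand-in model, not your LinkNet34):

```python
import time
import torch
import torch.nn as nn

# Stand-in model (assumption), just to illustrate the scaling behaviour
model = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1),
    nn.ReLU(),
    nn.Conv2d(16, 3, 3, padding=1),
).eval()

def time_forward(bs, iters=20):
    """Average wall-clock time of one forward pass at batch size bs."""
    x = torch.randn(bs, 3, 256, 256)
    with torch.no_grad():
        model(x)  # warm-up pass, excluded from timing
        t0 = time.perf_counter()
        for _ in range(iters):
            model(x)
        return (time.perf_counter() - t0) / iters

print("intra-op threads:", torch.get_num_threads())
for bs in (1, 2, 4, 8, 16):
    t = time_forward(bs)
    print(f"batch {bs:2d}: {t:.4f}s per forward, {t / bs:.4f}s per sample")
```

If you additionally force torch.set_num_threads(1), the per-sample time should be roughly flat from batch size 1 onward, which would confirm that intra-op parallelism is what made small batches look better than linear.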