PyTorch model inference is much slower when running inside a Django server

I have a custom PyTorch vision model that takes under 4 seconds to run inference on 5 images on the CPU when invoked as a standalone Python program. When the same inference method is called from inside a Django server (also on CPU), getting the output takes ~200 seconds. How can I fix this? I tried increasing the number of threads from 4 to 16, but it made no difference.
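
For context, here is a minimal sketch of how the inference is wired into the Django app. The model class, weight file, and helper names below are placeholders rather than my actual code, and I am assuming `torch.set_num_threads` is the relevant knob for the thread-count change mentioned above:

```python
# views.py (simplified sketch; MyVisionModel, preprocess, and the
# weight path are placeholders for my actual code)
import torch
from django.http import JsonResponse

from .predictor import MyVisionModel, preprocess  # hypothetical module

torch.set_num_threads(16)  # tried values from 4 to 16; no effect

# Model is loaded once at module import time, not per request
model = MyVisionModel()
model.load_state_dict(torch.load("weights.pth", map_location="cpu"))
model.eval()

def infer(request):
    # Batch the uploaded images into a single (N, C, H, W) tensor
    images = preprocess(request.FILES.getlist("images"))
    with torch.no_grad():  # disable autograd bookkeeping for inference
        outputs = model(images)
    return JsonResponse({"predictions": outputs.tolist()})
```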