Inference on CPU is very slow

I use a GPU to train a ResNet and save its parameters. Then I load the parameters and run the ResNet on the CPU for inference. I find that the time cost is high, about 70 ms/image. If I run the same net on the GPU, the time cost is about 11 ms/image. The PyTorch version is 1.0.1.
The code for saving the parameters looks like:

torch.save(net.state_dict(), path)

The code for inference looks like:

net.load_state_dict(torch.load(path, map_location='cpu'))

Is my inference speed on the CPU normal? How can I speed up the inference?

My only suggestion would be to make sure you're running predictions inside a with torch.no_grad(): context, so that you're not needlessly calculating gradients.
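As a minimal sketch of that pattern (the architecture, file path, and input tensor below are hypothetical stand-ins for your own):

import torch
import torchvision.models as models

net = models.resnet18()  # hypothetical architecture; substitute your own ResNet
net.load_state_dict(torch.load("resnet_params.pth", map_location="cpu"))  # hypothetical path
net.eval()  # disable dropout and use running batch-norm statistics

image = torch.randn(1, 3, 224, 224)  # stand-in for a preprocessed input image

with torch.no_grad():  # skip autograd bookkeeping during inference
    output = net(image)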

Thanks for the suggestion. I run predictions with net.eval(). After reading your suggestion, I tried with torch.no_grad(): as well, but the problem still exists.

Hi Lionkun!

As I understand it, you're saying that this problem runs six or
seven times faster on your GPU than on your CPU. This sounds
very reasonable – to me it doesn't indicate that you're doing
anything wrong. After all, the whole point of GPUs is that they're
faster (for certain kinds of problems, such as this one).

I don’t have a sense as to whether 70 ms/image is reasonable
for your specific problem, but it doesn’t sound outlandish.

Could there be tweaks that would gain you more performance?
Sure, and maybe some of the experts here will have suggestions.

But if it were me, and I needed the increased performance, I would
(in order of preference)

  1. Use the GPU – that’s kind of the whole point of things like
    PyTorch. (And maybe this means getting a GPU for your
    inference platform; see the sketch after this list.)

  2. Try to find or develop a more compact – smaller, cheaper,
    faster – net that gives you the same (or at least adequate)
    inference performance.

  3. Get a faster CPU.
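As a rough sketch of option 1 (again with hypothetical architecture, path, and input – substitute your own), moving both the net and its inputs onto the GPU for inference might look like:

import torch
import torchvision.models as models

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

net = models.resnet18()  # hypothetical architecture; substitute your own ResNet
net.load_state_dict(torch.load("resnet_params.pth", map_location=device))  # hypothetical path
net.to(device)  # move the weights onto the GPU (if one is available)
net.eval()

image = torch.randn(1, 3, 224, 224).to(device)  # inputs must live on the same device as the net

with torch.no_grad():
    output = net(image)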

Best regards.

K. Frank