Caffe2 and PyTorch benchmark on inference time

Hello all, I am working on a project using PyTorch. However, the inference speed is not good enough. Do you think the inference time will go down if I use Caffe2? Thanks so much.


What does your network look like?
Are you using our latest nightly binaries? They have CPU improvements via MKLDNN.

My network is DenseNet-based. All operators and the network run on GPU. I am using the latest nightly of PyTorch, 1.0.0.dev20181125. I am wondering about the inference-time performance of PyTorch versus Caffe2. Which one is faster?

If you are using the GPU, both frameworks should run DenseNet at about the same speed.
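For an apples-to-apples comparison, it helps to time inference carefully: run warm-up iterations first (to let cuDNN autotuning and caches settle), and average over many runs. A minimal sketch of such a timing helper; `model` and `x` in the comment are placeholders, not names from this thread:

```python
import time

def benchmark(fn, warmup=10, iters=100):
    """Return the mean latency of fn() in seconds, after warm-up runs."""
    for _ in range(warmup):       # warm-up iterations are excluded from timing
        fn()
    start = time.perf_counter()
    for _ in range(iters):
        fn()
    return (time.perf_counter() - start) / iters

# Assumed PyTorch usage (model/x are placeholders):
#   model.eval()
#   with torch.no_grad():
#       mean_s = benchmark(lambda: model(x))
# On GPU, CUDA calls are asynchronous, so synchronize inside the timed call:
#   benchmark(lambda: (model(x), torch.cuda.synchronize()))
```

Without the warm-up and synchronization, GPU timings can be misleadingly small, since the Python call returns before the kernels finish.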