Seq2seq inference is slow

I tried the seq2seq PyTorch implementation available here: pytorch-seq2seq. After profiling the evaluation code (evaluate.py), the piece taking the most time was the decode_minibatch method (github.com/MaximumEntropy/Seq2Seq-PyTorch/blob/master/evaluate.py#L74).
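For anyone who wants to reproduce the profiling, here is a minimal sketch using Python's built-in cProfile; the `run_evaluation()` entry point is an assumption, not the repo's actual function name:

```python
import cProfile
import pstats

# Hypothetical driver call: replace `run_evaluation()` with however you
# invoke the evaluation loop in evaluate.py (the name is an assumption).
cProfile.run("run_evaluation()", "eval.prof")

# Print the ten functions with the highest cumulative time; in my run,
# decode_minibatch dominated this list.
pstats.Stats("eval.prof").sort_stats("cumulative").print_stats(10)
```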

I trained the model on a GPU and loaded it in CPU mode for inference. Unfortunately, every sentence takes ~10 s to decode. Is such slow prediction expected with PyTorch?
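For reference, this is roughly how I load the GPU-trained weights on the CPU-only machine (a minimal sketch; "model.pt" is a placeholder path and `model` is assumed to be constructed with the same architecture as at training time):

```python
import torch

# Assumes `model` was already built with the training-time architecture;
# "model.pt" is a placeholder for the actual checkpoint file.
state_dict = torch.load("model.pt", map_location="cpu")
model.load_state_dict(state_dict)
model.eval()  # disable dropout for deterministic inference
```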

Any fixes or suggestions to speed this up would be much appreciated. Thanks.

Hello, I wonder if you have solved the seq2seq speed problem?

Hi,
It looks like PyTorch is not as optimized for CPU as it is for GPU, and fixing this is on their high-priority list.

You can refer to the link below on the same issue:

twitter.com/haldaume3/status/900775899431305217
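In the meantime, two generic CPU-side mitigations may shave some time off decoding (a minimal sketch, not the repo's code; the thread count and the `test_batches` iterable are assumptions):

```python
import torch

# Match the thread count to your physical cores; oversubscription on
# CPU can make inference slower rather than faster.
torch.set_num_threads(4)

model.eval()
with torch.no_grad():  # skip autograd bookkeeping during decoding
    for src_batch in test_batches:  # `test_batches` is a placeholder
        output = model(src_batch)
```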

In case you are wondering what I did after hitting this bottleneck: I switched to the TensorFlow implementation.