Use module.eval() after module.cuda() takes a very long time

Hi everyone.
I recently ran faster-rcnn_pytorch provided by longcw, and the whole run takes over 700 s.
Later I found that it takes over 300 s just to run module.eval() after module.cuda(). However, module.eval() doesn't take nearly that long on CPU.

I installed PyTorch with Anaconda, and my GPU is a TITAN X (Pascal) with CUDA 8.0.


module.eval() should be unrelated; it only switches submodules such as dropout and batch norm into evaluation mode. You most likely have a bug in your benchmark script.
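One common benchmarking bug: CUDA kernels launch asynchronously, and the first CUDA operation also pays a one-off context-initialization cost, so whatever you time next can be billed for work queued earlier. A minimal sketch of timing with explicit synchronization (the toy model here is a hypothetical stand-in for the Faster R-CNN network):

```python
import time

import torch
import torch.nn as nn

# Toy model standing in for the real network (hypothetical example).
model = nn.Sequential(nn.Conv2d(3, 8, 3), nn.ReLU(), nn.Conv2d(8, 8, 3))

def timed(label, fn):
    # Synchronize before and after timing so that previously queued
    # asynchronous GPU work is not attributed to the wrong call.
    if torch.cuda.is_available():
        torch.cuda.synchronize()
    t0 = time.time()
    result = fn()
    if torch.cuda.is_available():
        torch.cuda.synchronize()
    print(f"{label}: {time.time() - t0:.3f} s")
    return result

if torch.cuda.is_available():
    # The first CUDA call initializes the context, which is slow once;
    # without synchronization that cost can leak into later measurements.
    timed("cuda()", lambda: model.cuda())

# eval() only flips the training flag on each submodule; it launches no
# GPU work, so on its own it should be effectively instant.
timed("eval()", lambda: model.eval())
```

With this kind of measurement, a long time apparently spent in module.eval() usually turns out to belong to the preceding CUDA initialization or to still-running kernels.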

I've solved the problem now!
I had originally installed PyTorch from source, along with magma-cuda80. Later I installed torchvision with conda:
conda install torchvision -c soumith

That forced an update to a pytorch cuda75 build …

Now that I've installed both PyTorch and torchvision from source, the problem is solved.