Is it normal that when I take a pretrained model that runs on the GPU and move it to the CPU, the detection performance decreases?
And is there a way to eliminate this difference?
As can be seen, the CPU model cannot detect the "America" box,
while the GPU model can:
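For context, below is a minimal sketch of how the move from GPU to CPU is typically done, assuming a PyTorch detection model. The question does not say which model or framework is used, so torchvision's Faster R-CNN and the random input tensor here are only stand-ins for the actual model and image:

```python
import torch
import torchvision

# Illustrative placeholders: torchvision's Faster R-CNN stands in for the
# pretrained detector, and a random tensor stands in for the real image.
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
image = torch.rand(3, 480, 640)

# Inference on the GPU (if one is available).
if torch.cuda.is_available():
    model = model.to("cuda").eval()
    with torch.no_grad():
        gpu_out = model([image.to("cuda")])[0]

# Moving to the CPU: both the weights and the inputs must be on the CPU,
# and the weights should stay in float32 (half precision is mostly CUDA-only).
model = model.to("cpu").float().eval()
with torch.no_grad():
    cpu_out = model([image])[0]

# Detection outputs (boxes, labels, scores) from the CPU run.
print(cpu_out["boxes"].shape, cpu_out["scores"][:5])
```

With identical float32 weights, identical inputs, and `model.eval()` set in both runs, the CPU and GPU outputs would normally differ only by small floating-point rounding, not by whole missed detections.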