I’m using a pretrained model for inference, and it produces wrong results when I disable cuDNN. Details below.
I have two conda environments, E1 and E2:
E1 - PyTorch 1.7.0 + cuDNN 8003
E2 - PyTorch 1.5.1 + cuDNN 7603
When I run inference with the pretrained model, E1 and E2 give exactly the same results.
If I add `torch.backends.cudnn.enabled = False` to my code, E1 gives essentially arbitrary results (accuracy drops from 50% to 2%), but E2 continues to work fine. I’m at my wits’ end as to what might be causing this.
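For reference, here is a minimal sketch of how I’m comparing the two paths. The toy `Conv2d` model here is a stand-in for my actual pretrained network (assumption: any conv layer exercises the cuDNN vs. native-kernel code paths), and `run_inference` is just a hypothetical helper, not part of my real pipeline:

```python
import torch

def run_inference(model, x):
    # Forward pass in eval mode, no gradients needed for inference.
    model.eval()
    with torch.no_grad():
        return model(x)

# Toy stand-in for the pretrained model (assumption for illustration).
model = torch.nn.Conv2d(3, 8, kernel_size=3, padding=1)
x = torch.randn(1, 3, 16, 16)

# The cuDNN flag only matters on GPU; on CPU both paths use the same kernels.
if torch.cuda.is_available():
    model, x = model.cuda(), x.cuda()

torch.backends.cudnn.enabled = True
out_cudnn = run_inference(model, x)

torch.backends.cudnn.enabled = False
out_native = run_inference(model, x)

# The two outputs should agree to within float tolerance; a large gap
# would point at the backend/build rather than the model weights.
print(torch.allclose(out_cudnn, out_native, atol=1e-4))
```

In E2 the `allclose` check passes as expected, which is why the behavior of E1 surprises me.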
Can someone help me with this?