Issue with different versions of cuDNN

Hi,

I’m using a pretrained model for inference that produces wrong results when I disable cuDNN. Details below.

I have two conda environments, E1 and E2:
E1 - PyTorch 1.7.0 + cuDNN 8003
E2 - PyTorch 1.5.1 + cuDNN 7603

When I do inference with the pretrained model, E1 and E2 give exactly the same results.

If I add torch.backends.cudnn.enabled = False to my code, E1 gives completely arbitrary results (performance drops from 50% to 2%), but E2 continues to work fine. I’m at my wits’ end as to what might even be causing this.
Can someone help me with this?
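For reference, here is a minimal sketch of how I can reproduce the comparison: run the same convolution with cuDNN enabled and then disabled, and measure the difference. The layer and input shapes below are placeholders, not my actual model.

```python
# Compare a conv forward pass with and without cuDNN.
# Placeholder layer/input sizes; not the original model.
import torch
import torch.nn as nn

def conv_diff(device="cuda"):
    torch.manual_seed(0)
    conv = nn.Conv2d(3, 8, kernel_size=3, padding=1).to(device).eval()
    x = torch.randn(1, 3, 32, 32, device=device)

    with torch.no_grad():
        torch.backends.cudnn.enabled = True
        out_cudnn = conv(x)
        torch.backends.cudnn.enabled = False  # falls back to the native kernels
        out_native = conv(x)

    # Tiny float noise (~1e-6) is expected between backends; a large gap
    # points at the native conv path rather than the model itself.
    return (out_cudnn - out_native).abs().max().item()

if torch.cuda.is_available():
    print("max abs diff:", conv_diff())
```

In E2 the difference is at float-noise level, while in E1 the outputs diverge badly.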

Thanks!

This doesn’t seem to be a cudnn issue if disabling it breaks the output, or am I misunderstanding the issue?
Could you update PyTorch to 1.7.1 as well as the nightly release and test your model again? There might have been a known issue in the native conv implementation (which would be used if cudnn is disabled), which might have been fixed already.
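After upgrading, it’s worth confirming which versions are actually active in each environment, e.g. with a quick check like this:

```python
# Print the PyTorch and cuDNN versions the current environment is using.
import torch

print("PyTorch:", torch.__version__)
print("cuDNN:", torch.backends.cudnn.version())  # None on CPU-only builds
print("CUDA available:", torch.cuda.is_available())
```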