Hi! I am trying to train a model using PyTorch, and I read that enabling cuDNN should speed up training. However, training slowed down significantly instead. Can anyone help me with this issue? I ran a few small tests and got the following results:
Enable Cudnn
Conv: 0.5449938774108887
Conv backward: 0.1148219108581543
---------------------------
Disable Cudnn
Conv: 0.01213216781616211
Conv backward: 0.13735365867614746
---------------------------
Here is the code snippet that I used for testing:
import torch
import time


def run():
    in_c = 10
    out_c = 15
    kernel = 3
    padding = 1
    inp = torch.rand(512, in_c, 128, 128, requires_grad=True).cuda()
    conv = torch.nn.Conv2d(in_c, out_c, kernel, padding=padding, bias=False).cuda()
    torch.cuda.synchronize()
    start = time.time()
    out = conv(inp)
    torch.cuda.synchronize()
    print("Conv: ", time.time() - start)
    start = time.time()
    out.sum().backward()
    torch.cuda.synchronize()
    print("Conv backward: ", time.time() - start)
    print('---------------------------')


if __name__ == '__main__':
    torch.backends.cudnn.enabled = True
    print('Enable Cudnn')
    run()
    torch.backends.cudnn.enabled = False
    print('Disable Cudnn')
    run()
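For comparison, here is a sketch of how the same operation could be timed with warmup iterations and averaging, so that one-time costs (CUDA context initialization, cuDNN algorithm selection on the first call) are excluded from the measurement. The `timed` helper, the smaller tensor sizes, and the CPU fallback are my own additions, not part of the snippet above:

```python
import time
import torch


def timed(fn, warmup=3, iters=10):
    # Run a few warmup iterations first so one-time setup costs
    # (lazy CUDA init, cuDNN algorithm selection) are not counted.
    for _ in range(warmup):
        fn()
    if torch.cuda.is_available():
        torch.cuda.synchronize()
    start = time.time()
    for _ in range(iters):
        fn()
    if torch.cuda.is_available():
        torch.cuda.synchronize()
    # Report the average time per iteration.
    return (time.time() - start) / iters


# Hypothetical small convolution to time; falls back to CPU
# when no GPU is present.
device = "cuda" if torch.cuda.is_available() else "cpu"
conv = torch.nn.Conv2d(10, 15, 3, padding=1, bias=False).to(device)
inp = torch.rand(8, 10, 32, 32, device=device)

avg = timed(lambda: conv(inp))
print(f"avg forward: {avg:.6f} s")
```

If the first cuDNN call dominates your measurement, a harness like this should show whether steady-state performance actually differs between the two settings.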