import torch

device = None
if torch.cuda.is_available():
    device = torch.device("cuda")
    print("cuda is available!")
    print('Memory Usage:')
    print('Max Alloc:', round(torch.cuda.max_memory_allocated(0) / 1024**3, 1), 'GB')
    print('Allocated:', round(torch.cuda.memory_allocated(0) / 1024**3, 1), 'GB')
    # torch.cuda.memory_cached is deprecated in 1.5 in favor of memory_reserved,
    # but it still works and reports the cached (reserved) memory
    print('Cached: ', round(torch.cuda.memory_cached(0) / 1024**3, 1), 'GB')
    print('cuDNN: ', torch.backends.cudnn.version())
else:
    device = torch.device("cpu")
This gives me:
cuda is available!
Memory Usage:
Max Alloc: 0.0 GB
Allocated: 0.0 GB
Cached: 0.0 GB
cuDNN: 7604
The problem is that these are the first few lines executed after the module imports, so the zero memory usage is expected and nothing else is occupying the GPU; the error below should not happen.

I've installed cuDNN 7.6.4 and CUDA 10.1, and my torch version is 1.5.0+cu101.
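For completeness, this is how I check which CUDA/cuDNN build torch itself reports (these are all standard torch attributes, nothing custom):

import torch

print(torch.__version__)               # 1.5.0+cu101
print(torch.version.cuda)              # CUDA version torch was built against
print(torch.backends.cudnn.version())  # cuDNN version torch loads
print(torch.backends.cudnn.enabled)    # whether cuDNN is enabled at all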
When I try to run my code, it gives me an error:
RuntimeError: cuDNN error: CUDNN_STATUS_INTERNAL_ERROR
on this line:
model = GRU(input_size, hidden_size, num_layers).to(device)
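For reference, here is a minimal, self-contained version of the failing step. The GRU class below is a simplified stand-in for my actual model (just an nn.GRU wrapper), and the hyperparameter values are arbitrary placeholders:

import torch
import torch.nn as nn

class GRU(nn.Module):
    # Simplified stand-in for my model: a plain nn.GRU wrapper.
    def __init__(self, input_size, hidden_size, num_layers):
        super(GRU, self).__init__()
        self.gru = nn.GRU(input_size, hidden_size, num_layers, batch_first=True)

    def forward(self, x):
        out, _ = self.gru(x)
        return out

device = torch.device("cuda")
input_size, hidden_size, num_layers = 10, 20, 2  # arbitrary placeholder values
model = GRU(input_size, hidden_size, num_layers).to(device)  # <- CUDNN_STATUS_INTERNAL_ERROR raised here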