CUDA allocates 0 memory

import torch

device = None

if torch.cuda.is_available():
    device = torch.device("cuda")
    print("cuda is available!")
    print('Memory Usage:')
    print('Max Alloc:', round(torch.cuda.max_memory_allocated(0)/1024**3, 1), 'GB')
    print('Allocated:', round(torch.cuda.memory_allocated(0)/1024**3, 1), 'GB')
    print('Cached:   ', round(torch.cuda.memory_cached(0)/1024**3, 1), 'GB')  # memory_cached was later renamed to memory_reserved
    print('cuDNN:    ', torch.backends.cudnn.version())
else:
    device = torch.device("cpu")

This gives me:

cuda is available!
Memory Usage:
Max Alloc: 0.0 GB
Allocated: 0.0 GB
Cached:    0.0 GB
cuDNN:     7604

The problem is that these are the first few lines right after the module imports, so this should not happen.
I've installed cuDNN 7.6.4 and CUDA 10.1.
My torch version is 1.5.0+cu101.

So when I try to run my code, it gives me an error like

RuntimeError: cuDNN error: CUDNN_STATUS_INTERNAL_ERROR

on

model = GRU(input_size, hidden_size, num_layers).to(device)

If you just import torch without running or creating any CUDA operations or tensors, the GPU memory should be empty (a CUDA context might be initialized, but wouldn’t show up in torch.cuda.memory_allocated()).
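For example, a quick sketch (assuming a single GPU at index 0) shows the counter only moves once a tensor is actually created on the device:

import torch

print(torch.cuda.memory_allocated(0))  # 0 right after import

x = torch.randn(1024, 1024, device='cuda')  # first CUDA op also initializes the context
print(torch.cuda.memory_allocated(0))  # now nonzero (~4 MB for this float32 tensor)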

For the CUDNN_STATUS_INTERNAL_ERROR:
could you please post a reproducible code snippet?
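Something minimal that we can copy and run would help, e.g. a sketch like this (assuming GRU refers to torch.nn.GRU; the sizes are placeholders, replace them with the values that trigger the error):

import torch
import torch.nn as nn

# hypothetical sizes -- substitute the ones from your script
input_size, hidden_size, num_layers = 10, 20, 2

device = torch.device('cuda')
model = nn.GRU(input_size, hidden_size, num_layers).to(device)

x = torch.randn(5, 3, input_size, device=device)  # (seq_len, batch, input_size)
out, h = model(x)
print(out.shape)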
Also, did you install the PyTorch binaries or build from source? Which GPU are you currently using, and does the code run fine on the CPU?

Thank you for answering! It somehow no longer throws the error after I rebooted my desktop. Thank you!