Build extensions with/without CUDA

What does the result of torch.cuda.is_available() mean? Does it mean the CUDA tools (nvcc, …) are available, or does it mean CUDA code can actually run (NVIDIA GPU, drivers, …)?
I would like to build extensions when the CUDA toolkit is available, even if no GPU is present. It seems I can import CUDAExtension successfully, only for it to fail once the build starts.

My current workaround is to wrap everything in a try/except:

    try:
        from torch.utils.cpp_extension import CUDAExtension
        import torch
        assert torch.cuda.is_available(), "No CUDA found"
    except (ImportError, OSError, AssertionError) as e:
        CUDAExtension = None
        print("No CUDA was detected, building without CUDA; error: {}".format(e))
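An alternative I've considered is to probe for the toolkit directly rather than for a usable GPU. This is only a sketch: using `nvcc` on `PATH` as a proxy for "the CUDA toolkit is installed" is my own heuristic, and the `CppExtension`/`CUDAExtension` names in the comment are PyTorch's `torch.utils.cpp_extension` classes.

```python
import shutil


def cuda_toolkit_available() -> bool:
    # Proxy check: nvcc on PATH suggests the CUDA toolkit is installed,
    # regardless of whether a GPU or driver is present at build time.
    return shutil.which("nvcc") is not None


# Hypothetical setup.py usage, choosing the extension class accordingly:
#   from torch.utils.cpp_extension import CUDAExtension, CppExtension
#   ext_cls = CUDAExtension if cuda_toolkit_available() else CppExtension
print(cuda_toolkit_available())
```

This still doesn't answer whether a build should key off the toolkit or off `torch.cuda.is_available()`, which is the question.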

But is there a standard way to do this?