Is it possible to catch if .to(device) doesn't have enough memory?

I have created a tensor on the CPU and want to move it to the GPU.

Is there a way to check whether there’s enough memory available on the GPU?

No, I don’t think so, unless you disable our caching mechanism and use cudaMemGetInfo via torch.cuda.cudart().
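As an aside, newer PyTorch releases expose cudaMemGetInfo directly as `torch.cuda.mem_get_info`, which returns the free and total device memory in bytes as seen by the CUDA driver. A minimal sketch (note that the reported free memory still does not account for what the caching allocator could reclaim from its cache):

```python
import torch

def report_gpu_memory(device=0):
    # torch.cuda.mem_get_info wraps cudaMemGetInfo and returns
    # (free_bytes, total_bytes) from the driver's point of view.
    free, total = torch.cuda.mem_get_info(device)
    print(f"free: {free / 1e9:.2f} GB / total: {total / 1e9:.2f} GB")
    return free, total

if torch.cuda.is_available():
    report_gpu_memory()
```

Keep in mind this is a driver-level view: memory cached by PyTorch's allocator counts as "used" here even though it could be reused for new tensors.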
In the default setup the caching allocator is used so that already allocated device memory can be reused without calling the slow and synchronizing cudaMalloc/cudaFree excessively.
If a memory allocation fails, the allocator will try to free its cache and reallocate a larger block. Due to this behavior I don’t think there is a clean way of checking the free memory without letting the caching allocator attempt the allocation itself using its different strategies (reusing the cache, allocating new memory, or freeing the cache before retrying the allocation).
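Given that, the practical approach to the title question is not to pre-check the free memory but to attempt the `.to(device)` call and catch the failure. A sketch, assuming a PyTorch version that raises `torch.cuda.OutOfMemoryError` (a subclass of `RuntimeError`, so catching `RuntimeError` also works on older versions); `try_to_device` is a hypothetical helper name:

```python
import torch

def try_to_device(tensor, device):
    """Attempt to move `tensor` to `device`; return None on CUDA OOM."""
    try:
        return tensor.to(device)
    except torch.cuda.OutOfMemoryError:
        # At this point the caching allocator has already tried to free
        # its cache and retry internally, so the move genuinely failed.
        return None

x = torch.empty(1024)
device = "cuda" if torch.cuda.is_available() else "cpu"
moved = try_to_device(x, device)
if moved is None:
    print("not enough GPU memory for this tensor")
```

By the time the exception reaches your code, the allocator has exhausted its fallback strategies, so this catches exactly the "not enough memory" case the question asks about.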