How do I forcefully release the memory of a tensor?

I want to forcefully release a tensor’s memory, even if there are still live references to it. Ideally, the tensor should raise an exception if it is accessed after being released.

I want this feature so that I can release all tensor memory with a function like this:

import gc
import torch

def torch_gpu_remove_all():
    # Scan every object the garbage collector is tracking and look for
    # CUDA tensors (including objects whose .data attribute is a tensor).
    for obj in gc.get_objects():
        try:
            if torch.is_tensor(obj) or (
                hasattr(obj, "data") and torch.is_tensor(obj.data)
            ):
                if obj.is_cuda:
                    del obj  # drops only this local reference
        except Exception:
            pass

This would let me easily free up GPU memory without restarting the Jupyter kernel. I find this ability quite useful, especially when a GPU is shared intermittently by multiple users. It lets me keep all state except what lives on the GPU, which is better than resetting everything.
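For concreteness, a cleanup cell would look roughly like this (a sketch, assuming a CUDA device and the helper above; gc.collect() and torch.cuda.empty_cache() are used afterwards because the caching allocator otherwise holds on to the freed blocks):

import gc
import torch

torch_gpu_remove_all()      # drop the tensor references the helper can reach
gc.collect()                # collect any tensors that are now unreferenced
torch.cuda.empty_cache()    # release unused cached blocks so other processes can use the memory
print(torch.cuda.memory_allocated(), torch.cuda.memory_reserved())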

Are you sure the “del” operation will free the tensor? “del” triggers the “free” in the CUDACachingAllocator only when the reference count drops to 0. Actually, “del” just decreases the reference count by 1.
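A minimal sketch of that behaviour, assuming a CUDA device is available (the sizes are arbitrary):

import torch

x = torch.empty(1024, 1024, device="cuda")  # roughly 4 MB allocated by the caching allocator
y = x                                       # second reference to the same tensor
print(torch.cuda.memory_allocated())        # non-zero

del x                                       # only decrements the ref count; `y` keeps the tensor alive
print(torch.cuda.memory_allocated())        # unchanged

del y                                       # ref count reaches 0, so the block is actually freed
print(torch.cuda.memory_allocated())        # drops; the block stays cached (see torch.cuda.memory_reserved())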