I ran my model with the GPU disabled (CUDA_VISIBLE_DEVICES="") and I see this error:
RuntimeError: /opt/conda/conda-bld/pytorch-nightly_1543051141017/work/torch/csrc/autograd/profiler.cpp:131: no CUDA-capable device is detected
Does torch.autograd.profiler.profile require that a GPU be enabled? I wanted to collect profiling info on CPU only, in addition to profiling runs that use both CPU and GPU.
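For reference, the autograd profiler can be run without touching CUDA by leaving `use_cuda` at its default of `False` — a minimal sketch, assuming a reasonably recent PyTorch build (exact output columns may differ by version):

```python
import torch
from torch.autograd import profiler

x = torch.randn(64, 64)

# use_cuda=False (the default) restricts the profiler to CPU events,
# so no CUDA-capable device is required.
with profiler.profile(use_cuda=False) as prof:
    y = x @ x

# Print a per-op summary table sorted by total CPU time
print(prof.key_averages().table(sort_by="cpu_time_total"))
```

If the error above appears even with `use_cuda=False`, it may be specific to that nightly build.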
For CPU, you can use your preferred Python memory profiler, such as memory-profiler.
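If you'd rather avoid a third-party dependency, the standard library's `tracemalloc` gives comparable current/peak numbers for Python-level allocations — a minimal sketch (the `allocate` helper is just an illustration):

```python
import tracemalloc

def allocate():
    # Allocate a ~8 MB list of pointers so there is something to measure
    return [0.0] * 1_000_000

tracemalloc.start()
data = allocate()
current, peak = tracemalloc.get_traced_memory()
tracemalloc.stop()

print(f"current: {current / 1e6:.1f} MB, peak: {peak / 1e6:.1f} MB")
```

Note that `tracemalloc` only sees allocations made through Python's allocator, so memory held by native tensor storage may not be fully reflected.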
For GPU, you can use functions such as torch.cuda.memory_allocated() and torch.cuda.max_memory_allocated(), which report the GPU memory used by tensors.
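A minimal sketch of querying those CUDA memory stats, guarded so it also runs on a CPU-only machine like the one described above:

```python
import torch

if torch.cuda.is_available():
    t = torch.randn(1024, 1024, device="cuda")
    # Bytes currently occupied by tensors on the default CUDA device
    print("allocated:", torch.cuda.memory_allocated())
    # High-water mark of tensor memory since the start of the program
    print("peak allocated:", torch.cuda.max_memory_allocated())
else:
    print("No CUDA device available; GPU memory stats cannot be queried.")
```

These counters track memory held by tensors, not the total reserved by the caching allocator.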