Does torch.autograd.profiler.profile require a GPU?

I ran my model with GPU turned off (CUDA_VISIBLE_DEVICES="") and I see this error:

RuntimeError: /opt/conda/conda-bld/pytorch-nightly_1543051141017/work/torch/csrc/autograd/profiler.cpp:131: no CUDA-capable device is detected

Does torch.autograd.profiler.profile require that a GPU be enabled? I wanted to collect profiling info on CPU only, in addition to running with both CPU and GPU.

Hi,

No, it should not.
Make sure to set use_cuda=False when creating the profiler.
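A minimal sketch of what that looks like, assuming a small `nn.Linear` model as a stand-in for yours:

```python
import torch
import torch.autograd.profiler as profiler

x = torch.randn(32, 128)
linear = torch.nn.Linear(128, 64)

# use_cuda=False restricts the profiler to CPU events,
# so no CUDA-capable device is needed.
with profiler.profile(use_cuda=False) as prof:
    y = linear(x)
    y.sum().backward()

# Print a per-op summary sorted by total CPU time.
print(prof.key_averages().table(sort_by="cpu_time_total"))
```

With CUDA_VISIBLE_DEVICES="" this should run without the RuntimeError above, since no CUDA initialization is attempted.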

Is there a tool to profile the memory usage of tensors generated during the network's forward and backward passes?

For CPU, you can use your preferred Python memory profiler (such as memory-profiler) to do it.
For GPU, you can use functions like this that will give you the GPU memory used by Tensors.
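As an illustration of the GPU side, the sketch below uses `torch.cuda.memory_allocated` (an assumption about which family of functions the link points to), which reports the bytes currently occupied by tensors on a device:

```python
import torch

if torch.cuda.is_available():
    # Bytes held by tensors before and after an allocation.
    before = torch.cuda.memory_allocated()
    x = torch.randn(1024, 1024, device="cuda")
    after = torch.cuda.memory_allocated()
    print(f"x occupies roughly {after - before} bytes on the GPU")
else:
    # On a CPU-only run these counters are not available.
    print("No CUDA device; GPU memory stats unavailable.")
```

Note that `memory_allocated` only counts tensor storage; the caching allocator may reserve more from the driver than this number shows.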