Question about torch.utils.bottleneck with --no-cuda on MNIST sample

I am trying to measure CPU time only on the MNIST example, but it fails with the following error on PyTorch 0.4.0. Without the profiler, the script works fine.
How can I avoid this issue?

$ python --no-cuda (works fine)
$ python -m torch.utils.bottleneck --no-cuda (fails with the error below)
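For context, the MNIST example gates CUDA with an argparse flag, roughly like this (a sketch; the real script additionally ANDs the flag with torch.cuda.is_available(), omitted here so the snippet runs without a GPU):

```python
import argparse

# Minimal sketch of how the MNIST example parses --no-cuda.
# (The real script also checks torch.cuda.is_available().)
parser = argparse.ArgumentParser(description="MNIST --no-cuda sketch")
parser.add_argument("--no-cuda", action="store_true", default=False,
                    help="disables CUDA training")
args = parser.parse_args(["--no-cuda"])  # simulate `--no-cuda` on the CLI
use_cuda = not args.no_cuda
print(use_cuda)  # → False: the script itself never touches CUDA
```

So the script itself never initializes CUDA when --no-cuda is passed; the failure must come from bottleneck's own startup.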

bottleneck is a tool that can be used as an initial step for debugging
bottlenecks in your program.

It summarizes runs of your script with the Python profiler and PyTorch’s
autograd profiler. Because your script will be profiled, please ensure that it
exits in a finite amount of time.

For more complicated uses of the profilers, please see the Python profiler and autograd profiler documentation for more information.
Running environment analysis…
THCudaCheck FAIL file=/pytorch/aten/src/THC/ line=25 error=2 : out of memory
Traceback (most recent call last):
File “/opt/conda/lib/python3.6/”, line 193, in _run_module_as_main
"__main__", mod_spec)
File “/opt/conda/lib/python3.6/”, line 85, in _run_code
exec(code, run_globals)
File “/opt/conda/lib/python3.6/site-packages/torch/utils/bottleneck/”, line 280, in <module>
File “/opt/conda/lib/python3.6/site-packages/torch/utils/bottleneck/”, line 259, in main
File “/opt/conda/lib/python3.6/site-packages/torch/cuda/”, line 143, in init
File “/opt/conda/lib/python3.6/site-packages/torch/cuda/”, line 161, in _lazy_init
RuntimeError: cuda runtime error (2) : out of memory at /pytorch/aten/src/THC/

MNIST source code
Bottleneck description
Related issue (source code not attached)
RuntimeError: cuda runtime error (2) : out of memory at /pytorch/aten/src/THC/

This error occurs when CUDA memory cannot be allocated even though a CUDA device exists. Naturally, once the CUDA memory is released, it runs fine.

But this problem occurs even when we only plan to profile the CPU (no CUDA profiling intended).

I guess it’s due to the CUDA checks in
torch.cuda.init() seems to be called whenever CUDA is available. Maybe an argument to disable CUDA profiling entirely would help.
As far as I know, @richard worked on this feature. Maybe he can give his opinion on whether disabling CUDA here would be a good idea.
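In the meantime, a possible workaround (my assumption, not an official bottleneck flag) is to hide all GPUs from the process via CUDA_VISIBLE_DEVICES, so torch.cuda.is_available() returns False and the CUDA profiling path is never initialized:

```python
import os

# Hide every GPU from the CUDA runtime. This must happen before the
# runtime is initialized, so the safest place is the shell that launches
# bottleneck, e.g.:
#   CUDA_VISIBLE_DEVICES="" python -m torch.utils.bottleneck main.py --no-cuda
# ("main.py" is a placeholder for your MNIST script.)
os.environ["CUDA_VISIBLE_DEVICES"] = ""
```

With no visible devices, the "out of memory" check during bottleneck's environment analysis should be skipped entirely.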
