I am trying to measure CPU usage of the MNIST example on PyTorch 0.4.0 with the following command, but it fails with an out-of-memory error from the autograd profiler.
How can I avoid this issue?
$ python -m torch.utils.bottleneck main.py --no-cuda
===
Traceback (most recent call last):
  File "/opt/conda/lib/python3.6/site-packages/torch/utils/bottleneck/__main__.py", line 149, in run_prof
    exec(code, globs, None)
  File "main.py", line 110, in <module>
    main()
  File "main.py", line 105, in main
    train(args, model, device, train_loader, optimizer, epoch)
  File "main.py", line 29, in train
    for batch_idx, (data, target) in enumerate(train_loader):
  File "/opt/conda/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 264, in __next__
    batch = self.collate_fn([self.dataset[i] for i in indices])
  File "/opt/conda/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 264, in <listcomp>
    batch = self.collate_fn([self.dataset[i] for i in indices])
  File "/opt/conda/lib/python3.6/site-packages/torchvision/datasets/mnist.py", line 77, in __getitem__
    img = self.transform(img)
  File "/opt/conda/lib/python3.6/site-packages/torchvision/transforms/transforms.py", line 49, in __call__
    img = t(img)
  File "/opt/conda/lib/python3.6/site-packages/torchvision/transforms/transforms.py", line 143, in __call__
    return F.normalize(tensor, self.mean, self.std)
  File "/opt/conda/lib/python3.6/site-packages/torchvision/transforms/functional.py", line 167, in normalize
    for t, m, s in zip(tensor, mean, std):
  File "/opt/conda/lib/python3.6/site-packages/torch/tensor.py", line 361, in __iter__
    return iter(imap(lambda i: self[i], range(self.size(0))))
RuntimeError: /pytorch/torch/csrc/autograd/profiler.h:53: out of memory

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/opt/conda/lib/python3.6/runpy.py", line 193, in _run_module_as_main
    "__main__", mod_spec)
  File "/opt/conda/lib/python3.6/runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "/opt/conda/lib/python3.6/site-packages/torch/utils/bottleneck/__main__.py", line 280, in <module>
    main()
  File "/opt/conda/lib/python3.6/site-packages/torch/utils/bottleneck/__main__.py", line 261, in main
    autograd_prof_cpu, autograd_prof_cuda = run_autograd_prof(code, globs)
  File "/opt/conda/lib/python3.6/site-packages/torch/utils/bottleneck/__main__.py", line 155, in run_autograd_prof
    result.append(run_prof(use_cuda=True))
  File "/opt/conda/lib/python3.6/site-packages/torch/utils/bottleneck/__main__.py", line 149, in run_prof
    exec(code, globs, None)
  File "/opt/conda/lib/python3.6/site-packages/torch/autograd/profiler.py", line 191, in __exit__
    records = torch.autograd._disable_profiler()
RuntimeError: /pytorch/torch/csrc/autograd/profiler.h:53: out of memory
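From the traceback, the autograd profiler runs out of memory while recording events for the whole training run. As a workaround I have been trying to profile a much smaller CPU-only workload directly with `torch.autograd.profiler.profile` instead of going through `bottleneck` (this is just a sketch with a made-up toy workload, not the MNIST script itself):

```python
import torch

# Sketch: run the autograd profiler over a short CPU-only workload.
# The profiler keeps every recorded event in memory, so keeping the
# profiled region small avoids exhausting it.
with torch.autograd.profiler.profile() as prof:  # CPU-only by default
    x = torch.randn(64, 784)
    w = torch.randn(784, 10, requires_grad=True)
    loss = (x @ w).sum()
    loss.backward()

# key_averages() aggregates the recorded events per operator name.
print(prof.key_averages().table(sort_by="cpu_time_total"))
```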
References
- mnist example: https://github.com/pytorch/examples/blob/master/mnist/main.py
- bottleneck: https://pytorch.org/docs/stable/bottleneck.html