Question about torch.utils.bottleneck with --no-cuda on MNIST sample

I guess it’s due to the CUDA checks in bottleneck.py.
torch.cuda.init() seems to be called whenever CUDA is available, regardless of whether your script itself disables CUDA via --no-cuda. An argument to disable CUDA profiling entirely might help here.
As far as I know, @richard worked on this feature. Maybe he can give his opinion on whether disabling CUDA profiling would be a good idea.
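
In the meantime, a possible workaround (just a sketch, not an official fix) is to hide all GPUs from the process via the `CUDA_VISIBLE_DEVICES` environment variable, so that `torch.cuda.is_available()` returns `False` inside bottleneck and the CUDA profiling path is skipped. The script name `main.py` below is a placeholder for your MNIST script:

```python
import os
import sys

# Assumption: with CUDA_VISIBLE_DEVICES set to an empty string,
# torch.cuda.is_available() returns False, so bottleneck should
# skip its CUDA profiling pass entirely.
env = dict(os.environ, CUDA_VISIBLE_DEVICES="")

# Launch bottleneck in a child process with the masked environment.
# (Uncomment to actually run; "main.py" is a placeholder script name.)
# import subprocess
# subprocess.run(
#     [sys.executable, "-m", "torch.utils.bottleneck", "main.py", "--no-cuda"],
#     env=env,
#     check=True,
# )

print(env["CUDA_VISIBLE_DEVICES"])
```

Equivalently, from a shell you could just prefix the command: `CUDA_VISIBLE_DEVICES="" python -m torch.utils.bottleneck main.py --no-cuda`.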
