PyTorch tensors make PyCharm step-by-step debugging unbearably slow

As soon as I instantiate one or more tensors via torch.from_numpy(), step-by-step debugging becomes progressively slower. To give an idea, it can take more than 15-20 seconds to step over a simple line of code that doesn’t do anything special, after instantiating just a few CPU tensors of size 3x540x480.

Turning off the visualization of inline variables doesn’t help. Very similar code implemented in TensorFlow doesn’t cause the same debugging issues.

I am running PyTorch 0.2.0_3 + PyCharm 2017.2.3 (Community Edition) on Ubuntu 16.04 LTS.

Is anyone experiencing the same issue? Any idea on how to fix it?

Thanks


Does that also happen with tensors created directly by PyTorch (with zeros or ones, for example)?

Are GPU tensors also impacted?

Do you have a small code snippet we can use?

Can you check in top/htop whether there is high memory/CPU usage at the same time?

Hi Mamy,

It happens also with tensors created by PyTorch, whether they are CPU or GPU tensors.

To repro the issue you can step through the code below in PyCharm’s debugger:

import torch as np  # aliased as np so you can swap in `import numpy as np` for comparison

tensors = []
num_tensors = 16
shape = (1, 3, 512, 512)
for i in range(num_tensors):
    tensors.append(np.zeros(shape))

As one steps through the loop, debugging gets slower and slower.
It seems the debugger is trying to collect the tensor data in order to visualize it in the variables pane.
The more data we have the slower it gets…

Memory-wise I have 10GB+ available when I run it, so memory doesn’t seem to be the issue.
Instead, a single CPU core’s utilization goes above 90%, mostly spent running PyCharm’s own code: pycharm/helpers/pydev/pydevd.py
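That pydevd hotspot is consistent with how line-by-line debuggers work: pydevd installs a trace function via sys.settrace, so every executed line pays a callback cost on top of whatever the debugger then does with the variables. A minimal sketch of that baseline overhead (this is not PyCharm’s actual tracer, just an empty stand-in):

```python
import sys
import time

def busy_loop():
    total = 0
    for i in range(200_000):
        total += i
    return total

# Time the loop with no tracing active.
start = time.perf_counter()
busy_loop()
untraced = time.perf_counter() - start

# A do-nothing line tracer, like the hook a debugger installs.
def tracer(frame, event, arg):
    return tracer  # returning itself enables per-line tracing

sys.settrace(tracer)
start = time.perf_counter()
busy_loop()
traced = time.perf_counter() - start
sys.settrace(None)

print(f"untraced: {untraced:.4f}s, traced: {traced:.4f}s")
```

Even an empty tracer slows the loop down noticeably; a real debugger that also serializes variable values after each step pays much more.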

If we replace torch with numpy the problem goes away.
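For the record, this is the numpy version of the snippet; thanks to the alias in the repro code, only the import line changes:

```python
import numpy as np  # the repro used `import torch as np`; only this line differs

tensors = []
num_tensors = 16
shape = (1, 3, 512, 512)
for i in range(num_tensors):
    tensors.append(np.zeros(shape))

print(len(tensors), tensors[0].shape)
```

Stepping through this version in the debugger stays fast, which points the finger at how the debugger serializes torch tensors specifically rather than at large arrays in general.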

Let me know if you need more data and thanks for helping.

To verify it is not an OS-specific problem, I also tested the code above on a different machine running Windows 10 (using peterjc123’s conda package of PyTorch for Windows), and the issue persists.

Disabling all forms of code introspection and inline variables visualization in PyCharm doesn’t seem to help either.

I also verified the problem persists with Python 2.7 and Python 3.6, tested on both Ubuntu and Windows 10 :frowning:

Given how popular PyCharm is these days I am surprised this is not a known issue.

Any idea?

Thanks.

I have no idea at all, sorry :/. I don’t use PyCharm (I’m in the Jupyter/VSCode camp). I’m surprised disabling code introspection didn’t solve anything.

Also, others reported they use PyCharm without any issue: Python ide for pytorch

Thanks for trying to help.

I also submitted the same issue to the developers: https://intellij-support.jetbrains.com/hc/en-us/community/posts/115000620584-Step-by-step-debugging-is-very-slow-with-PyTorch-code

A temporary fix for the problem can be found here:
https://youtrack.jetbrains.com/issue/PY-12987


@deepgfx According to the kanban in https://youtrack.jetbrains.com/issue/PY-12987 and the release blog https://blog.jetbrains.com/pycharm/2017/10/pycharm-2017-3-eap-5/ , this issue has been fixed in 2017.3 EAP 5.
I’m downloading and testing it :wink:.

John,

Thanks for the update.

I’ve just tested this new release and it is a bit better, but debugging while instantiating PyTorch tensors is still much slower than with regular numpy arrays.

For whatever reason the debugger can collect data about the latter much more rapidly. Making the data collection asynchronous is a step in the right direction, but it doesn’t directly fix the issue with PyTorch tensors.


I wholeheartedly agree. It’s practically impossible to debug PyTorch code in PyCharm, especially with a remote configuration!


Thankfully, I don’t have this problem.

Have you tried PyTorch 0.3 with PyCharm 2017.3?

@antspy, this morning I tried upgrading to PyTorch 0.3 with PyCharm 2017.3. With a remote debugger configuration and big tensors, it fails to load them with the message “Timeout Exceed.”, whereas the exact same tensors as numpy arrays load and can be inspected just fine.

One more here with the same problem. Huge pain in the ass. Sometimes debugging is OK, but sometimes not, for example when working with Datasets. The issue seems to be the __repr__ of certain Variables or whatnot (but I guess this was known already).
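To see why an expensive __repr__ hurts so much: after every step the variables pane calls repr() on each visible variable, so any per-object formatting cost is multiplied by the number of variables in scope. A toy illustration (SlowRepr is a made-up class simulating an object with a costly repr, not anything from PyTorch):

```python
import time

class SlowRepr:
    """Simulates an object whose repr is expensive,
    like a large tensor formatted element by element."""
    def __repr__(self):
        time.sleep(0.05)  # pretend formatting takes 50 ms
        return "SlowRepr()"

# Sixteen such objects in scope, as in the repro snippet.
variables = {f"t{i}": SlowRepr() for i in range(16)}

start = time.perf_counter()
# Roughly what a debugger's variables pane does after each step:
previews = {name: repr(value) for name, value in variables.items()}
elapsed = time.perf_counter() - start
print(f"collected {len(previews)} previews in {elapsed:.2f}s")
```

At 50 ms per object, collecting previews for 16 variables already adds close to a second to every single step, which matches the kind of slowdown described in this thread.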

One clunky workaround: in the Variables tab, undock the Watches tab, then dock it on top of the Variables tab and make it active. Or better yet, hide the Variables window entirely. Then every time you need to inspect something, add it to Watches and delete it afterwards, or inspect it from the interactive console. This is clunky but better than nothing. The old pydevd_xml.py workaround doesn’t seem to work anymore.

Another workaround is to switch to VSCode :slight_smile:

An easy fix is to change the variable loading policy to On demand:

https://www.jetbrains.com/help/pycharm/variables-loading-policy.html

This fix works perfectly: https://stackoverflow.com/a/51833034/1334473
