My machine runs Ubuntu 24 with an RTX 5070, and I'm using the latest PyTorch 2.7 with CUDA 12.8. PyCharm keeps showing 'running out of memory' warnings. I've increased the Xmx heap setting to a larger value, but PyCharm's memory usage keeps growing, reaching up to 8 GB before the IDE freezes. I also access PyTorch on the Ubuntu machine remotely over SSH from a Windows device, but I still hit this issue. In an environment without this PyTorch version installed, the problem doesn't occur.
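For reference, this is roughly what I changed in PyCharm's VM options file (Help > Edit Custom VM Options); the exact values below are just the ones I tried, not a recommendation:

```
-Xms512m
-Xmx4096m
-XX:ReservedCodeCacheSize=512m
```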
I assume you are running out of memory on the host, so I would recommend trying to isolate whether this memory usage is expected. I also don't entirely understand the last statement, since you won't be able to use your GPU at all without a PyTorch build with CUDA 12.8.
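Something like this sketch (assuming `psutil` is installed) can help confirm whether your Python process is the one consuming host memory, as opposed to the IDE itself. Run it from a plain terminal and compare against what you see inside PyCharm:

```python
# Sketch: print the resident host memory of the current Python process
# and the CUDA memory PyTorch has allocated on the device.
import os

import psutil
import torch

proc = psutil.Process(os.getpid())
print(f"host RSS: {proc.memory_info().rss / 1024**2:.1f} MiB")
if torch.cuda.is_available():
    print(f"CUDA allocated: {torch.cuda.memory_allocated() / 1024**2:.1f} MiB")
```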
What I mean is that when running the same code on my Windows laptop's GPU, this issue doesn't occur. My laptop GPU is a GTX 1650, and the code runs smoothly in only about 2.5 GB of memory. However, with PyTorch 2.7 on Ubuntu, memory usage climbs rapidly. Even when I'm not running any code, just editing it makes the memory grow. CPU usage also stays consistently high, as if PyCharm were indexing a large number of files. I've examined the log files and found that many dynamic-link references have failed. I've also tried disabling all plugins, but that had no effect.
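One thing I checked, with a quick sketch of my own (just a directory walk, nothing torch-specific): how much the installed torch package weighs on disk, since an unusually large site-packages could explain the indexing load:

```python
# Sketch: measure the on-disk size of the installed torch package.
# The CUDA builds bundle large shared libraries that an IDE indexer
# may try to scan.
import os

import torch

pkg_dir = os.path.dirname(torch.__file__)
total_bytes = 0
for root, _dirs, files in os.walk(pkg_dir):
    for name in files:
        path = os.path.join(root, name)
        if os.path.isfile(path):  # skip broken symlinks
            total_bytes += os.path.getsize(path)
print(f"{pkg_dir}: {total_bytes / 1024**3:.2f} GiB")
```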
This sounds like an issue caused by your IDE. Do you see any of these problems when you execute your script in a terminal?
No, I don’t have this issue when using the terminal. But I don’t know what’s wrong with the IDE.