Best environment for deep learning on a local GPU workstation

Hi everyone,

I recently built a local workstation for deep learning research, and I’d like to fully utilize my GPU with PyTorch instead of using Google Colab.

Specs

  • CPU: Intel i9-11900K (8 cores, 11th Gen)
  • GPU: NVIDIA RTX 5070 (12 GB VRAM)
  • RAM: 128 GB DDR5
  • SSD: 2 TB NVMe
  • OS: Windows 11 / Ubuntu WSL2 (optional)

Goal
To train deep learning models (CNNs, Transformers) with PyTorch using full GPU acceleration.
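For reference, this is the kind of minimal check I'd run to confirm PyTorch actually sees the GPU and can do a forward/backward pass on it (just a sketch; the tiny model and tensor sizes are placeholders, not our real workload):

```python
# Minimal sanity check that PyTorch sees the GPU and can run a forward/backward pass.
import torch
import torch.nn as nn

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
print("Using device:", device)
if device.type == "cuda":
    print("GPU:", torch.cuda.get_device_name(0))

# Tiny placeholder model, just to confirm the GPU path works end to end.
model = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 10)).to(device)
x = torch.randn(32, 128, device=device)
y = torch.randint(0, 10, (32,), device=device)

loss = nn.functional.cross_entropy(model(x), y)
loss.backward()
print("Forward/backward pass OK, loss =", loss.item())
```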

Tools I’m considering

  • JupyterLab
  • VS Code
  • (others welcome)

Question
Which environment is most suitable for PyTorch GPU development on a local workstation?

(I’m a deep learning beginner, but my professor asked me to check this setup, so I’m trying to find the best environment before we start our experiments.)

Thanks in advance for any insights or setup recommendations!

You may want to elaborate on this.

You can train such models on consumer GPUs, but model size will always be limited by VRAM. If it’s for prototyping or simply for learning, your setup is probably fine. That said, you should definitely go for the 5070 Ti, as it comes with 16 GB of VRAM.

But again, particularly if you’re considering training or fine-tuning Transformer-based LLMs, even 16 GB of VRAM is likely to become a limiting factor very quickly.
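If you want a quick sense of how much headroom you have on a given card, something like this reports total vs. currently used VRAM straight from PyTorch (a rough sketch; device index 0 is an assumption, adjust if you have multiple GPUs):

```python
# Rough sketch: report total / allocated / reserved VRAM so you can see
# how close a given model is to the 12 GB or 16 GB ceiling.
import torch

if torch.cuda.is_available():
    props = torch.cuda.get_device_properties(0)  # assumes the GPU is device 0
    total_gb = props.total_memory / 1024**3
    allocated_gb = torch.cuda.memory_allocated(0) / 1024**3
    reserved_gb = torch.cuda.memory_reserved(0) / 1024**3
    print(f"{props.name}: {total_gb:.1f} GB total, "
          f"{allocated_gb:.2f} GB allocated, {reserved_gb:.2f} GB reserved")
else:
    print("No CUDA device visible to PyTorch")
```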

I’m a big fan of PyCharm. If you’re used to Windows interfaces, it’s very intuitive. Plus, the integrated debugger makes finding most oversights in your code a breeze.