Memory leak with PyTorch DQN tutorial

Hi,
I tried to run the following example
https://pytorch.org/tutorials/intermediate/reinforcement_q_learning.html
I keep getting unexpected termination (on Google Colab, in a Jupyter notebook, and locally) after only 7 episodes. I looked further and I strongly suspect a memory leak: I checked, and this tutorial reaches about 3 GB of memory usage after just 7 episodes. I also tried using my GPU (GeForce GTX 1650 with CUDA 11.0), but no luck.
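To tell a genuine leak apart from normal memory growth, it helps to log allocations per episode. A minimal sketch using Python's standard-library `tracemalloc` (the growing `leak` list here is a hypothetical stand-in for tensors accidentally kept alive, e.g. in a replay buffer that stores graphs instead of detached tensors):

```python
import tracemalloc

tracemalloc.start()

leak = []        # stand-in for objects unintentionally retained across episodes
snapshots = []
for episode in range(3):
    # simulate per-episode allocations that are never released
    leak.extend(bytearray(1024) for _ in range(1000))
    current, _peak = tracemalloc.get_traced_memory()
    snapshots.append(current)
    print(f"episode {episode}: {current / 1024:.0f} KiB currently allocated")

# a true leak shows monotonic growth episode after episode
assert snapshots[0] < snapshots[1] < snapshots[2]
```

On the GPU side, `torch.cuda.memory_allocated()` can be logged the same way; a flat curve points to an out-of-memory crash rather than a leak.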

Here is the list of what I installed:
OS: Windows 10
python: 3.7.3
torch: 1.4.0+cu92
torchvision: 0.5.0
gym: 0.17.2
gym-minigrid: 1.0.1
numpy: 1.18.5

Has anyone encountered this? Sorry if this question is too trivial; I am still new to the PyTorch community and currently stuck on this tutorial. Hope someone can give me a hand :*(

It’s much more likely that you don’t have enough memory on your GPU to run a batch size of 128. Try changing to BATCH_SIZE = 10 and see if it works, then empirically find the highest batch size your system can support.
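That empirical search can be automated with a binary search over batch sizes. A hedged sketch, where `fake_step` is a hypothetical stand-in for running one training step; in practice the callable would execute one `optimize_model()` step and you would catch a CUDA out-of-memory `RuntimeError` instead of `MemoryError`:

```python
def find_max_batch_size(try_batch, upper=1024):
    """Binary-search the largest batch size that try_batch(n) accepts.

    try_batch(n) should run one training step with batch size n and
    raise MemoryError on failure (in real code: catch the CUDA OOM error).
    """
    lo, hi, best = 1, upper, 0
    while lo <= hi:
        mid = (lo + hi) // 2
        try:
            try_batch(mid)
            best = mid       # mid fits; try something larger
            lo = mid + 1
        except MemoryError:
            hi = mid - 1     # mid is too big; try something smaller
    return best

# Hypothetical stand-in: pretend the GPU can hold at most 96 samples.
def fake_step(n):
    if n > 96:
        raise MemoryError

print(find_max_batch_size(fake_step))  # → 96
```

Remember to free cached GPU memory (e.g. `torch.cuda.empty_cache()`) between attempts, or a failed large batch can make smaller ones fail too.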

Thanks @Alon. Yes, you are right. It is indeed a limitation of my GPU; the GTX 1650 does not have enough memory to run this tutorial at the default batch size.

I guess I will have to buy another one…