GPU memory isn't freed when launching code from VS Code

The symptom looks like this: training starts and the first several iterations run fine, but GPU memory grows linearly until I finally hit an out-of-memory error. None of the usual memory-release techniques help: `del`, `detach()`, `empty_cache()`, etc. I initially thought it was because I was using an Intel GPU, which might not have been properly adapted to PyTorch. Finally, I tried running the script from PowerShell instead of VS Code, and everything ran fine!
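As an aside on why `del` alone often does nothing: it only removes one name, and the underlying object (and its memory) survives as long as any other reference holds it, such as a logging list that stores `loss` instead of `loss.item()`. This is a minimal pure-Python sketch of that mechanism, using a placeholder `Tensor` class rather than real PyTorch tensors:

```python
import gc
import weakref

class Tensor:
    """Stand-in for a GPU tensor (hypothetical placeholder class)."""
    pass

history = []       # e.g. a metrics list that silently keeps references
t = Tensor()
history.append(t)  # extra reference, like storing `loss` instead of `loss.item()`

ref = weakref.ref(t)
del t              # removes the name `t`, not the object itself
gc.collect()
print(ref() is not None)  # True: `history` still keeps the object alive

history.clear()    # drop the hidden reference
gc.collect()
print(ref() is None)      # True: only now can the memory actually be freed
```

The same logic applies to `torch.cuda.empty_cache()` (or its XPU counterpart on Intel GPUs): it can only return cached blocks that no live tensor still occupies, so it cannot help while stray references keep tensors alive. In my case, though, none of this was the cause; the launcher itself was.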