tmux and GPU memory occupancy issues

I used tmux new-session to create a tmux session and then ran my PyTorch code on the GPU with CUDA_VISIBLE_DEVICES=0 CUBLAS_WORKSPACE_CONFIG=:16:8 python. When I later checked the GPUs' status with nvidia-smi, I noticed that some GPU memory is still allocated to the tmux processes, even though they have finished running. If I press Ctrl+C inside a tmux session, the memory is freed; killing the tmux processes releases it as well. I don't understand what the issue is. Is there a way to guarantee that the GPU memory is fully released once a tmux process is finished?

Based on your description, it seems that tmux might be keeping the Python process alive for some reason.
Do you still see the python process via ps aux in the tmux session?

Thank you for the reply. Could you explain a bit further how I should check this? Should I go inside the tmux session and look for the PIDs?

Yes, that’s how I would check for running processes and see if something still uses GPU memory.
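To make the check concrete, here is a minimal sketch of what you could run (inside or outside the tmux session). The session name and PIDs are placeholders, not values from your setup:

```shell
#!/bin/sh
# Look for lingering python processes that should have finished.
# (The [p] trick stops grep from matching its own process line.)
ps aux | grep '[p]ython' || echo "no python processes found"

# If nvidia-smi is available, cross-check which PIDs still hold GPU memory:
if command -v nvidia-smi >/dev/null 2>&1; then
    nvidia-smi --query-compute-apps=pid,process_name,used_memory --format=csv
fi

# A finished run that still appears in both lists can be terminated manually:
#   kill <pid>        # SIGTERM first; lets the process release CUDA memory cleanly
#   kill -9 <pid>     # only as a last resort, if it ignores SIGTERM
```

If the python process shows up in both lists after your script should have exited, the process itself never terminated (e.g. a hanging DataLoader worker or a blocked shutdown), and killing that PID, rather than the tmux session, is the targeted fix.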