How to hold GPU memory?

Hi,
I am sharing GPUs with other people. During training my batch size is small and takes about 1 GB of GPU memory, but during validation the batch size is large and takes about 5 GB. The problem is that while my task is training, other people may use the remaining memory, so it runs out of memory when it reaches validation.
Is it possible to reserve a fixed amount of GPU memory for my task for the whole run, i.e. keep 5 GB allocated the entire time?

Thank you.

Hi,

This has already been answered here, for example.
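For reference, here is a minimal sketch of the usual workaround, assuming you are using PyTorch: allocate a placeholder tensor sized to your peak (validation) footprint at the start of the run, then delete it without calling `torch.cuda.empty_cache()`. The caching allocator keeps the freed block in its own pool, so other processes cannot claim it, while your later validation-time allocations can reuse it. The helper name `reserve_gpu_memory` and the 5 GB figure are just for illustration, not an official API.

```python
import torch

def reserve_gpu_memory(gigabytes: float, device: str = "cuda") -> None:
    """Claim roughly `gigabytes` of GPU memory up front.

    The placeholder tensor is deleted immediately, but PyTorch's caching
    allocator keeps the freed block in its pool (as long as you do not call
    torch.cuda.empty_cache()), so the memory stays reserved for this process.
    """
    n_bytes = int(gigabytes * 1024 ** 3)
    # float32 elements are 4 bytes each.
    placeholder = torch.empty(n_bytes // 4, dtype=torch.float32, device=device)
    del placeholder
    # Do NOT call torch.cuda.empty_cache() here -- that would hand the
    # memory back to the driver and other users could grab it again.

if __name__ == "__main__":
    # ~5 GB was the peak (validation) requirement mentioned in the question.
    reserve_gpu_memory(5)
    # ... training / validation loop goes here ...
```

Note that `nvidia-smi` will show the memory as used by your process the whole time, which is exactly the point; the trade-off is that it is unavailable to others even while training only needs about 1 GB.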

Wonderful! Thank you very much.