GPU memory utilization is low

Hi, I am running some fine-tuning code on a Titan X (Pascal) with CUDA 8.0 and cuDNN 5.1, and I set cudnn.benchmark = True in the code. When I check the GPU with nvidia-smi, it reports only about 1 GB of GPU memory in use. How can I make use of more GPU memory? I am using Python 2.7.5; I do not think switching to Python 3 would help, but any suggestion is welcome.

P.S. Is there a mechanism in PyTorch to set the fraction of GPU memory I want to allocate, like there is in TensorFlow?

Unlike TensorFlow, PyTorch only allocates as much GPU memory as it actually needs, so low usage in nvidia-smi is expected and not a problem. If you want to make use of more memory, increase the batch size.
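To see why batch size is the main knob, here is a rough back-of-the-envelope sketch (pure Python, hypothetical ImageNet-style 3x224x224 float32 inputs; the helper name and shapes are my own assumptions, not from this thread) showing how the memory footprint of just the input tensor scales linearly with batch size:

```python
def batch_bytes(batch_size, channels=3, height=224, width=224, bytes_per_elem=4):
    # Memory for one input batch: N * C * H * W elements, 4 bytes each (float32).
    # Activations and gradients scale the same way, so total usage grows
    # roughly linearly with batch size.
    return batch_size * channels * height * width * bytes_per_elem

for bs in (16, 64, 256):
    mib = batch_bytes(bs) / float(1024 ** 2)
    print("batch_size={}: ~{:.1f} MiB for the input tensor alone".format(bs, mib))
```

Activations, gradients, and optimizer state add on top of this, which is why quadrupling the batch size can quickly fill the remaining memory.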

Thank you so much. That makes sense; I was running into OOM errors before, somehow.