Understanding memory usage

Hello all,

I’ve been using PyTorch for a while and I’m wondering about memory usage.
The setup is simple: one Windows machine with one GPU.
When I train a model, I see that RAM and GPU usage increase, but neither comes close to its full potential. GPU utilization doesn’t exceed 10%, for example.

To speed up training, would it make sense to push the hardware closer to its limit? And if so, how?
Is it just a matter of increasing the batch size?
Are there any PyTorch settings that control this? To make the question concrete, I’ve put a small sketch of the kind of knobs I mean below.
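
Here is a minimal sketch of the settings I’m asking about. The values for `batch_size`, `num_workers`, and `pin_memory` are just placeholders I’ve seen suggested elsewhere, and the dataset and model are dummy stand-ins for my real code:

```python
import torch
from torch.utils.data import DataLoader, TensorDataset


def main():
    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

    # Dummy tensors standing in for my real dataset
    dataset = TensorDataset(
        torch.randn(10_000, 128),
        torch.randint(0, 10, (10_000,)),
    )

    # These are the kinds of settings I mean:
    loader = DataLoader(
        dataset,
        batch_size=256,    # would raising this push GPU utilization higher?
        num_workers=4,     # more CPU workers to keep the GPU fed?
        pin_memory=True,   # faster host-to-GPU transfers?
    )

    model = torch.nn.Linear(128, 10).to(device)

    for inputs, targets in loader:
        inputs = inputs.to(device, non_blocking=True)
        targets = targets.to(device, non_blocking=True)
        _ = model(inputs)  # forward pass only, for illustration


if __name__ == "__main__":  # this guard is required on Windows when num_workers > 0
    main()
```

Are these the right knobs to turn, or is there something else I’m missing?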

This is my first post, so if there are any formatting issues with the question, please let me know.

Best