How can I solve my "out of GPU memory" problem during training?

I am trying to fine-tune the pre-trained maskrcnn_resnet50_fpn on my dataset. I first ran the whole code in Google Colab, and it works there because Colab provides enough GPU memory. My own machine only has a 6 GB NVIDIA RTX 2060, and when I run the code locally it raises an out-of-GPU-memory error during training. How can I solve this problem? I tried decreasing the batch size, but as you can see it is already 2 and that does not solve the problem.

Try using mixed precision; you need pytorch-nightly to import their amp module.

Thanks for the answer. I tried to apply your suggestion to this algorithm on Colab, but I could not get it to work. Can you guide me?
For example, where should I write the

optimizer.zero_grad()

line.

If you open the link I sent you and read the code, could you tell me what I should do?

Thanks in advance.

Looking at the code you are trying to run, optimizer.zero_grad() is probably located in engine/train_one_epoch, and you will also need to edit the forward pass in the actual training step.
See Automatic Mixed Precision examples — PyTorch 1.7.0 documentation for use case examples.
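Roughly, the AMP pattern from those docs applied to the detection training loop would look like the sketch below. This is only a sketch under the assumption that model, optimizer, data_loader and device already exist the way the tutorial sets them up; the loop body would go inside train_one_epoch.

```python
import torch
from torch.cuda.amp import autocast, GradScaler

scaler = GradScaler()  # create once, before the epoch loop

for images, targets in data_loader:
    images = [img.to(device) for img in images]
    targets = [{k: v.to(device) for k, v in t.items()} for t in targets]

    optimizer.zero_grad()

    # Forward pass (and loss computation) runs in mixed precision.
    with autocast():
        loss_dict = model(images, targets)
        losses = sum(loss for loss in loss_dict.values())

    # Scale the loss before backward, then step through the scaler
    # so gradients are unscaled correctly before the optimizer update.
    scaler.scale(losses).backward()
    scaler.step(optimizer)
    scaler.update()
```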

I want to point out that it might be difficult to enable AMP correctly if you are unfamiliar with the code you are trying to run, and even with AMP enabled it is not guaranteed that the model and data will fit into GPU memory.

Another course of action could therefore be to add a transform in get_transform that downscales the images you are training on. There you could append one of the resizing transforms from torchvision.transforms — PyTorch 1.7.0 documentation and experiment a bit with sizes. By downscaling the images you may be able to run the model on your graphics card; a rough sketch follows.
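For what it's worth, here is a minimal sketch of what such a downscaling transform could look like. The class name Resize and the (height, width) values are mine, and it assumes the tutorial's convention where each transform receives an (image, target) pair and the target holds "boxes" and "masks"; a plain torchvision.transforms.Resize would shrink the image but leave the annotations at their original scale.

```python
import torch
import torchvision.transforms.functional as F

class Resize:
    """Downscale a PIL image and rescale its boxes/masks to match."""

    def __init__(self, size):
        self.size = size  # (height, width)

    def __call__(self, image, target):
        orig_w, orig_h = image.size          # PIL images report (width, height)
        image = F.resize(image, self.size)
        new_h, new_w = self.size
        sx, sy = new_w / orig_w, new_h / orig_h

        # Scale the bounding boxes [x1, y1, x2, y2] by the same factors.
        target["boxes"] = target["boxes"] * torch.tensor([sx, sy, sx, sy])

        # Resize the instance masks with nearest-neighbour interpolation.
        if "masks" in target and target["masks"].numel() > 0:
            masks = target["masks"].unsqueeze(1).float()
            masks = torch.nn.functional.interpolate(masks, size=self.size)
            target["masks"] = masks.squeeze(1).byte()

        return image, target
```

You would insert it in get_transform before ToTensor (so the image is still a PIL image), e.g. something like Resize((480, 640)), and experiment with the size until training fits into your 6 GB.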