ResNet model does not use as much GPU memory as expected

I’m using torchvision.models.resnet50 for an experiment, but I found that the model uses less GPU memory than I expected.
The following is my code:

    import torch
    import torchvision
    from torchsummary import summary
    import numpy as np

    model = torchvision.models.resnet50(pretrained=True)
    model = model.cuda()
    summary(model, (3, 448, 448), 24)
    input = torch.tensor(np.random.random((24, 3, 448, 448)), dtype=torch.float32)
    input = input.cuda()
    while True:
        model(input)

When I run this code, the summary function tells me that the total memory to be used is about 27 GB, including the parameters and the forward/backward pass size. I think this is close to the expected size given ResNet-50’s architecture.
But when the model was moved to the GPU and training ran, I found that GPU memory usage stayed at only about 9 GB.
Is this normal? I’m worried that the GPU is not doing all of the computation for my training job.
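For reference, here is a small sketch of how the actual usage could be checked from inside the script, assuming the `model` and `input` defined above (`report_cuda_memory` is just an illustrative helper; `torch.cuda.memory_allocated`/`memory_reserved` report the caching allocator’s counters, which can differ from what nvidia-smi shows, since nvidia-smi also includes the CUDA context):

    import torch

    # Illustrative helper: print PyTorch's caching-allocator counters in GiB.
    # memory_allocated()      -> bytes currently held by live tensors
    # memory_reserved()       -> bytes reserved by the caching allocator
    # max_memory_allocated()  -> peak tensor allocation since the last reset
    def report_cuda_memory(tag):
        gib = 1024 ** 3
        print(f"{tag}: allocated {torch.cuda.memory_allocated() / gib:.2f} GiB, "
              f"reserved {torch.cuda.memory_reserved() / gib:.2f} GiB, "
              f"peak {torch.cuda.max_memory_allocated() / gib:.2f} GiB")

    torch.cuda.reset_peak_memory_stats()
    output = model(input)  # one forward pass with the 24 x 3 x 448 x 448 batch from above
    report_cuda_memory("after forward")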
Could someone give me some help? Thanks!

Hi, have you found a solution? I have a similar question to yours: the expected size and the actual size are not consistent, and the latter is almost half of the former.