GPU and CPU Memory Allocation When Loading a Model on CUDA

When I load a PyTorch model onto the GPU, I see about 4 GB of CPU RAM usage in addition to the roughly 2.5 GB the process occupies in GPU memory. I have never understood why. My code and the memory information from nvidia-smi are below.

# Import Libraries
import torch
import os
from torchvision.models.segmentation import deeplabv3_resnet50
from flask import Flask
import torch.backends.cudnn as cudnn


os.environ['LRU_CACHE_CAPACITY'] = '1'

# Initialize Flask
app = Flask(__name__)

device = 'cuda' if torch.cuda.is_available() else 'cpu'

def load_deskew_model(num_classes=2, device='cuda', model_path=None):
    model = deeplabv3_resnet50(num_classes=num_classes, aux_loss=True)
    model.to(device)  # move the freshly built model to the GPU
    checkpoint_path = os.path.join(os.getcwd(), model_path)
    # map every tensor in the checkpoint directly onto GPU 0
    checkpoints = torch.load(checkpoint_path, map_location=lambda storage, loc: storage.cuda(0))
    model.load_state_dict(checkpoints, strict=False)
    return model
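
This is roughly how I measured the usage, a minimal sketch (psutil is an extra dependency, and 'deskew_model.pth' is a placeholder for my checkpoint file). Note that RSS is process-wide, so it also counts the interpreter and loaded libraries:

import psutil
import torch

process = psutil.Process()

rss_before = process.memory_info().rss
model = load_deskew_model(model_path='deskew_model.pth')  # placeholder checkpoint name
rss_after = process.memory_info().rss

print(f'CPU RSS delta: {(rss_after - rss_before) / 1024**2:.0f} MiB')
print(f'GPU allocated: {torch.cuda.memory_allocated(0) / 1024**2:.0f} MiB')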

+-----------------------------------------------------------------------------+
| NVIDIA-SMI 470.129.06   Driver Version: 470.129.06   CUDA Version: 11.4     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                               |                      |               MIG M. |
|===============================+======================+======================|
|   0  NVIDIA GeForce …    Off  | 00000000:01:00.0 Off |                  N/A |
| 33%   29C    P2    44W / 220W |   2242MiB /  7982MiB |      0%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                                  |
|  GPU   GI   CI        PID   Type   Process name                  GPU Memory |
|        ID   ID                                                   Usage      |
|=============================================================================|
|    0   N/A  N/A      1087      G   /usr/lib/xorg/Xorg                  9MiB |
|    0   N/A  N/A      1249      G   /usr/bin/gnome-shell                3MiB |
|    0   N/A  N/A   3622956      C   python3                          2225MiB |
+-----------------------------------------------------------------------------+
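
Could the map_location lambda be part of the reason? As far as I understand, torch.load first reads the checkpoint's storages into host memory and only then maps them onto cuda:0, and the model has already been moved to the GPU before load_state_dict, so the weights briefly exist twice. For comparison, here is a variant sketch that deserializes to CPU only and moves the model once (same placeholder arguments as above):

def load_deskew_model_cpu_first(num_classes=2, device='cuda', model_path=None):
    model = deeplabv3_resnet50(num_classes=num_classes, aux_loss=True)
    # Deserialize the checkpoint into CPU memory only.
    checkpoint_path = os.path.join(os.getcwd(), model_path)
    checkpoints = torch.load(checkpoint_path, map_location='cpu')
    model.load_state_dict(checkpoints, strict=False)
    del checkpoints  # drop the host-memory copy of the weights
    # Move the single remaining copy of the weights to the GPU.
    model.to(device)
    return model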