RuntimeError: CUDA out of memory when resizing images

I got this error:

  File "fin.py", line 80, in extract_features
    output = model(tensor_on_device)
  File "/home/user/.local/lib/python3.6/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/model.py", line 238, in forward
    x = self.forward_features(x)
  File "/model.py", line 229, in forward_features
    x = blk(x, H, W)
  File "/home/user/.local/lib/python3.6/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/model.py", line 97, in forward
    x = x + self.drop_path(self.mlp(self.norm2(x)))
  File "/home/user/.local/lib/python3.6/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/model.py", line 26, in forward
    x = self.fc1(x)
  File "/home/user/.local/lib/python3.6/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/home/user/.local/lib/python3.6/site-packages/torch/nn/modules/linear.py", line 94, in forward
    return F.linear(input, self.weight, self.bias)
  File "/home/user/.local/lib/python3.6/site-packages/torch/nn/functional.py", line 1753, in linear
    return torch._C._nn.linear(input, weight, bias)
RuntimeError: CUDA out of memory. Tried to allocate 2.00 MiB (GPU 0; 5.80 GiB total capacity; 4.12 GiB already allocated; 4.25 MiB free; 4.13 GiB reserved in total by PyTorch)

The problem seems related to the resize: every time I reduce the resize value, the code runs further, but it still stops before processing all of my images. I tried setting the resize to 32 and it only reached 2124 images. How can I solve this?

import torchvision.transforms as T

inference_transform = T.Compose(
    [
        T.Resize(64),
        T.ToTensor(),
        T.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
    ]
)

I set the model to run on CUDA, and I also tried the CPU, but the process got killed in the middle of the run.

I don't think the issue is caused by Resize; the out-of-memory error just shows up later when the shapes are smaller.

This indicates that your script is increasing its memory usage in each iteration, which could be caused by, e.g., storing the model output or loss in a list without detaching them.
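As a minimal sketch (assuming the features are collected for later use; model and loader are placeholders for your model and DataLoader), detaching the outputs and running the loop under torch.no_grad() avoids keeping the computation graph and GPU activations alive across iterations:

import torch

features = []
model.eval()
with torch.no_grad():  # no autograd graph is built, so activations are freed right away
    for images in loader:
        images = images.to("cuda")
        output = model(images)
        # keep only a detached copy on the CPU instead of the raw GPU output
        features.append(output.detach().cpu())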
Check the memory usage via nvidia-smi or torch.cuda.memory_summary() in each iteration and you will most likely see an increase in each step.
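For example, a quick way to log the allocated memory per iteration (loader is again a placeholder for your DataLoader):

import torch

for i, images in enumerate(loader):
    output = model(images.to("cuda"))
    # a steadily growing number here means tensors from previous iterations
    # are still referenced (e.g. outputs or losses stored without .detach())
    print(f"iter {i}: {torch.cuda.memory_allocated() / 1024**2:.1f} MiB allocated")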