CUDA getting out of memory

After training the model, I tried to test it by running inference on a single image, but the Google Colab GPU runs out of memory and I don't know why. Below is the code:
```python
import cv2
import torch
from PIL import Image
from torchvision.transforms import ToTensor

model = Yolov1(split_size=7, num_boxes=2, num_classes=2).to(device)
model.load_state_dict(torch.load("/content/model.pth"))
model.eval()

# cv2.imread's second argument is a read flag, not a color-conversion code,
# so convert BGR -> RGB with cvtColor after reading
img = cv2.imread("/content/1589731926566.jpg")
img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
img = cv2.resize(img, (448, 448))

image = Image.open("/content/1589731926566.jpg")
image = image.resize((448, 448))  # PIL's resize returns a new image; reassign it
image = ToTensor()(image)
image = image.unsqueeze(0)
image = image.to(device)
```

Here `img` is for plotting bounding boxes and `image` is the input to the model.

The GPU runs out of memory at this line:

```python
y = model(image)
```

Any suggestions would be appreciated.

P.S. Is there a shortcut to convert a whole block of text to code formatting at once, rather than line by line?

Thank You

Do you know how much memory your instance has? It could be that there isn’t enough memory to run the model while keeping intermediate activations in memory.
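If you're not sure, you can query the free and total device memory from PyTorch directly (a quick check, assuming a reasonably recent PyTorch; `!nvidia-smi` in a Colab cell works too):

```python
import torch

if torch.cuda.is_available():
    # Free and total memory on the current CUDA device, in bytes
    free, total = torch.cuda.mem_get_info()
    print(f"free: {free / 1e9:.2f} GB / total: {total / 1e9:.2f} GB")
    # Memory currently occupied by tensors allocated through PyTorch
    print(f"allocated by tensors: {torch.cuda.memory_allocated() / 1e9:.2f} GB")
else:
    print("no CUDA device visible")
```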

If you are just running inference for now, you could try wrapping the model execution in `torch.no_grad()`, e.g.:

```python
with torch.no_grad():
    y = model(image)
```

which should hopefully save some memory as intermediate activations will no longer be kept in memory.
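As a quick sanity check, you can verify that a forward pass under `torch.no_grad()` produces an output with no autograd graph attached, which is exactly what lets those activations be freed (a minimal sketch using a tiny stand-in model, not the actual `Yolov1`):

```python
import torch
import torch.nn as nn

# Tiny stand-in model for illustration only (not the Yolov1 from the question)
model = nn.Sequential(
    nn.Conv2d(3, 8, kernel_size=3),  # (1, 3, 32, 32) -> (1, 8, 30, 30)
    nn.ReLU(),
    nn.Flatten(),                    # -> (1, 8 * 30 * 30)
    nn.Linear(8 * 30 * 30, 10),
)
model.eval()

x = torch.randn(1, 3, 32, 32)

# Regular forward pass: autograd records the graph, keeping
# intermediate activations alive for a potential backward()
y_graph = model(x)
print(y_graph.grad_fn is not None)  # True: graph (and activations) retained

# Forward pass under no_grad: no graph is built, so the
# intermediate activations can be freed immediately
with torch.no_grad():
    y_free = model(x)
print(y_free.grad_fn)  # None: nothing retained for backward
```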

You can use three backticks (```) at the beginning and end of a code block to format it.

Thank you very much! It worked

Thanks for the help :innocent: :innocent: