Torchvision SSD how to put data on GPU?

Hi,

I am trying to use the SSD implementation in torchvision.

As a test I wanted to use the following function from line 522:

def ssd300_vgg16(pretrained: bool = False, progress: bool = True, num_classes: int = 91,
                 pretrained_backbone: bool = True, trainable_backbone_layers: Optional[int] = None, **kwargs: Any):

I have images of different sizes, so I am creating a list of tensors for the images.

The SSD.py documentation states on line 108:

During training, the model expects both the input tensors, as well as a targets (list of dictionary),
    containing:
        - boxes (``FloatTensor[N, 4]``): the ground-truth boxes in ``[x1, y1, x2, y2]`` format, with
          ``0 <= x1 < x2 <= W`` and ``0 <= y1 < y2 <= H``.
        - labels (Int64Tensor[N]): the class label for each ground-truth box
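
For context, here is roughly what my data looks like on the CPU at the moment (shapes and values below are made-up placeholders, not my real dataset):

import torch

# Variable-sized images as a plain Python list of CHW tensors (dummy sizes).
images = [torch.rand(3, 480, 640), torch.rand(3, 300, 400)]

# One target dict per image: boxes in [x1, y1, x2, y2] format, labels as int64.
targets = [
    {"boxes": torch.tensor([[10.0, 20.0, 100.0, 200.0]]),
     "labels": torch.tensor([1], dtype=torch.int64)}
    for _ in images
]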

I had a couple of questions I was hoping someone could help with:

  • How do I put the variable-sized images, which are currently in a list of tensors, on the GPU?
  • The function expects a list of dictionaries containing bboxes and labels… how do I put these on the GPU?

Thank you… any help would be super appreciated.

Hey @ekmungi

You can move your tensors to the GPU before packing them in a list or dict, e.g.:
input_list = [img.cuda() for img, _ in loader]

The TorchVision Object Detection Finetuning Tutorial is a good reference for creating your dataset and loader. When packing inputs for your model, use:

images = list(image.cuda() for image in images)
targets = [{k: v.cuda() for k, v in t.items()} for t in targets]
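
Putting it together, here is a minimal sketch of one training step on the GPU with ssd300_vgg16. The data here is dummy data made up for illustration; swap in your own images and targets from your loader:

import torch
import torchvision

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Build the model and move it to the same device as the data.
model = torchvision.models.detection.ssd300_vgg16(pretrained=False)
model.to(device)
model.train()

# Dummy variable-sized images and matching targets (made-up values).
images = [torch.rand(3, 480, 640), torch.rand(3, 300, 400)]
targets = [
    {"boxes": torch.tensor([[10.0, 20.0, 100.0, 200.0]]),
     "labels": torch.tensor([1], dtype=torch.int64)},
    {"boxes": torch.tensor([[30.0, 40.0, 150.0, 250.0]]),
     "labels": torch.tensor([2], dtype=torch.int64)},
]

# Move every tensor to the GPU; the list and dict containers stay on the CPU.
images = [image.to(device) for image in images]
targets = [{k: v.to(device) for k, v in t.items()} for t in targets]

# In training mode the model returns a dict of losses.
loss_dict = model(images, targets)
loss = sum(loss_dict.values())
loss.backward()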

Hope this helps!

Thank you, Suraj. That worked for me.
The link was very useful. I have an idea on how to modify my dataset :+1:

Anant.