Torchvision Object Detection Finetuning Tutorial

I am trying to adapt the PyTorch Object Detection Finetuning tutorial code to my own data. The link to the tutorial can be found here.

I use essentially the same dataset, data loader, and model architecture code as the tutorial. Unlike the tutorial, my images are JPEGs.

When I run the prewritten training loop, I get an error saying the PIL ‘Image’ object has no attribute ‘to’. Why am I getting this error, and how can I fix it? I have hardly modified the tutorial code.

---------------------------------------------------------------------------
AttributeError                            Traceback (most recent call last)
<ipython-input-71-1f6e682855f7> in <module>
      4 for epoch in range(num_epochs):
      5     # train for one epoch, printing every 10 iterations
----> 6     train_one_epoch(model, optimizer, data_loader, device, epoch, print_freq=10)
      7     # update the learning rate
      8     lr_scheduler.step()

~/Image Project/Code/pytorch/engine.py in train_one_epoch(model, optimizer, data_loader, device, epoch, print_freq)
     25 
     26     for images, targets in metric_logger.log_every(data_loader, print_freq, header):
---> 27         images = list(image.to(device) for image in images)
     28         targets = [{k: v.to(device) for k, v in t.items()} for t in targets]
     29 

~/Image Project/Code/pytorch/engine.py in <genexpr>(.0)
     25 
     26     for images, targets in metric_logger.log_every(data_loader, print_freq, header):
---> 27         images = list(image.to(device) for image in images)
     28         targets = [{k: v.to(device) for k, v in t.items()} for t in targets]
     29 

AttributeError: 'Image' object has no attribute 'to'
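
To make the failure concrete: .to(device) is a method defined on tensors, not on PIL Images, so it fails the moment the loop touches an image that was never converted. A minimal standalone sketch (not from the tutorial) illustrating the difference:

    from PIL import Image
    import torch

    img = Image.new("RGB", (64, 64))   # what a Dataset without transforms returns
    print(hasattr(img, "to"))          # False: PIL Images have no .to() method

    t = torch.zeros(3, 64, 64)         # what the training loop expects
    print(hasattr(t, "to"))            # True: tensors can be moved to a device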

The train_one_epoch() function from engine.py:


    import math
    import sys

    import utils  # helper module from the torchvision detection references


    def train_one_epoch(model, optimizer, data_loader, device, epoch, print_freq):
        model.train()
        metric_logger = utils.MetricLogger(delimiter="  ")
        metric_logger.add_meter('lr', utils.SmoothedValue(window_size=1, fmt='{value:.6f}'))
        header = 'Epoch: [{}]'.format(epoch)

        lr_scheduler = None
        if epoch == 0:
            warmup_factor = 1. / 1000
            warmup_iters = min(1000, len(data_loader) - 1)

            lr_scheduler = utils.warmup_lr_scheduler(optimizer, warmup_iters, warmup_factor)

        for images, targets in metric_logger.log_every(data_loader, print_freq, header):
            # This is where the AttributeError is raised: .to(device) exists
            # on tensors but not on PIL Images
            images = list(image.to(device) for image in images)
            targets = [{k: v.to(device) for k, v in t.items()} for t in targets]

            # in training mode, torchvision detection models return a dict of losses
            loss_dict = model(images, targets)

            losses = sum(loss for loss in loss_dict.values())

            # reduce losses over all GPUs for logging purposes
            loss_dict_reduced = utils.reduce_dict(loss_dict)
            losses_reduced = sum(loss for loss in loss_dict_reduced.values())

            loss_value = losses_reduced.item()

            if not math.isfinite(loss_value):
                print("Loss is {}, stopping training".format(loss_value))
                print(loss_dict_reduced)
                sys.exit(1)

            optimizer.zero_grad()
            losses.backward()
            optimizer.step()

            if lr_scheduler is not None:
                lr_scheduler.step()

            metric_logger.update(loss=losses_reduced, **loss_dict_reduced)
            metric_logger.update(lr=optimizer.param_groups[0]["lr"])
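
For context, the tutorial builds the data loader with the reference utils.collate_fn, which is why images and targets arrive in this loop as tuples of per-image objects rather than stacked batch tensors. A sketch of that setup, assuming the tutorial's names:

    import torch
    import utils  # torchvision detection reference helpers

    # dataset here is the detection Dataset built earlier in the tutorial.
    # collate_fn=utils.collate_fn keeps each batch as a tuple of images and a
    # tuple of target dicts, since detection images can have different sizes.
    data_loader = torch.utils.data.DataLoader(
        dataset, batch_size=2, shuffle=True, num_workers=4,
        collate_fn=utils.collate_fn)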

Hi, did you fix that?

I ran into the same issue and found the answer here:
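
In short: the Dataset is handing back raw PIL Images because no ToTensor transform is applied in __getitem__. The tutorial handles this with a get_transform helper built on the reference transforms module (references/detection/transforms.py); a sketch of that fix, assuming the version of the references that ships T.ToTensor:

    import transforms as T  # references/detection/transforms.py from the tutorial

    def get_transform(train):
        transforms = []
        # Converts the PIL Image into a torch.Tensor so image.to(device) works
        transforms.append(T.ToTensor())
        if train:
            # Flips the image together with its boxes/masks during training
            transforms.append(T.RandomHorizontalFlip(0.5))
        return T.Compose(transforms)

    # The Dataset must receive and actually apply these transforms, e.g. in
    # __getitem__:
    #     if self.transforms is not None:
    #         img, target = self.transforms(img, target)
    dataset = PennFudanDataset('PennFudanPed', get_transform(train=True))

PennFudanDataset is the tutorial's Dataset class; substitute your own. If the transforms are never passed in (or never applied inside __getitem__), the DataLoader yields PIL Images and image.to(device) raises exactly the AttributeError above.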