[error] torchvision object detection finetuning tutorial

I ran the code directly in a Colab environment. What I did was copy the suggested code from the tutorial and replace the `get_instance_segmentation_model` method with the copied code.

The following is the exact error message that I get:

/usr/local/lib/python3.6/dist-packages/torch/nn/functional.py:2854: UserWarning: The default behavior for interpolate/upsample with float scale_factor will change in 1.6.0 to align with other frameworks/libraries, and use scale_factor directly, instead of relying on the computed output size. If you wish to keep the old behavior, please set recompute_scale_factor=True. See the documentation of nn.Upsample for details. 
  warnings.warn("The default behavior for interpolate/upsample with float scale_factor will change "
/pytorch/torch/csrc/utils/python_arg_parser.cpp:756: UserWarning: This overload of nonzero is deprecated:
	nonzero(Tensor input, *, Tensor out)
Consider using one of the following signatures instead:
	nonzero(Tensor input, *, bool as_tuple)
---------------------------------------------------------------------------
IndexError                                Traceback (most recent call last)
<ipython-input-9-1ea96e94502e> in <module>()
      4 for epoch in range(num_epochs):
      5     # train for one epoch, printing every 10 iterations
----> 6     train_one_epoch(model, optimizer, data_loader, device, epoch, print_freq=10)
      7     # update the learning rate
      8     lr_scheduler.step()

7 frames
/usr/local/lib/python3.6/dist-packages/torchvision/ops/poolers.py in setup_scales(self, features, image_shapes)
    157         # get the levels in the feature map by leveraging the fact that the network always
    158         # downsamples by a factor of 2 at each level.
--> 159         lvl_min = -torch.log2(torch.tensor(scales[0], dtype=torch.float32)).item()
    160         lvl_max = -torch.log2(torch.tensor(scales[-1], dtype=torch.float32)).item()
    161         self.scales = scales

IndexError: list index out of range

I also recently found a similar question that was asked before.

It might be related to a version issue, but I would expect the tutorial code to at least run without errors on Colab.
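For what it's worth, the `IndexError` happens because `scales` ends up as an empty list. Below is a minimal pure-Python sketch (my own simplification, not torchvision's actual code) of how that can occur if the `featmap_names` passed to `MultiScaleRoIAlign` don't match the keys of the backbone's feature-map dict, e.g. integers versus strings after a torchvision version change:

```python
# Simplified sketch: MultiScaleRoIAlign keeps only the feature maps
# whose key appears in featmap_names, then computes one scale per
# surviving map. If no key matches, the scales list is empty and
# indexing scales[0] raises IndexError.
# Assumption (hypothetical illustration): recent torchvision backbones
# return an OrderedDict with *string* keys ('0', '1', ...).

from collections import OrderedDict

def filtered_scales(features, featmap_names):
    # Keep only the feature maps listed in featmap_names.
    selected = [v for k, v in features.items() if k in featmap_names]
    # Pretend each surviving level is downsampled by a factor of 2.
    return [1.0 / (2 ** i) for i, _ in enumerate(selected)]

features = OrderedDict([('0', 'feat0'), ('1', 'feat1')])  # string keys

print(filtered_scales(features, ['0', '1']))  # matching string names -> [1.0, 0.5]
print(filtered_scales(features, [0, 1]))      # integer names match nothing -> []
```

In the second call, `scales[0]` on the empty result would raise exactly the `IndexError: list index out of range` shown in the traceback, which is why I suspect a version mismatch in how the feature maps are named.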