PyTorch 0.4 takes up lots of memory but 0.3 does not

I’m trying to convert a model that works on PyTorch 0.3 to 0.4.1. However, it seems to use much more memory in 0.4.1, both during training and evaluation.

This is the code:

As I step through the code, memory usage surges at around line 1636:

    [p2_out, p3_out, p4_out, p5_out, p6_out] = self.fpn(molded_images)

    # Note that P6 is used in RPN, but not in the classifier heads.
    rpn_feature_maps = [p2_out, p3_out, p4_out, p5_out, p6_out]
    mrcnn_feature_maps = [p2_out, p3_out, p4_out, p5_out]
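
If it helps pin this down, the surge can be quantified around that call with something like the following (a rough sketch, assuming the model runs on the GPU; `torch.cuda.memory_allocated()` is available in 0.4):

    import torch

    # Measure GPU memory held by tensors before and after the FPN forward pass.
    before = torch.cuda.memory_allocated()

    [p2_out, p3_out, p4_out, p5_out, p6_out] = self.fpn(molded_images)

    after = torch.cuda.memory_allocated()
    print('FPN forward allocated {:.1f} MB'.format((after - before) / 1024 ** 2))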

For PyTorch 0.4 I’ve replaced `F.upsample()` with `F.interpolate()`; the change is sketched below. Does anyone know what causes this memory usage surge?
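
The replacement looks roughly like this (the tensor name and arguments here are placeholders, not the exact values from the model):

    import torch.nn.functional as F

    # PyTorch 0.3 style (deprecated in favor of interpolate in 0.4.1):
    # p4_up = F.upsample(p5_out, scale_factor=2, mode='nearest')

    # PyTorch 0.4.1 replacement with the same arguments:
    p4_up = F.interpolate(p5_out, scale_factor=2, mode='nearest')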