How do I normalize a 5-channel tensor in PyTorch?

I am trying to feed a 5-channel tensor (features extracted by a custom feature extractor) as input to a Faster R-CNN network to train an object detection model. However, I am facing an error when the input is normalized. My code is as follows:

import torch
from torch.nn import Conv2d
from torchvision.models.detection import fasterrcnn_resnet50_fpn
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

num_classes = 5
model = fasterrcnn_resnet50_fpn(pretrained=False)
in_features = model.roi_heads.box_predictor.cls_score.in_features
model.backbone.body.conv1 = Conv2d(5, 64, kernel_size=(7, 7), stride=(2, 2), padding=(3, 3), bias=False)  # to accept 5 input channels
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)

# For testing to see if the input is accepted by the network
img = torch.randn([1,5,100,200])
model.eval()
output = model(img)

I get the following error:

Traceback (most recent call last):
  File "temp.py", line 28, in <module>
    output = model(img)
  File "/home/jitesh/anaconda3/envs/pytorch_test_env/lib/python3.7/site-packages/torch/nn/modules/module.py", line 547, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/jitesh/anaconda3/envs/pytorch_test_env/lib/python3.7/site-packages/torchvision/models/detection/generalized_rcnn.py", line 47, in forward
    images, targets = self.transform(images, targets)
  File "/home/jitesh/anaconda3/envs/pytorch_test_env/lib/python3.7/site-packages/torch/nn/modules/module.py", line 547, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/jitesh/anaconda3/envs/pytorch_test_env/lib/python3.7/site-packages/torchvision/models/detection/transform.py", line 40, in forward
    image = self.normalize(image)
  File "/home/jitesh/anaconda3/envs/pytorch_test_env/lib/python3.7/site-packages/torchvision/models/detection/transform.py", line 55, in normalize
    return (image - mean[:, None, None]) / std[:, None, None]
RuntimeError: The size of tensor a (5) must match the size of tensor b (3) at non-singleton dimension 0

I understand that this happens because my input has 5 channels instead of 3, while the transform's per-channel mean and std each have only 3 values. So I went ahead and modified the normalize function in transform.py (line 55) to the following:

return (image - mean[:, None, None, None, None]) / std[:, None, None, None, None]
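Reproducing the shape arithmetic outside the model shows where the extra dimensions come from (a minimal sketch with illustrative values, not the exact transform code):

```python
import torch

# The transform hands normalize() a single 3D [C, H, W] image,
# not the batched 4D tensor I pass to the model.
image = torch.randn(5, 100, 200)            # 5-channel input
mean = torch.tensor([0.485, 0.456, 0.406])  # 3-channel mean

# mean[:, None, None, None, None] has shape [3, 1, 1, 1, 1]. Broadcasting
# it against the 3D image right-aligns dimensions and yields a 5D result:
out = image - mean[:, None, None, None, None]
assert out.shape == torch.Size([3, 1, 5, 100, 200])

# resize() then prepends a batch dimension with image[None], producing the
# 6D tensor that F.interpolate rejects for mode='bilinear':
assert out[None].dim() == 6
```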

This produced another error:

File "temp.py", line 28, in <module>
    output = model(img)
  File "/home/jitesh/anaconda3/envs/pytorch_test_env/lib/python3.7/site-packages/torch/nn/modules/module.py", line 547, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/jitesh/anaconda3/envs/pytorch_test_env/lib/python3.7/site-packages/torchvision/models/detection/generalized_rcnn.py", line 47, in forward
    images, targets = self.transform(images, targets)
  File "/home/jitesh/anaconda3/envs/pytorch_test_env/lib/python3.7/site-packages/torch/nn/modules/module.py", line 547, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/jitesh/anaconda3/envs/pytorch_test_env/lib/python3.7/site-packages/torchvision/models/detection/transform.py", line 41, in forward
    image, target = self.resize(image, target)
  File "/home/jitesh/anaconda3/envs/pytorch_test_env/lib/python3.7/site-packages/torchvision/models/detection/transform.py", line 70, in resize
    image[None], scale_factor=scale_factor, mode='bilinear', align_corners=False)[0]
  File "/home/jitesh/anaconda3/envs/pytorch_test_env/lib/python3.7/site-packages/torch/nn/functional.py", line 2517, in interpolate
    " (got {})".format(input.dim(), mode))
NotImplementedError: Input Error: Only 3D, 4D and 5D input Tensors supported (got 6D) for the modes: nearest | linear | bilinear | bicubic | trilinear (got bilinear)

Is there any workaround for this? Or is it acceptable to comment out the line image = self.normalize(image) in transform.py, since the input is not actually an image but a feature map that has already been extracted?