Error when loading pretrained layers in torchvision FasterRCNN

Hi, I’m trying to use the pretrained Faster R-CNN network provided in torchvision:
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(pretrained=True)
Doing
model.eval()
model([image_tensor])
works perfectly fine, but when I use nn.Sequential to stop at an intermediate layer, like this:
model2 = nn.Sequential(*list(model.children())[:-2])
so that the model runs only up to the FPN, I get an error when passing an image, as shown below:
TypeError: conv2d(): argument 'input' (position 1) must be Tensor, not tuple

I want this because I need the Region Proposal Network (RPN) output from Faster R-CNN to train another model. Thanks in advance for the help!

@fmassa can you help me with this?

@fmassa I have adapted code from https://github.com/pytorch/vision/blob/master/torchvision/models/detection/faster_rcnn.py and implemented
https://github.com/nithinraok/VisualQuestion_VQA/blob/master/common_resources/rpn_test_2.ipynb to get the RPN output. But unfortunately I am unable to get a Kx2048 feature vector; instead I am getting a 14x4 output. Can you please help me with this?