TypeError: 'TopLevelTracedModule' object is not iterable

I'm new to PyTorch programming. I have a model in Python whose checkpoint is saved in .ckpt format, and I want to save that model in the .pt format (to use in C++). The model takes two images as input, so I passed two images into torch::jit::trace along with the model. I get the following error; any help is appreciated. Thank you.

The model inference code in Python:

left = cv2.imread(args.left)
right = cv2.imread(args.right)

pairs = {'left': left, 'right': right}

transform = T.Compose([Normalize(mean, std), ToTensor(), Pad(384, 1248)])
pairs = transform(pairs)
left = pairs['left'].to(device).unsqueeze(0)
right = pairs['right'].to(device).unsqueeze(0)

model = PSMNet(args.maxdisp).to(device)
if len(device_ids) > 1:
    model = nn.DataParallel(model, device_ids=device_ids)

state = torch.load(args.model_path)
if len(device_ids) == 1:
    from collections import OrderedDict
    new_state_dict = OrderedDict()
    for k, v in state['state_dict'].items():
        namekey = k[7:] # remove `module.`
        new_state_dict[namekey] = v
    state['state_dict'] = new_state_dict

model.load_state_dict(state['state_dict'])
print('load model from {}'.format(args.model_path))
print('epoch: {}'.format(state['epoch']))
print('3px-error: {}%'.format(state['error']))

model.eval()

with torch.no_grad():
    _, _, disp = model(left, right)

The trace program I tried, to save the model in .pt format:

leftTest = torch.randn(3, 384, 1248).to(device).unsqueeze(0)
rightTest = torch.randn(3, 384, 1248).to(device).unsqueeze(0)

with torch.no_grad():
# error line 
    _, _, dispTest = torch.jit.trace(model, (leftTest, rightTest)) 

The forward method where the problem is:

def forward(self, left_img, right_img):

    original_size = [self.D, left_img.size(2), left_img.size(3)]
    left_cost = self.cost_net(left_img)  # [B, 32, 1/4H, 1/4W]
    right_cost = self.cost_net(right_img)  # [B, 32, 1/4H, 1/4W]
    # cost = torch.cat([left_cost, right_cost], dim=1)  # [B, 64, 1/4H, 1/4W]
    # B, C, H, W = cost.size()

    # print('left_cost')
    # print(left_cost[0, 0, :3, :3])

    B, C, H, W = left_cost.size()

    cost_volume = torch.zeros(B, C * 2, self.D // 4, H, W).type_as(left_cost)  # [B, 64, D, 1/4H, 1/4W]

    # for i in range(self.D // 4):
    #     cost_volume[:, :, i, :, i:] = cost[:, :, :, i:]

    for i in range(self.D // 4):
        if i > 0:
            cost_volume[:, :C, i, :, i:] = left_cost[:, :, :, i:] # use 32 
            cost_volume[:, C:, i, :, i:] = right_cost[:, :, :, :-i] # use 32
        else:
            # come at first
            cost_volume[:, :C, i, :, :] = left_cost # use 32
            cost_volume[:, C:, i, :, :] = right_cost

    disp1, disp2, disp3 = self.stackedhourglass(cost_volume, out_size=original_size)

    return disp1, disp2, disp3
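The TracerWarnings in the log below come from using values read from tensor shapes (`B, C, H, W = left_cost.size()`) as Python indices: the tracer bakes them into the graph as constants. For a fixed input size that is acceptable, and it can be made explicit by assigning plain integers. A minimal sketch, using toy sizes for illustration; for the 1 x 3 x 384 x 1248 input above, and assuming maxdisp = 192, the real values would be B, C, H, W, D4 = 1, 32, 96, 312, 48:

```python
import torch

# Toy sizes for illustration; substitute the real constants for
# your fixed input size (see lead-in above).
B, C, H, W, D4 = 1, 4, 6, 8, 3

left_cost = torch.randn(B, C, H, W)
right_cost = torch.randn(B, C, H, W)

# Same cost-volume construction as in forward(), but with B, C, H, W
# as plain Python ints, so the tracer records constants intentionally
# instead of silently freezing values read from tensor shapes.
cost_volume = torch.zeros(B, C * 2, D4, H, W).type_as(left_cost)
for i in range(D4):
    if i > 0:
        cost_volume[:, :C, i, :, i:] = left_cost[:, :, :, i:]
        cost_volume[:, C:, i, :, i:] = right_cost[:, :, :, :-i]
    else:
        cost_volume[:, :C, i, :, :] = left_cost
        cost_volume[:, C:, i, :, :] = right_cost
```

The caveat is the one the warnings state: with hard-coded sizes, the trace is only valid for inputs of exactly that shape.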

Error log that i get

/home/ven/.local/lib/python3.5/site-packages/torch/nn/functional.py:2539: UserWarning: Default upsampling behavior when mode=trilinear is changed to align_corners=False since 0.4.0. Please specify align_corners=True if the old behavior is desired. See the documentation of nn.Upsample for details.
  "See the documentation of nn.Upsample for details.".format(mode))
save diparity map in /home/ven/Downloads/PSMNet/depth.png
shape left ----> torch.Size([1, 3, 384, 1248])
shape right ----> torch.Size([1, 3, 384, 1248])
/home/ven/Downloads/PSMNet/models/PSMnet.py:42: TracerWarning: Converting a tensor to a Python index might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
  cost_volume[:, :C, i, :, :] = left_cost
/home/ven/Downloads/PSMNet/models/PSMnet.py:43: TracerWarning: Converting a tensor to a Python index might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
  cost_volume[:, C:, i, :, :] = right_cost
/home/ven/Downloads/PSMNet/models/PSMnet.py:39: TracerWarning: Converting a tensor to a Python index might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
  cost_volume[:, :C, i, :, i:] = left_cost[:, :, :, i:]
/home/ven/Downloads/PSMNet/models/PSMnet.py:40: TracerWarning: Converting a tensor to a Python index might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
  cost_volume[:, C:, i, :, i:] = right_cost[:, :, :, :-i]
/home/ven/.local/lib/python3.5/site-packages/torch/jit/__init__.py:702: TracerWarning: Output nr 3. of the traced function does not match the corresponding output of the Python function. Detailed error:
Not within tolerance rtol=1e-05 atol=1e-05 at input[0, 282, 783] (68.55914306640625 vs. 68.55826568603516) and 42 other locations (0.00%)
  _check_trace([example_inputs], func, executor_options, traced, check_tolerance, _force_outplace)
Traceback (most recent call last):
  File "/home/ven/Downloads/PSMNet/inference.py", line 118, in <module>
    main()
  File "/home/ven/Downloads/PSMNet/inference.py", line 85, in main
    _, _, disp = torch.jit.trace(model, (leftTest, rightTest))
TypeError: 'TopLevelTracedModule' object is not iterable

Solved. torch.jit.trace returns a traced module, not the model's outputs, so it cannot be unpacked with `_, _, dispTest = torch.jit.trace(model, (leftTest, rightTest))`; the traced module has to be called to get the outputs. I also assigned B, C, H, W their corresponding numerical values to address the tracer warnings.
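For anyone hitting the same TypeError, the pattern is: assign the result of torch.jit.trace to a variable, call that traced module for outputs, and use its .save() for the .pt file. A minimal sketch with a toy two-input module standing in for PSMNet (the net and names are illustrative):

```python
import torch
import torch.nn as nn

# Stand-in for PSMNet: any module that takes two images and
# returns three tensors (like disp1, disp2, disp3).
class TwoInputNet(nn.Module):
    def forward(self, left, right):
        d = (left + right).mean(dim=1)
        return d, d * 2, d * 3

model = TwoInputNet().eval()
leftTest = torch.randn(1, 3, 8, 8)
rightTest = torch.randn(1, 3, 8, 8)

with torch.no_grad():
    # trace() returns a traced ScriptModule, NOT the model's outputs
    traced = torch.jit.trace(model, (leftTest, rightTest))
    # call the traced module to get the actual outputs
    _, _, dispTest = traced(leftTest, rightTest)

# save for C++, where it can be loaded with torch::jit::load
traced.save("model.pt")
```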
