I'm trying to convert my model with TorchScript, and I get the following warning:
```
TracerWarning: Output nr 1. of the traced function does not match the corresponding output of the Python function. Detailed error:
Not within tolerance rtol=1e-05 atol=1e-05 at input[4, 0, 35, 33] (0.406830757856369 vs. 0.40629732608795166) and 6114 other locations (18.00%)
  check_tolerance, _force_outplace, True, _module_class)
```
This is the code:

```python
model = torch.load("...\\Model\\version1\\train0\\gray\\ConvAuto750.pth")
model.eval()
model.cuda()
example = torch.rand(8, 1, 64, 64).cuda()
traced_script_module = torch.jit.trace(model, example)
```
I want to use a batch size greater than one to speed up inference.
The strange thing is that if I use the model with its default weight initialization, the warning does not appear, even with a batch size greater than one.
This is the code:

```python
model = simple_autoPrelu()
model.eval()
model.cuda()
example = torch.rand(8, 1, 64, 64).cuda()
traced_script_module = torch.jit.trace(model, example)
```
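For anyone who wants to reproduce the setup end to end, here is a self-contained sketch of what I am doing. `SimpleAutoPrelu` below is only a made-up stand-in, since I haven't posted the real `simple_autoPrelu` definition, and it falls back to CPU when no GPU is available:

```python
import torch
import torch.nn as nn

class SimpleAutoPrelu(nn.Module):
    """Hypothetical stand-in for simple_autoPrelu (real architecture not shown)."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 8, 3, stride=2, padding=1),  # 64x64 -> 32x32
            nn.PReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(8, 1, 3, stride=2, padding=1, output_padding=1),  # 32x32 -> 64x64
            nn.Sigmoid(),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

device = "cuda" if torch.cuda.is_available() else "cpu"
model = SimpleAutoPrelu().to(device).eval()
example = torch.rand(8, 1, 64, 64, device=device)
traced = torch.jit.trace(model, example)

# Compare eager vs traced outputs on a fresh batch
with torch.no_grad():
    x = torch.rand(8, 1, 64, 64, device=device)
    print(torch.allclose(model(x), traced(x), rtol=1e-5, atol=1e-5))
```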
Of course, the model that I load has the same architecture as the one above; the only difference is that I have trained it for 750 epochs.
If I change the batch size from 8 to 1, the warning does not appear.
Could someone explain what is happening, or suggest a way to resolve this warning?
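I did notice that `torch.jit.trace` accepts a `check_tolerance` argument (it also shows up in the traceback above). Relaxing it like this silences the check, but I don't know whether that just hides the symptom rather than fixing the root cause:

```python
import torch
import torch.nn as nn

# Toy model just to demonstrate the argument; not my real architecture.
model = nn.Sequential(nn.Conv2d(1, 4, 3, padding=1), nn.PReLU()).eval()
example = torch.rand(8, 1, 64, 64)

# check_tolerance relaxes the post-trace eager-vs-traced output comparison
# (the default rtol/atol is 1e-05, matching the numbers in the warning).
traced = torch.jit.trace(model, example, check_tolerance=1e-3)
```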