TracerWarning: There are 2 live references to the data region being modified when tracing in-place operator copy_

Hi there,

I am trying to convert a PyTorch model to ONNX and then optimize it with TensorRT. The conversion to ONNX succeeds and I have generated a TRT engine, but its output is completely useless. I suspect the ONNX export is the issue. I get a lot of TracerWarnings, most of which are of this sort:

TracerWarning: There are 2 live references to the data region being modified when tracing in-place operator copy_ (possibly due to an assignment). This might cause the trace to be incorrect, because all other views that also reference this data will not reflect this change in the trace! On the other hand, if all other views use the same memory chunk, but are disjoint (e.g. are outputs of torch.split), this might still be safe.
pred_boxes[:, :, 0::4] = pred_ctr_x - 0.5 * pred_w

From what I understand, the slicing and assigning is the issue here?
How would I go about fixing this? (Also, am I right to assume this is the issue?)

The rest of the warnings are of this sort:
TracerWarning: Converting a tensor to a Python float might cause the trace to be incorrect. We can’t record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!

which I don't think is the issue for now, because I am using inputs of the same shape.

I believe this should fix the copy_ (at assignment) warning:


There cannot be any in-place assignment, like:

pred[:, :, 0] = torch.sigmoid(pred[:, :, 0])

I think you need to rewrite it as:


pred = torch.cat((torch.sigmoid(pred[:, :, 0:1]), pred[:, :, 1:]), dim=2)


In short, slicing cannot appear on the left-hand side of "=".
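To make this concrete for the pred_boxes line from the warning: instead of assigning into strided slices (0::4, 1::4, ...), you can build the tensor out-of-place with torch.stack. The shapes and variable names below are assumptions for illustration (batch of N, A anchors, 4 coordinates per box), a minimal sketch rather than the exact model code:

```python
import torch

# Assumed shapes for illustration: (N, A) per-coordinate tensors.
N, A = 2, 8
pred_ctr_x = torch.rand(N, A)
pred_ctr_y = torch.rand(N, A)
pred_w = torch.rand(N, A) + 1.0
pred_h = torch.rand(N, A) + 1.0

# Compute each coordinate as its own tensor (no in-place writes)...
x1 = pred_ctr_x - 0.5 * pred_w
y1 = pred_ctr_y - 0.5 * pred_h
x2 = pred_ctr_x + 0.5 * pred_w
y2 = pred_ctr_y + 0.5 * pred_h

# ...then interleave them along a new last dimension, reproducing the
# (N, A, 4) layout that the 0::4 / 1::4 / 2::4 / 3::4 slice assignments built.
pred_boxes = torch.stack((x1, y1, x2, y2), dim=2)
print(pred_boxes.shape)  # torch.Size([2, 8, 4])
```

If the last dimension is actually 4*C (several boxes per anchor), you would stack along a new dimension and reshape back, but the idea is the same: create a fresh tensor instead of mutating a view.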

But how do I fix
TracerWarning: There are 2 live references to the data region being modified when tracing in-place operator clamp_. This might cause the trace to be incorrect, because all other views that also reference this data will not reflect this change in the trace! On the other hand, if all other views use the same memory chunk, but are disjoint (e.g. are outputs of torch.split), this might still be safe.
boxes[i,:,3::4].clamp_(0, im_shape[i, 0]-1)

Hi,

This is a limitation of ONNX, I guess: it does not support in-place operations.

You will need to use cat here as well, together with the out-of-place version, .clamp().
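A sketch of that rewrite for the boxes line from the warning: clamp each coordinate out-of-place and rebuild the tensor, instead of calling .clamp_() on a strided view. The tensor values and the im_h/im_w names here are assumptions for illustration:

```python
import torch

# Assumed (1, 2, 4) boxes in x1, y1, x2, y2 order, and an assumed image size.
boxes = torch.tensor([[[-5.0, 10.0, 200.0, 300.0],
                       [20.0, -3.0, 500.0, 120.0]]])
im_h, im_w = 240, 320

# Out-of-place .clamp() returns new tensors; nothing is mutated in place,
# so the tracer sees the data flow correctly.
x1 = boxes[:, :, 0].clamp(0, im_w - 1)
y1 = boxes[:, :, 1].clamp(0, im_h - 1)
x2 = boxes[:, :, 2].clamp(0, im_w - 1)
y2 = boxes[:, :, 3].clamp(0, im_h - 1)

# Reassemble a fresh tensor, replacing boxes[..., 3::4].clamp_(...)-style code.
boxes = torch.stack((x1, y1, x2, y2), dim=2)
```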


Thanks for the input. By non-inplace clamp, do you mean using the out keyword argument of torch.clamp?

I mean the function without the trailing _, which won't modify the input in place but returns a new Tensor containing the result. See the doc here.


Hey, I also ran into the same TracerWarnings as you, and I see you solved them. Could you tell me how you fixed this?