Convert TSM PyTorch model to ONNX

I want to convert the TSM model (https://github.com/mit-han-lab/temporal-shift-module) to ONNX. When I run the conversion, it prints the following warnings:
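For context, the export call looks roughly like this; the class count, segment count, input size, and opset below are placeholders, not necessarily the exact values I use:

    import torch
    from ops.models import TSN  # the TSM repo's model wrapper

    # Assumed configuration: 400 classes (Kinetics), ResNet-50 backbone,
    # 8 segments, TSM enabled.
    model = TSN(num_class=400, num_segments=8, modality='RGB',
                base_model='resnet50', is_shift=True, shift_div=8)
    model.eval()

    # Assumed input layout: (N, T*C, H, W); TSN reshapes it to
    # (N*T, C, H, W) internally before running the backbone.
    dummy_input = torch.randn(1, 8 * 3, 224, 224)

    torch.onnx.export(model, dummy_input, "tsm.onnx",
                      input_names=["input"], output_names=["output"],
                      opset_version=11)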

TracerWarning: Converting a tensor to a Python index might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
  out[:, :-1, :fold] = x[:, 1:, :fold]  # shift left
/Users/zhanghongxing/vscode_project/tsm_inference/infer/../ops/temporal_shift.py:37: TracerWarning: There are 2 live references to the data region being modified when tracing in-place operator copy_ (possibly due to an assignment). This might cause the trace to be incorrect, because all other views that also reference this data will not reflect this change in the trace! On the other hand, if all other views use the same memory chunk, but are disjoint (e.g. are outputs of torch.split), this might still be safe.
  out[:, :-1, :fold] = x[:, 1:, :fold]  # shift left
/Users/zhanghongxing/vscode_project/tsm_inference/infer/../ops/temporal_shift.py:38: TracerWarning: Converting a tensor to a Python index might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
  out[:, 1:, fold: 2 * fold] = x[:, :-1, fold: 2 * fold]  # shift right
/Users/zhanghongxing/vscode_project/tsm_inference/infer/../ops/temporal_shift.py:38: TracerWarning: There are 2 live references to the data region being modified when tracing in-place operator copy_ (possibly due to an assignment). This might cause the trace to be incorrect, because all other views that also reference this data will not reflect this change in the trace! On the other hand, if all other views use the same memory chunk, but are disjoint (e.g. are outputs of torch.split), this might still be safe.
  out[:, 1:, fold: 2 * fold] = x[:, :-1, fold: 2 * fold]  # shift right
/Users/zhanghongxing/vscode_project/tsm_inference/infer/../ops/temporal_shift.py:39: TracerWarning: Converting a tensor to a Python index might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
  out[:, :, 2 * fold:] = x[:, :, 2 * fold:]  # not shift
/Users/zhanghongxing/vscode_project/tsm_inference/infer/../ops/temporal_shift.py:39: TracerWarning: There are 2 live references to the data region being modified when tracing in-place operator copy_ (possibly due to an assignment). This might cause the trace to be incorrect, because all other views that also reference this data will not reflect this change in the trace! On the other hand, if all other views use the same memory chunk, but are disjoint (e.g. are outputs of torch.split), this might still be safe.
  out[:, :, 2 * fold:] = x[:, :, 2 * fold:]  # not shift

This is the code snippet from ops/temporal_shift.py that the warnings point at:

            out = torch.zeros_like(x)
            out[:, :-1, :fold] = x[:, 1:, :fold]  # shift left
            out[:, 1:, fold: 2 * fold] = x[:, :-1, fold: 2 * fold]  # shift right
            out[:, :, 2 * fold:] = x[:, :, 2 * fold:]  # not shift

I don't think these slice assignments are in-place operations.
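For what it's worth, my understanding is that each slice assignment is traced as an in-place copy_ on a view of out, which is what the warnings are about. An equivalent out-of-place formulation (my own sketch, not the repo's code) would look like this:

    import torch

    def shift_concat(x, fold):
        # x: (n_batch, n_segment, c, h, w); fold = c // fold_div, as in the repo.
        # Produces the same result as the slice assignments above, but builds
        # the output with padding + concatenation instead of writing into it.
        pad_l = torch.zeros_like(x[:, :1, :fold])                    # one zero time step
        left = torch.cat([x[:, 1:, :fold], pad_l], dim=1)            # shift left
        pad_r = torch.zeros_like(x[:, :1, fold:2 * fold])
        right = torch.cat([pad_r, x[:, :-1, fold:2 * fold]], dim=1)  # shift right
        rest = x[:, :, 2 * fold:]                                     # not shifted
        return torch.cat([left, right, rest], dim=2)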
I exported the ONNX model anyway, but its output is different from the PyTorch output:
PyTorch output: [[0.04369894 0.09070115 0.8193747 … 0.25200492 0.0048383 0.04694336]]
ONNX output: [[0. 0. 0. … 0.03162321 0. 0. ]]
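The comparison is done roughly like this; the file name, the "input" tensor name, and the input shape follow the export sketch above, so they are assumptions rather than my exact script:

    import numpy as np
    import onnxruntime as ort
    import torch

    # Same dummy input fed to both runtimes (shape assumed as in the export sketch).
    dummy = torch.randn(1, 8 * 3, 224, 224)

    # PyTorch reference: `model` is the TSM network built as in the export sketch.
    with torch.no_grad():
        torch_out = model(dummy).numpy()

    # ONNX Runtime on the exported graph.
    sess = ort.InferenceSession("tsm.onnx")
    onnx_out = sess.run(None, {"input": dummy.numpy()})[0]

    print(np.allclose(torch_out, onnx_out, atol=1e-4))  # prints False for me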