PyTorch model and exported ONNX differ in inference

I exported a PyTorch model to ONNX and created a TensorRT engine from that ONNX. But the PyTorch model and the TensorRT engine produce different results given the same input data.
(This is exactly the same input I passed to the torch.onnx.export(…) call.)
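
For reference, a minimal sketch of how one might isolate where the mismatch appears, by comparing the PyTorch output against the exported ONNX run through onnxruntime before involving TensorRT. The model `net` and input `x` here are placeholders, not the actual model from the question:

```python
import numpy as np
import torch
import onnxruntime as ort

# Placeholder model and input; substitute your own.
net = torch.nn.Linear(8, 4).eval()
x = torch.randn(1, 8)

# Export with the same input that will be used for comparison.
torch.onnx.export(net, (x,), "net.onnx")

with torch.no_grad():
    torch_out = net(x).numpy()

# Run the exported graph with onnxruntime and compare outputs.
sess = ort.InferenceSession("net.onnx")
input_name = sess.get_inputs()[0].name
onnx_out = sess.run(None, {input_name: x.numpy()})[0]

print(np.allclose(torch_out, onnx_out, atol=1e-5))
```

If the PyTorch and onnxruntime outputs already disagree, the problem is in the export; if they agree, the discrepancy was introduced when building the TensorRT engine (for example by reduced-precision modes).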

Can you briefly describe how the PyTorch exporter creates the ONNX graph?

It first does the equivalent of torch.jit.trace(), which executes the model once with the given args and records all operations that happen during that execution.
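
One consequence of this, sketched below: because tracing records a single execution, data-dependent control flow is baked into the graph. Only the branch taken for the tracing input is recorded, which is one common source of PyTorch/ONNX mismatches:

```python
import torch

def f(x):
    # Data-dependent branch: tracing records only the path
    # taken during the single traced execution.
    if x.sum() > 0:
        return x * 2
    return x - 1

traced = torch.jit.trace(f, torch.ones(3))
print(traced.graph)            # recorded graph contains only the x * 2 branch
print(traced(-torch.ones(3)))  # still computes x * 2, even though x.sum() < 0
```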

How does the exporter record operations? In my case, the model has some operations implemented in C++; how can the exporter record those?