Torch.onnx.export() - No ONNX function for OpOverload

Hi there!
I am by far not the most experienced PyTorch user, and some errors have come up lately.
I want to export a model with some custom implementations using torch.onnx.export().
I use functions such as torch.linalg.pinv() for the Pseudo Inverse and others.

When trying to export the model, I get error messages. I have been able to identify that ONNX does not yet support some of the linear algebra operations I use, such as pinv().
The official PyTorch Doc has information about exactly that.
https://pytorch.org/docs/stable/onnx_torchscript.html#example-alexnet-from-pytorch-to-onnx

I am not scared of implementing something myself and learning new things, so I implemented the missing code in the symbolic_opset20.py file as described there.
But that did not change anything, and the error is still present.

I guess the actual problem is not a missing operation but something that happens inside the operation, or with my implementation?
Because the error does not say the operation is missing; it says that some OpOverload fails / cannot be found.
Would someone maybe have an idea on how to tackle this?
I tried replacing the pseudo-inverse call with an SVD-based implementation, but SVD relies on operations that are also not implemented.
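One matmul-only fallback I have been experimenting with is the Ben-Israel–Cohen (Newton–Schulz) iteration for the pseudo-inverse. This is only a sketch (the helper name and iteration count are my own choices), but it avoids aten.linalg_pinv entirely:

```python
import torch

def pinv_newton_schulz(A: torch.Tensor, iters: int = 30) -> torch.Tensor:
    # Ben-Israel-Cohen (Newton-Schulz) iteration for the Moore-Penrose
    # pseudo-inverse. Uses only transpose, matmul and elementwise ops,
    # all of which have standard ONNX equivalents, unlike aten.linalg_pinv.
    At = A.mT
    # alpha <= 1 / sigma_max(A)^2 guarantees convergence;
    # sigma_max^2 <= ||A||_1 * ||A||_inf gives a cheap upper bound.
    norm_1 = A.abs().sum(dim=-2).max()
    norm_inf = A.abs().sum(dim=-1).max()
    X = At / (norm_1 * norm_inf)
    for _ in range(iters):  # fixed count, so the loop unrolls at trace time
        X = 2 * X - X @ A @ X
    return X
```

Since the loop is unrolled during tracing, the exported graph should contain only MatMul/Mul/Sub-style nodes. Convergence degrades for badly conditioned inputs, so this is a workaround sketch rather than a drop-in replacement for torch.linalg.pinv().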

Error Message:
```
<class 'torch.onnx._internal.exporter._errors.DispatchError'>: No ONNX function found for <OpOverload(op='aten.linalg_pinv', overload='atol_rtol_tensor')>. Failure message: No decompositions registered for the real-valued input
    ↑
<class 'torch.onnx._internal.exporter._errors.ConversionError'>: Error when translating node %linalg_pinv : [num_users=1] = call_function[target=torch.ops.aten.linalg_pinv.atol_rtol_tensor](args = (%stack,), kwargs = {}). See the stack trace for more information.
```

Seems the inverse operator hasn’t made it into the ONNX spec.
Though there are some ops implemented in onnxruntime as contrib_ops:

The matrix inverse wouldn’t be the problem, I guess, but pinverse is actually the pseudo-inverse, which is still a bit different from it.
I am pretty certain that the error isn’t even the missing implementation itself but rather some other / deeper problem, since it complains about the atol_rtol_tensor overload.
However, if I pass atol and rtol as keyword arguments, it still throws errors and won’t work.