(Mat)Mul layers don't convert well to ONNX models

I have three PyTorch models, and two of them give me trouble when loading the exported ONNX files. Inference works fine in PyTorch, and the PyTorch -> ONNX conversion completes without errors.

With the first model I get this error when reading the ONNX file in OpenCV:

Assertion failed (inputs.size()) in cv::dnn::dnn4_v20190122::Layer::getMemoryShapes

With the second model I get this error:

Blob 64 not found in const blobs

Both errors seem related to the input: in the first model `inputs.size()` is 0, and in the second model blob 64 somehow can't be found among the importer's constant blobs.

I just don't know where the error lies. The PyTorch models themselves are valid, so I suppose the problem is either in the PyTorch -> ONNX conversion or in OpenCV's ONNX importer itself.

Here is the project, including the first ONNX model file: https://github.com/Aeroxander/decodererror/