Converting a PyTorch model to ONNX: the unsupported mv operator

background

I want to convert my model to ONNX, but the model uses the mv operator, so when I run torch.onnx.export the console prints this error:

RuntimeError: Exporting the operator mv to ONNX opset version 11 is not supported. Please feel free to request support or submit a pull request on PyTorch GitHub

So I have to implement ONNX export support for the mv operator myself.
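The direction I am experimenting with is to register a symbolic function that maps aten::mv to ONNX's MatMul. This is an untested sketch: it assumes torch.onnx.register_custom_op_symbolic accepts aten operators, and that ONNX MatMul, which follows numpy.matmul semantics, reduces an (n, m) x (m,) product to a length-n vector the same way torch.mv does:

import torch
import torch.onnx


def symbolic_mv(g, matrix, vector):
    # Emit an ONNX MatMul node; with a 1-D second input MatMul follows
    # numpy.matmul semantics, which match torch.mv for a 2-D matrix.
    return g.op("MatMul", matrix, vector)


# Register the symbolic for opset 11, the version used in the export below.
torch.onnx.register_custom_op_symbolic("aten::mv", symbolic_mv, 11)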

question

  • How can I confirm that an operator is an ATen operator?
    This document says that if the operator is an ATen operator, you can find the declaration of the function in torch/csrc/autograd/generated/VariableType.h, but that header does not contain any operator declarations, so am I mistaken about where the function declarations live? (A quick runtime check I tried is shown right after this list.)
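For what it is worth, here is the runtime check I tried. It assumes the operator is exposed through the torch.ops.aten namespace, which I believe is the case on recent PyTorch versions:

import torch

# If mv is an ATen operator, looking it up under torch.ops.aten succeeds;
# an unknown operator name would raise an error instead.
print(torch.ops.aten.mv)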

VariableType.h info

#pragma once
// @generated from tools/autograd/templates/VariableType.h
#include <ATen/ATen.h>
#include <c10/util/intrusive_ptr.h>
#include <torch/csrc/WindowsTorchApiMacro.h>
#include <cstdint> // for size_t
#include <functional> // for function
#include <memory> // for unique_ptr
#include <string>
#include <vector>
namespace at {
  struct Quantizer;
};
namespace torch { namespace autograd {
using Variable = at::Tensor;
using at::Context;
using at::Device;
using at::Dimname;
using at::DimnameList;
using at::Generator;
using at::IntArrayRef;
using at::MemoryFormat;
using at::QScheme;
using at::Scalar;
using at::ScalarType;
using at::Storage;
using at::Tensor;
using at::TensorList;
using at::TensorOptions;
using at::Quantizer;
// This is temporary typedef to enable Quantizer in aten native function API
// we'll remove them when we are actually exposing Quantizer class
// to frontend
using ConstQuantizerPtr = const c10::intrusive_ptr<Quantizer>&;
using c10::optional;
namespace VariableType {
  TORCH_API std::vector<at::DeprecatedTypeProperties*> allCUDATypes();
  TORCH_API std::vector<at::DeprecatedTypeProperties*> allCPUTypes();
  at::Tensor & unpack(Tensor & t, const char * name, int pos);
  const at::Tensor & unpack(const Tensor & t, const char * name, int pos);
  at::Tensor unpack_opt(const Tensor & t, const char * name, int pos);
  std::vector<at::Tensor> unpack(at::TensorList tl, const char *name, int pos);
};
}} // namespace torch::autograd

code to reproduce the error

import torch


class CustomNet(torch.nn.Module):

    def __init__(self):
        super().__init__()
        self.sigmoid = torch.nn.Sigmoid()

    def forward(self, matrix, vector):
        matrix = self.sigmoid(matrix)
        vector = self.sigmoid(vector)
        # torch.mv is the unsupported operator: export fails on this call
        outputs = torch.mv(matrix, vector)
        return outputs


if __name__ == "__main__":

    net = CustomNet().eval().cuda()

    # a 2x3 matrix and a length-3 vector, moved to the GPU like the model
    matrix = torch.randn(2, 3).cuda()
    vector = torch.randn(3).cuda()

    input_names = ["matrix", "vector"]
    output_names = ["output"]

    torch.onnx.export(net, (matrix, vector), "net.onnx",
                      input_names=input_names, output_names=output_names,
                      opset_version=11)
    print("done!")

Same question here. Did you solve it yet? @nick_zhang


Does anyone have a solution?

Dude, have you solved it yet? Is there any way? (translated from Chinese via Google Translate)

@li_he, please use an online translation service so that others can also help.