Type mismatch occurs in C++ code with torchscript

Hello, I generated a Torch model and am trying to use it in C++ code.
Since I want all variables in the model to have dtype double, I call torch.set_default_dtype(torch.float64) in the Python code that generates the TorchScript file.
I also create the C++ input tensor with dtype double:

torch::Tensor i_tensor = torch::zeros({3,10}, torch::dtype(torch::kFloat64));
auto i_tensor_a = i_tensor.accessor<double,2>();

But I get the following error:

terminate called after throwing an instance of 'std::runtime_error'
  what():  The following operation failed in the TorchScript interpreter.
Traceback of TorchScript (most recent call last):
RuntimeError: The following operation failed in the TorchScript interpreter.
Traceback of TorchScript (most recent call last):
  File "<string>", line 3, in <foward op>

      def addmm(self: Tensor, mat1: Tensor, mat2: Tensor, beta: number = 1.0, alpha: number = 1.0):
          return self + mat1.mm(mat2)
                        ~~~~~~~ <--- HERE

      def batch_norm(input : Tensor, running_mean : Optional[Tensor], running_var : Optional[Tensor], training : bool, momentum : float, eps : float) -> Tensor:
RuntimeError: Expected object of scalar type Float but got scalar type Double for argument #3 'mat2' in call to _th_addmm_out

How can I fix this?

Note: it works fine when I change double to float in the C++ code and remove the line containing set_default_dtype.

I solved this issue and am leaving the answer here for anyone who runs into the same problem.

The error comes from the tensors generated inside the model.
The dtype configured in the Python code via torch.set_default_dtype() is not saved in the TorchScript file.
Therefore, you need to set the dtype explicitly to avoid the type mismatch.
torch.set_default_dtype() is still useful, though, because it automatically sets the dtype of the network's weights, biases, etc. at model-creation time.
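As a minimal sketch of the explicit approach on the export side (the Net class and file name here are hypothetical stand-ins for your own model), you can convert the module to float64 with .double() before scripting and saving, so the serialized parameters themselves carry the double dtype instead of relying on the default-dtype setting:

```python
import torch

# Hypothetical toy model standing in for the real network.
class Net(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = torch.nn.Linear(10, 5)

    def forward(self, x):
        return self.fc(x)

model = Net()
# Convert all parameters and buffers to float64 explicitly;
# this dtype is stored with the weights in the TorchScript file,
# unlike the process-wide torch.set_default_dtype() setting.
model = model.double()

scripted = torch.jit.script(model)
scripted.save("model_fp64.pt")

# Sanity check: the reloaded module's weights are float64,
# so a torch::kFloat64 input tensor from C++ will match.
reloaded = torch.jit.load("model_fp64.pt")
assert all(p.dtype == torch.float64 for p in reloaded.parameters())
```

Alternatively, if you would rather fix it on the C++ side, calling module.to(torch::kFloat64) on the loaded torch::jit::script::Module should convert its parameters the same way.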