Loading a TorchScript model to run forward() on CUDA with the C++ API throws script::ErrorReport in forward()

Questions and Help
I loaded a torch-script mode_cuda.pt to do forward() in CUDA mode with C++ API, but throwing ‘torch::jit::script::ErrorReport’ in forward().

(1) mode_cuda.pt was created with pytorch-nightly-py3.6_cuda9.0.176_cudnn7.1.2_0 and CUDA 9.0

(2) The model has been moved to CUDA mode with pModel->to(torch::kCUDA)

(3) The input tensor has been moved to CUDA mode with tensorAll.toBackend(torch::Backend::CUDA)
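For reference, a minimal sketch of the loading code described in steps (1)–(3), assuming the 1.0-preview-era libtorch API where torch::jit::load returns a shared_ptr (the input shape {1, 3, 224, 224} and the use of tensor.to(torch::kCUDA) instead of toBackend are illustrative assumptions):

```cpp
#include <torch/script.h>
#include <iostream>
#include <memory>
#include <vector>

int main() {
    // (1) Load the serialized TorchScript module (path is an assumption).
    std::shared_ptr<torch::jit::script::Module> pModel =
        torch::jit::load("mode_cuda.pt");

    // (2) Move the model's parameters and buffers to the GPU.
    pModel->to(torch::kCUDA);

    // (3) Build an input tensor and move it to the GPU.
    // tensor.to(torch::kCUDA) is the more common spelling than
    // tensorAll.toBackend(torch::Backend::CUDA); both should place
    // the data on the CUDA device.
    torch::Tensor tensorAll =
        torch::rand({1, 3, 224, 224}).to(torch::kCUDA);

    // The ErrorReport is thrown inside this forward() call.
    std::vector<torch::jit::IValue> inputs{tensorAll};
    torch::Tensor out = pModel->forward(inputs).toTensor();
    std::cout << out.sizes() << std::endl;
    return 0;
}
```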

Here is the detailed information from the 'torch::jit::script::ErrorReport':


When calling forward():
====================bug information=======================
terminate called after throwing an instance of ‘torch::jit::script::ErrorReport’
what():
Schema not found for node. File a bug report.
Node: %18 : Dynamic = aten::to(%0, %16, %17)

Input types:Float(*, *, *, *), int[], bool
candidates were:
aten::to(Tensor self, Tensor other, bool non_blocking=, bool copy=) -> Tensor
aten::to(Tensor self, int dtype, bool non_blocking=, bool copy=) -> Tensor
aten::to(Tensor self, int[] device, bool non_blocking=, bool copy=) -> Tensor
aten::to(Tensor self, int[] device, int dtype, bool non_blocking=, bool copy=) -> Tensor
.

Aborted (core dumped)
====================bug information=======================

Does anyone know why this happens and how to debug it?
Thanks!