About libtorch "model.to(device)"

Hi, I ran into a problem while debugging my C++ code.
The code looks like the following:

torch::DeviceType device_type;
device_type = torch::kCUDA;
torch::Device device(device_type);
…………
auto img_var = torch::from_blob(mRGB.data, dims, torch::kFloat32).to(device);
…………
modle_vlad.to(device);
modle_vlad.eval();

The error is raised by “modle_vlad.to(device);”:

  what():  _ivalue_ INTERNAL ASSERT FAILED at "../torch/csrc/jit/api/object.cpp":19, please report a bug to PyTorch. 
Exception raised from _ivalue at ../torch/csrc/jit/api/object.cpp:19 (most recent call first):
frame #0: c10::Error::Error(c10::SourceLocation, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >) + 0x69 (0x7fff074b9b29 in /usr/local/libtorch1.8.2/libtorch/lib/libc10.so)
frame #1: torch::jit::Object::_ivalue() const + 0x2c8 (0x7fff8435dc38 in /usr/local/libtorch1.8.2/libtorch/lib/libtorch_cpu.so)
frame #2: <unknown function> + 0x3484f38 (0x7fff8435cf38 in /usr/local/libtorch1.8.2/libtorch/lib/libtorch_cpu.so)
frame #3: torch::jit::Module::to_impl(c10::optional<c10::Device> const&, c10::optional<c10::ScalarType> const&, bool) + 0x15a (0x7fff8434d28a in /usr/local/libtorch1.8.2/libtorch/lib/libtorch_cpu.so)
frame #4: torch::jit::Module::to(c10::Device, bool) + 0x37 (0x7fff8434e377 in /usr/local/libtorch1.8.2/libtorch/lib/libtorch_cpu.so)
…………

It is strange that I have no trouble moving the cv::Mat-based tensor to the GPU, but an error occurs when I try to move the model to the GPU. I used the same code to move another model to the GPU a few days ago, and it worked without any problem.
Could you please help me with this issue?

If you are seeing this error with the latest release, could you please create an issue on GitHub with a minimal, executable code snippet that reproduces it?
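
For reference, a minimal repro for this kind of error might look like the sketch below. This is only an assumption about how modle_vlad is created: it assumes the model is a TorchScript module loaded with torch::jit::load, and the file name "model_vlad.pt" is a placeholder.

#include <torch/script.h>
#include <iostream>

int main() {
  try {
    torch::Device device(torch::kCUDA);
    // Load the scripted/traced model. If this load fails or the Module is
    // left default-constructed, a later .to(device) can trip the
    // _ivalue_ INTERNAL ASSERT shown above.
    torch::jit::script::Module modle_vlad = torch::jit::load("model_vlad.pt");
    modle_vlad.to(device);
    modle_vlad.eval();
    std::cout << "model moved to CUDA" << std::endl;
  } catch (const c10::Error& e) {
    std::cerr << e.what() << std::endl;
    return 1;
  }
  return 0;
}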