LSTM model cannot run on GPU using libtorch, CNN is OK

I use libtorch 2.3.1 + CUDA 12.1 + VS2022 on Windows 11, and I converted two models, a CNN and an LSTM, from Python with torch.jit.trace. The CNN model runs normally on the GPU, but the LSTM model cannot; it only runs on the CPU. Why?

I also found that once the model contains an LSTM, it cannot run on the GPU in the VS C++ environment.

Are you asking about just using/calling nn.LSTM, or about a complete training run?

For example, compared to a CNN, you typically need to initialize a hidden state for the LSTM. If that's the case for you, do you move this hidden state to the same device as your model?
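For reference, a minimal sketch of what that looks like in Python (the layer sizes, shapes, and names here are just placeholders): the hidden and cell states have to be created on the same device as the module and its input, otherwise the forward call fails with a device mismatch.

```python
import torch
import torch.nn as nn

# Hypothetical sizes; the point is that x, h0, c0 and the LSTM all live on CUDA.
device = torch.device("cuda")
lstm = nn.LSTM(input_size=16, hidden_size=32, num_layers=1, batch_first=True).to(device)

x = torch.randn(4, 10, 16, device=device)   # (batch, seq_len, input_size)
h0 = torch.zeros(1, 4, 32, device=device)   # (num_layers, batch, hidden_size)
c0 = torch.zeros(1, 4, 32, device=device)

out, (hn, cn) = lstm(x, (h0, c0))            # works because everything is on the same device
```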

Thanks. I tried initializing the LSTM's hidden state, but that did not work.
I checked my Python code and found the problem. When I export the model from Python using torch.jit.trace, I also call model.to("cpu"), and that is the problem. After I changed it to model.to("cuda"), the traced model runs fine on the GPU.
But for the CNN the problem does not exist: a model exported with model.to("cpu") and torch.jit.trace can still run on the GPU.
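For anyone hitting the same issue, here is a minimal sketch of the export step that worked in this case (the model class, layer sizes, and file name are hypothetical placeholders, not the original code): the model is moved to CUDA and traced with a CUDA example input, and the saved TorchScript file can then be loaded from libtorch and run on the GPU.

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for the real model; any module containing nn.LSTM behaves the same.
class LSTMModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.lstm = nn.LSTM(input_size=16, hidden_size=32, batch_first=True)
        self.fc = nn.Linear(32, 1)

    def forward(self, x):
        out, _ = self.lstm(x)
        return self.fc(out[:, -1])

model = LSTMModel().eval().to("cuda")              # key change: move to CUDA before tracing
example = torch.randn(1, 10, 16, device="cuda")    # example input on the same device
traced = torch.jit.trace(model, example)
traced.save("lstm_cuda.pt")                        # load this file in libtorch and run on GPU
```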