Hi everyone, sorry for being late to the party. I think there are two broad questions that were asked in this thread:
- How can I move my `script::Module` to CUDA? We found that there was indeed no easy way to do this without iterating over the parameters yourself, so I went ahead and implemented `script::Module::to(...)` in https://github.com/pytorch/pytorch/pull/12710. We'll try to land it today or tomorrow.
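Once that PR lands, moving a loaded TorchScript module should be a one-liner. Here is a minimal sketch of what usage could look like; the model path and input shape are illustrative, and the `to(...)` call is commented out since the PR has not landed yet:

```cpp
#include <torch/script.h> // TorchScript header from libtorch

#include <memory>
#include <vector>

int main() {
  // Deserialize a TorchScript module ("model.pt" is a placeholder path).
  std::shared_ptr<torch::jit::script::Module> module =
      torch::jit::load("model.pt");

  // Once script::Module::to(...) is available, moving every parameter
  // and buffer to the GPU should be a single call:
  // module->to(torch::kCUDA);

  // Inputs must live on the same device as the module's parameters.
  std::vector<torch::jit::IValue> inputs;
  inputs.push_back(torch::ones({1, 3, 224, 224}).to(torch::kCUDA));
  auto output = module->forward(inputs);
}
```

Until then, the workaround is to iterate over the module's parameters yourself and move each one to the target device, as some of you have been doing.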
- Some of you noticed that the `torch::nn::Module` class from the C++ frontend has a `to(...)` method, and you were wondering whether you could mix `torch::nn::Module` and `script::Module`. At the moment, there is a strict division between `torch::nn::Module`, which is for the C++ frontend (the pure C++ alternative to the Python eager frontend), and `script::Module` (the C++ module class for TorchScript). They may not be mixed at the moment. The `torch::nn::Module` class is currently friendlier to use because it's meant to provide the same API as `torch.nn.Module` in Python, for research. We are actively working on blending the TorchScript C++ API with the C++ frontend API, so I would expect `torch::nn::Module` and `script::Module` to become largely the same in the next few months. Feel free to raise more issues for operations you'd like to do on `script::Module` that are currently hard to do, and we can send more patches.
Hope this helps and let me know if you have more questions.