How to move model to double/float

Pretty simple question. I’ve been scanning the docs and all I’ve found is this. The problem is that all my searching hasn’t yielded how to pass an at::ScalarType to that function, or even what an at::ScalarType is.

I’ve tried passing “double”, a double variable, calling .double(), initializing an instance of at::ScalarType, and so on, and just haven’t figured it out. I need to set the model to double to match how I trained it in my Python script, and because I want the higher precision.

If you want code examples I can post some, but I think the issue is clear enough?
Thanks!


@rtkaratekid Does module->to(torch::kDouble) work? Agreed that we need to make the docs better, and we plan to work on that in the coming months.

@yf225 I really appreciate the reply!

module->to(torch::kDouble) actually didn’t work, but…
module.to(torch::kDouble) does!

I’m assuming this is because my module is a plain object rather than a pointer, so its methods are called with . instead of ->?
Anyway, I’m glad this was an easy fix! I just wish I had known that torch::kDouble is an at::ScalarType.

Also, just wanted to say that I think the docs are quite good, just maybe not 100% complete. But the forums here more than make up for it. I’ve enjoyed the supportive community.


Glad that I could be of help! Yes I think .to() instead of ->to() works because module is an object of a subclass of torch::nn::Module. If module were a shared_ptr to a torch::nn::Module subclass object, we would have to use ->to().

We will be making a lot of improvements to the docs (the goal is to achieve parity with Python API docs, and also easy mapping between Python API and C++ API). Please stay tuned and feel free to ask any questions here in the forum. :smiley:
