How to convert between at::Tensor and torch::Tensor?

How can I convert between at::Tensor and torch::Tensor? Printing an at::Tensor shows [CPUType], while printing a torch::Tensor shows Variable[CPUType].

This doesn’t answer your question directly, but it will likely solve your problem:
Always work with torch::Tensor. When you create new tensors, pass the .options() of an existing tensor so the new tensor matches its dtype and device, and everything else will treat it consistently.
Then you should not need to convert tensors at all.
The only place where you currently need to care about this distinction is in autograd::Function, where you use Variables, but that will go away eventually.

Best regards

Thomas

I’ve got an at::Tensor and a torch::Tensor, and now I want to use these two tensors in subsequent calculations, but their types are different. Is it necessary to create an at::Tensor from the torch::Tensor’s data? Thank you very much for your help.

What I am trying to suggest is that you share a bit about where you got the at::Tensor from, because that is likely where the problem can be fixed.

Best regards

Thomas