Initializing a Tensor from a Tensor in C++

I have no idea if this is best practice or not, but I am using PyTorch as an optimization library rather than a neural network library. I have a function in Python where I take a previously defined PyTorch tensor

R_wrist = torch.eye(3)

and I need to make a new tensor from it to keep track of its gradients, so in a separate function I create a new tensor based on the one I built

R = torch.tensor(R_wrist, requires_grad=True)

This works for what I need, but now I am trying to convert this code to C++. Unfortunately, I cannot construct a Tensor from another Tensor there, either through torch::tensor() or torch::from_blob().

How would I go about solving this problem? Should I even be creating a new tensor that requires gradients? Should I initialize the original tensor to require gradients and avoid the issue altogether? If not, should I convert the first tensor into an array and pass that array into torch::from_blob()?

In the Python frontend, using torch.tensor() to copy-construct a Tensor from another Tensor is deprecated; the recommended way is sourceTensor.clone().detach().requires_grad_(True).

In C++ frontend, currently the best way to achieve the same functionality is:

#include <torch/csrc/autograd/variable.h>

// Clone the source tensor, detach the copy from the original's
// autograd history, then turn gradient tracking back on for the copy.
torch::Tensor a = torch::eye(3);
torch::Tensor b = torch::autograd::Variable(a.clone()).detach();
b.set_requires_grad(true);

We are actively working on improving the API so that a.clone().detach().requires_grad_(true) works soon.
