Manually set Variable data in C++


(Isaac Poulton) #1

I’m trying to do the equivalent of layer.weights.data = tensor using the PyTorch C++ frontend.

I’ve tried a few things:

*module.named_parameters()["layer"].data<float>() = *vector.data();

This segmentation faults.

module.named_parameters()["layer"].set_data(torch::from_blob(vector.data(), tensor_size));

This gives !is_variable() ASSERT FAILED.

module.named_parameters()["layer"] = torch::from_blob(vector.data(), tensor_size);

This compiles and runs but appears to do nothing: the weights don't change afterwards.

Is there a way to do this in C++?
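For reference, here's roughly the setup I'm working with; the parameter key "layer", the shapes, and the fill values are placeholders:

#include <torch/torch.h>
#include <vector>

// Hypothetical module: a single parameter registered directly under "layer".
struct Net : torch::nn::Module {
  Net() { layer = register_parameter("layer", torch::randn({5, 5})); }
  torch::Tensor layer;
};

int main() {
  Net module;
  std::vector<int64_t> tensor_size = {5, 5};  // shape of the target parameter
  std::vector<float> vector(5 * 5, 0.5f);     // replacement weight values
}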


(Isaac Poulton) #2

It’s a bit awkward, but I found you can do it with memcpy.

memcpy(module.named_parameters()["layer"].data<float>(), vector.data(), element_count);

(Martin Huber) #3

Hey @Omegastick,

how about doing this instead:

IntList sizes = net->named_parameters()["layer"].sizes();
torch::Tensor tensor = torch::from_blob(data.data(), sizes); // where data is an std::vector<float>

net->named_parameters()["layer"].set_data(tensor);

(Isaac Poulton) #4

That gives:

terminate called after throwing an instance of 'c10::Error'
  what():  !is_variable() ASSERT FAILED at /home/px046/dev/pytorch/c10/core/TensorImpl.h:758, please report a bug to PyTorch. (set_sizes_and_strides at /home/px046/dev/pytorch/c10/core/TensorImpl.h:758)

(Isaac Poulton) #5

I found an error in my original memcpy code. You need to multiply the number of elements by the size of the data type:

memcpy(module.named_parameters()["layer"].data<float>(), vector.data(), element_count * sizeof(float));
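Putting it together with the setup from the first post, the call looks something like this. It assumes the parameter is a contiguous float tensor, and note that memcpy bypasses autograd entirely:

#include <torch/torch.h>
#include <cstring>
#include <vector>

// Hypothetical helper around the memcpy approach. Assumes param is contiguous,
// holds floats, and that data has exactly param.numel() elements.
// (Newer libtorch spells the accessor data_ptr<float>() instead of data<float>().)
void raw_copy(torch::Tensor param, const std::vector<float>& data) {
  std::memcpy(param.data<float>(), data.data(),
              param.numel() * sizeof(float));
}

// usage: raw_copy(module.named_parameters()["layer"], vector);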

(Isaac Poulton) #6

Hm, the memcpy approach I wrote above only works for small tensors: it works on 5x5 tensors, but on 64x64 ones it stops copying part-way through the tensor.

EDIT:
Okay, I found the proper way to do it. You need a NoGradGuard and Tensor::copy_():

NoGradGuard guard;
module.named_parameters()["layer"].copy_(torch::from_blob(vector.data(), tensor_size));
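Here's the whole thing as a self-contained sketch, using the same placeholder module and parameter names as before:

#include <torch/torch.h>
#include <vector>

// Hypothetical module: a 64x64 parameter registered under "layer".
struct Net : torch::nn::Module {
  Net() { layer = register_parameter("layer", torch::randn({64, 64})); }
  torch::Tensor layer;
};

int main() {
  Net module;
  std::vector<float> vector(64 * 64, 0.5f);
  std::vector<int64_t> tensor_size = {64, 64};

  {
    // Grad tracking is disabled only while the guard is alive in this block.
    torch::NoGradGuard guard;
    module.named_parameters()["layer"].copy_(
        torch::from_blob(vector.data(), tensor_size));
  }
}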

(Martin Huber) #7

Take care that your guard is only locally defined; while it is alive it disables gradient tracking for everything else that runs in the same scope.


(Martin Huber) #8

Hey @Omegastick,

what I don't like about your solution is that it has the potential to break other people's code: the NoGradGuard disables gradient tracking for everything else in its scope. Your solution would at least require wrapping it in braces, e.g.

{
    NoGradGuard guard;
    module.named_parameters()["layer"].copy_(torch::from_blob(vector.data(), tensor_size));
}

You could either do that, or use set_data() as mentioned above:

torch::Tensor tensor = torch::from_blob(vector.data(), tensor_size);
net->named_parameters()["layer"].set_data(tensor);
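If you want to be sure the guard stays contained, you could also hide the copy in a small helper function, something like this (the helper name and variables are made up):

#include <torch/torch.h>
#include <vector>

// Hypothetical helper: the NoGradGuard lives only for the duration of this
// call, so the caller's autograd behaviour is unaffected.
void load_weights(torch::Tensor param, std::vector<float>& data) {
  TORCH_CHECK(static_cast<int64_t>(data.size()) == param.numel(),
              "data size does not match parameter size");
  torch::NoGradGuard guard;
  param.copy_(torch::from_blob(data.data(), param.sizes()));
}

// usage: load_weights(net->named_parameters()["layer"], my_data);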

(Haihong Qin) #9

That works for me! Thanks!