```
IntList sizes = net->named_parameters()["layer"].sizes();
torch::Tensor tensor = torch::from_blob(data.data(), sizes); // data is a std::vector<float>
net->named_parameters()["layer"].set_data(tensor);
```

```
terminate called after throwing an instance of 'c10::Error'
what(): !is_variable() ASSERT FAILED at /home/px046/dev/pytorch/c10/core/TensorImpl.h:758, please report a bug to PyTorch. (set_sizes_and_strides at /home/px046/dev/pytorch/c10/core/TensorImpl.h:758)
```

Hm, the memcpy approach I wrote above only works for small tensors: it works for 5x5 tensors, but on 64x64 ones it stops working partway through the tensor.

EDIT:
Okay, I found the proper way to do it. You need a NoGradGuard and Tensor::copy_():

What I don't like about your solution is that it has the potential to break other people's code. Your solution would at least require putting it in parentheses, e.g.

Thanks for the suggested code. I just have a quick question: can from_blob create a constant tensor by taking a constant array as input? For example, given a const reference to a float array data_array, I try to do the following:
```
torch::Tensor tensor = torch::from_blob(data_array, tensor_size);
```
Is there any way to make from_blob (or any alternative method) work in this case? Thanks!