PyTorch: set parameters (weight and bias) of layers

I have constructed a network using the C++ API. I want to assign specific weights and biases, which I get from a pretrained TensorFlow model, to my network layers. How can I achieve this? One of my layer declarations is as follows:

    torch::nn::Sequential conv1_1{
        torch::nn::Conv2d(torch::nn::Conv2dOptions(3,64,3).stride(1).padding(1).bias(true)),
        torch::nn::ReLU()
    };

For now I am assigning parameters randomly as follows:

    torch::NoGradGuard no_grad;
    for (auto& p : this->parameters()){
        p.uniform_(-1,1);
    }

How can I assign a tensor as the weight of a layer?

I know that model->conv1_1->parameters()[0] holds the weights and model->conv1_1->parameters()[1] holds the bias for conv1_1, but I can't set the weights with:

    torch::Tensor conv_wght = torch::zeros({64, 3, 3, 3},
        torch::TensorOptions().dtype(torch::kFloat32).device(torch::kCUDA, 0));
    model->conv1_1->parameters()[0] = conv_wght;

Can anyone give me a clue?

    torch::NoGradGuard no_grad; // needed so copy_() on parameters that require grad is allowed
    for (const auto& param : this->conv1_1->parameters()) {
        param.copy_(in); // in is a torch::Tensor holding the pretrained values
    }

copy_() solves my problem. Just be careful about the size of the in tensor, since this loop will also try to update the bias.
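
For reference, here is a more explicit sketch that avoids the size pitfall by copying the weight and bias separately. It assumes conv_wght and conv_bias already hold the pretrained TensorFlow values (those names, and how you fill them, are placeholders); for this layer, parameters()[0] is the Conv2d weight and parameters()[1] is its bias.

    #include <torch/torch.h>

    // Sketch: copy pretrained values into conv1_1's weight and bias.
    // conv_wght / conv_bias are placeholder tensors already filled with the
    // TensorFlow values (for example via torch::from_blob on a raw float buffer).
    void load_conv1_1(torch::nn::Sequential& conv1_1,
                      const torch::Tensor& conv_wght,   // expected shape {64, 3, 3, 3}
                      const torch::Tensor& conv_bias) { // expected shape {64}
        torch::NoGradGuard no_grad;          // keep autograd out of the copies
        auto params = conv1_1->parameters();
        params[0].copy_(conv_wght);          // Conv2d weight
        params[1].copy_(conv_bias);          // Conv2d bias
    }

If you prefer not to rely on the index order, conv1_1->named_parameters() returns the same tensors keyed by name (for this Sequential the keys should be something like "0.weight" and "0.bias"), which also makes it easier to match them against the names in the TensorFlow checkpoint.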