Accessing state_dicts in C++

Hi,

In C++, how can I achieve the following (from the reinforcement learning tutorial)?

```python
target_net_state_dict = target_net.state_dict()
policy_net_state_dict = policy_net.state_dict()

for key in policy_net_state_dict:
    target_net_state_dict[key] = policy_net_state_dict[key]*TAU + target_net_state_dict[key]*(1-TAU)
target_net.load_state_dict(target_net_state_dict)
```

I’ve searched the forums, but it almost seems like this isn’t supported?

Thanks

Hi guys,

Can anyone suggest a way to interpolate the weights of a network directly in C++, as in the Python example above?

Thanks

Asking again - maybe @ptrblck

I haven’t tried this code, but something like this could work:

```cpp
const torch::OrderedDict<std::string, at::Tensor>& model_params = model->named_parameters();
std::vector<std::string> param_names;
for (auto const& w : model_params) {
  param_names.push_back(w.key());
}

torch::NoGradGuard no_grad;
for (auto const& w : weights) {
  std::string name = w.key().toStringRef();
  at::Tensor param = w.value().toTensor();

  if (std::find(param_names.begin(), param_names.end(), name) != param_names.end()) {
    // find() returns a pointer into the OrderedDict, so dereference it;
    // the tensor shares storage with the module, so copy_ updates it in place.
    at::Tensor paramA = *model_params.find(name);
    // Matches the tutorial's update: target = policy * TAU + target * (1 - TAU)
    paramA.copy_(param * TAU + paramA * (1 - TAU));
  } else {
    std::cout << name << " does not exist among model parameters." << std::endl;
  }
}
```

Thank you @ptrblck

Where is “weights” coming from here?