The equivalent of nn.Parameter in LibTorch

I am trying to port a python PyTorch model to LibTorch.

In python the line of code is:
nn.Parameter(A), where A is a torch.Tensor created with requires_grad=True.

What would be the equivalent of this for a torch::Tensor in C++?
The autocomplete in my editor gives options for ParameterDict, ParameterList,
ParameterDictImpl, ParameterListImpl, but no Parameter.

To register a parameter (i.e. a tensor which requires gradients) with a module, you could use:

m.register_parameter("weight", torch::ones({20, 1, 5, 5}), true);

in LibTorch. I don't know whether there is a dedicated Parameter class; I have seen this approach used in modules.
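For example, a custom module could register its parameter in the constructor, analogous to assigning nn.Parameter in Python. This is a minimal sketch; the names MyModule and weight are illustrative, not from any official example:

```cpp
#include <torch/torch.h>

// Sketch of a module holding a trainable tensor, roughly equivalent to
// self.weight = nn.Parameter(torch.randn(20, 1, 5, 5)) in Python.
struct MyModuleImpl : torch::nn::Module {
  torch::Tensor weight;

  MyModuleImpl() {
    // The third argument is requires_grad and defaults to true,
    // so it can be omitted for a trainable parameter.
    weight = register_parameter("weight", torch::randn({20, 1, 5, 5}));
  }

  torch::Tensor forward(const torch::Tensor& x) {
    return torch::conv2d(x, weight);
  }
};
TORCH_MODULE(MyModule);
```

The registered tensor then shows up in m->parameters(), so an optimizer such as torch::optim::SGD(m->parameters(), /*lr=*/0.1) will update it.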


Is that third argument, the bool, related to requires_grad? Thanks for the help!

Yes, I think in the module class it is the requires_grad argument, as seen here:

Tensor& Module::register_parameter(
    std::string name,
    Tensor tensor,
    bool requires_grad) {
  TORCH_CHECK(!name.empty(), "Parameter name must not be empty");
  TORCH_CHECK(
      name.find('.') == std::string::npos,
      "Parameter name must not contain a dot (got '",
      name,
      "')");
  if (!tensor.defined()) {
    if (requires_grad) {
      TORCH_WARN(
        "An undefined tensor cannot require grad. ",
        "Ignoring the `requires_grad=true` function parameter.");
    }
  } else {
    tensor.set_requires_grad(requires_grad);
  }
  return parameters_.insert(std::move(name), std::move(tensor));
}

while the jit API seems to use it as is_buffer:

  void register_parameter(
      const std::string& name,
      at::Tensor v,
      bool is_buffer) {
    type()->addOrCheckAttribute(name, TensorType::get(), !is_buffer, is_buffer);
    _ivalue()->setAttr(name, std::move(v));
  }
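So for a torch::jit::Module (e.g. one obtained via torch::jit::load) the same call distinguishes parameters from buffers rather than toggling requires_grad. A sketch under that assumption, with illustrative attribute names:

```cpp
#include <torch/script.h>

// Sketch: on a TorchScript module the third argument of
// register_parameter is is_buffer, not requires_grad.
void add_attributes(torch::jit::Module& m) {
  // A trainable parameter (is_buffer = false).
  m.register_parameter("weight", torch::randn({4, 4}), /*is_buffer=*/false);
  // Non-trainable state, like a running mean (is_buffer = true).
  m.register_parameter("running_mean", torch::zeros({4}), /*is_buffer=*/true);
}
```

With this split, only "weight" would appear among the module's parameters, while "running_mean" would be stored as a buffer attribute.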