Freeze layers in torch C++

How can I freeze a subset of layers of a NN from training in C++? Or is there a way to only pass a subset (some parameter group) of the model's parameters to the optimizer for gradient updates in C++?

Thanks in advance.

You should be able to iterate over the parameters and set requires_grad to false.
Something like this should work, where `module` is the desired layer:

// Disable gradient tracking for every parameter of the layer,
// so backward() leaves their .grad undefined and the optimizer skips them.
for (auto& parameter : module->parameters()) {
  parameter.requires_grad_(false);
}
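
For the second part of the question, the C++ optimizers also accept an explicit std::vector<torch::Tensor>, so you can hand over only the parameters you want updated. Below is a minimal sketch under assumed names (a hypothetical two-layer model where only the second layer should train):

#include <torch/torch.h>

int main() {
  // Hypothetical layers; names and sizes are for illustration only.
  torch::nn::Linear frozen_layer(10, 10);
  torch::nn::Linear trainable_layer(10, 2);

  // Pass only the parameters that should be updated to the optimizer;
  // the frozen layer's weights are never stepped.
  std::vector<torch::Tensor> trainable_params = trainable_layer->parameters();
  torch::optim::SGD optimizer(trainable_params, torch::optim::SGDOptions(0.01));

  // One training step: only trainable_layer receives updates.
  auto input = torch::randn({4, 10});
  auto target = torch::randn({4, 2});

  optimizer.zero_grad();
  auto output = trainable_layer(frozen_layer(input));
  auto loss = torch::mse_loss(output, target);
  loss.backward();
  optimizer.step();
}

Note that gradients are still computed for frozen_layer in this sketch; combining it with requires_grad_(false) on the frozen parameters avoids that extra work.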