Is it possible to return only the free parameters of a model? For a linear model, there are (input_features + 1) * out_features parameters. For binary classification with 10 covariates, that gives 22 parameters, but only half of them are free. The loss function is computed from the whole set of parameters, so if I try something like

`torch.autograd.grad(likelihood, free_parameters, create_graph=True, only_inputs=True)`

it raises `RuntimeError: differentiated input is unreachable`.

So far, I compute the gradient with respect to the whole set of parameters and then pick out the entries corresponding to the free parameters.

Is there a better way of doing this?
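For reference, the workaround described above can be sketched as follows: differentiate with respect to all parameters, then slice out the rows belonging to the free ones. This is a minimal sketch, assuming a plain `nn.Linear(10, 2)` model and cross-entropy loss (the variable names and the choice of "first row is free" are illustrative, not from the original post):

```python
import torch
import torch.nn as nn

# Hypothetical setup: binary classification with 10 covariates,
# so (10 + 1) * 2 = 22 parameters in total.
model = nn.Linear(10, 2)
x = torch.randn(4, 10)
target = torch.randint(0, 2, (4,))
loss = nn.functional.cross_entropy(model(x), target)

# Differentiate w.r.t. ALL parameters, then keep only the slices
# corresponding to the free ones (here: the first output row,
# treating the second row as redundant).
grads = torch.autograd.grad(loss, model.parameters(), create_graph=True)
free_grads = [g[0] for g in grads]  # row 0 of the weight grad, entry 0 of the bias grad
```

This works, but it computes and then discards half of the gradient, which is exactly the inefficiency the question is about.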

Hi,

PyTorch does not have a notion of “free parameters” (I am not 100% sure what you mean by that).

If you’re saying that the parameters of a layer are actually a simple transformation of a smaller set of parameters, and that you want to optimize this smaller set, then you should write that explicitly as a layer and it will work as you want:

```
import torch
import torch.nn as nn

class MyLayer(nn.Module):
    def __init__(self):
        super(MyLayer, self).__init__()
        # The parameters of your layer are just the "free parameters"
        self.free_params = nn.Parameter(torch.randn(small_params_size))

    def forward(self, input):
        # Expand your small parameter set into the full parameters
        full_params = expand_params(self.free_params)
        # Do the regular forward computation
        output = layer_function(full_params, input)
        return output
```

If you use this layer, then all the regular PyTorch methods will directly optimize the small set of parameters.
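To make the pattern concrete for the binary-classification case from the question, here is a minimal sketch where only (in_features + 1) = 11 parameters are free. The class name `SymmetricBinaryLinear` and the particular tying scheme (second logit's parameters are the negation of the first's) are hypothetical choices for illustration, not prescribed by the original answer:

```python
import torch
import torch.nn as nn

class SymmetricBinaryLinear(nn.Module):
    # Hypothetical tying scheme: the second logit's weights and bias are
    # the negation of the first's, so only in_features + 1 values are free.
    def __init__(self, in_features):
        super().__init__()
        self.free_weight = nn.Parameter(torch.randn(1, in_features))
        self.free_bias = nn.Parameter(torch.zeros(1))

    def forward(self, input):
        # Expand the free parameters into the full (2, in_features) set.
        full_weight = torch.cat([self.free_weight, -self.free_weight], dim=0)
        full_bias = torch.cat([self.free_bias, -self.free_bias], dim=0)
        return nn.functional.linear(input, full_weight, full_bias)

layer = SymmetricBinaryLinear(10)
x = torch.randn(4, 10)
loss = nn.functional.cross_entropy(layer(x), torch.randint(0, 2, (4,)))

# Differentiating w.r.t. the free parameters now works directly,
# since they are the leaves of the autograd graph.
grads = torch.autograd.grad(loss, layer.parameters(), create_graph=True)
```

Because `layer.parameters()` contains only the free parameters, `torch.autograd.grad` no longer hits the "unreachable input" error, and any optimizer will update just those 11 values.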