Extracting each neuron as a variable from any layer in a neural network model

re: Can we extract each neuron as a variable in any layer of an NN model, and apply optimization constraints to each neuron?

But I want to add these constraints as a layer inside the NN model, so I need some functions in PyTorch that give me access to each neuron as a variable. Is that possible?

When you say ‘neurons’, do you mean the parameters of a model or the output of a layer within a model?

If you want to get the parameters of a model, you can do so via,

net = model()  # instantiate your nn.Module subclass
for name, param in net.named_parameters():
    print(name, param)  # print the name and value of each parameter

For the output of a layer you can use a forward pre-hook to get the intermediate activations of a model, and just attach it to all nn.Modules via,

def pre_hook(module, inputs):
    print(module, inputs)  # called with the inputs to module.forward

net = model()
for name, module in net.named_modules():
    module.register_forward_pre_hook(pre_hook)
    print("registered hook on:", name)

If this is purely an optimisation-related issue, then in my opinion you’re better off doing all of this within the optimizer directly, in a similar way to how weight decay is implemented in optimizers.
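For instance, here’s a minimal sketch of that idea (ProjectedSGD and the nonnegativity clamp are illustrative assumptions of mine, not an existing PyTorch optimizer):

import torch

class ProjectedSGD(torch.optim.SGD):
    # Sketch: run the normal SGD update, then project the parameters back
    # onto the constraint set, much like weight decay folds an extra term
    # into the update inside the optimizer.
    def step(self, closure=None):
        loss = super().step(closure)
        with torch.no_grad():
            for group in self.param_groups:
                for p in group["params"]:
                    p.clamp_(min=0.0)  # assumed constraint: nonnegative parameters
        return loss

net = torch.nn.Linear(4, 3)
opt = ProjectedSGD(net.parameters(), lr=0.1)
net(torch.randn(2, 4)).pow(2).mean().backward()
opt.step()  # parameters are now projected onto the constraint set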

I want to apply convex optimization to the output of the linear layer, and add this as a custom layer after the linear layer in the same NN model.
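Something like this sketch is what I have in mind (ConvexProjection is just a placeholder name; the real layer would run the actual convex optimization instead of a clamp):

import torch
import torch.nn as nn

class ConvexProjection(nn.Module):
    # Placeholder: clamps to [0, 1] as a stand-in for a real
    # convex-optimization step applied to the linear layer's output.
    def forward(self, x):
        return x.clamp(0.0, 1.0)

net = nn.Sequential(
    nn.Linear(4, 3),
    ConvexProjection(),  # constraint layer right after the linear layer
)
print(net(torch.randn(2, 4)))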

If that’s the case then you’ll want to use a forward hook (not a forward pre-hook) to cache the output of the linear layer. You can use the same code as above, just changing the forward pre-hook to a forward hook.

Then you can cache the intermediate values within a dictionary (using the layer as a key) and perform constrained optimization in the way you need.
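A minimal sketch of that caching pattern (the toy model and the save_output hook are my own assumptions for illustration):

import torch
import torch.nn as nn

activations = {}  # intermediate outputs, keyed by the layer itself

def save_output(module, inputs, output):
    activations[module] = output  # a forward hook receives the layer's output

net = nn.Sequential(nn.Linear(4, 3), nn.ReLU())  # stand-in for your model
for module in net.modules():
    if isinstance(module, nn.Linear):
        module.register_forward_hook(save_output)

net(torch.randn(2, 4))
print(activations)  # maps each Linear layer to its cached output tensor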