Must I use the forward function in nn.Module?

Sometimes I need to use different subnetworks in one model. For example, sometimes I need the value from a network called ‘critic’, sometimes I need the value from a network called ‘actor’, and sometimes I need both. My question is: can I define my own functions to get these values without using the forward function? Will this approach influence backpropagation?

Here is an example:

import torch.nn as nn

class Model(nn.Module):
    def __init__(self, obs_dim, action_num, device):
        super(Model, self).__init__()
        self.actor = nn.Sequential(
            nn.Linear(obs_dim, action_num)
        )
        self.critic = nn.Sequential(
            nn.Linear(obs_dim, 1)
        )

    def get_value(self, inputs):
        return self.critic(inputs)

    def get_action(self, inputs):
        return self.actor(inputs)

    def get_both(self, inputs):
        return self.actor(inputs), self.critic(inputs)
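
In other words, I would like to call the model like this (the shapes here are just placeholders):

import torch

model = Model(obs_dim=4, action_num=2, device="cpu")
obs = torch.randn(1, 4)

value = model.get_value(obs)          # only the critic
action = model.get_action(obs)        # only the actor
action, value = model.get_both(obs)   # both at once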

I don’t understand the question at all. You want to use subnetworks to get some values without affecting backprop?

Whatever you run contributes to the computational graph, which will then be backpropagated through.
If you need to use subnetworks to obtain values that should not be involved in backprop, you can use the torch.no_grad() context manager.
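
For example (reusing the Model from the question; the shapes are made up):

import torch

model = Model(obs_dim=4, action_num=2, device="cpu")
obs = torch.randn(8, 4)

with torch.no_grad():          # nothing in this block is recorded in the graph
    value = model.get_value(obs)

print(value.requires_grad)     # False: detached from autograd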

The only thing special about the forward method is that you can invoke it by calling the model (i.e. m(*inputs) will call m.forward(*inputs)) and that you get hook processing etc.
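
For example, a forward hook only fires when you call the module, not when you call forward directly:

import torch
import torch.nn as nn

m = nn.Linear(3, 2)
m.register_forward_hook(lambda mod, inp, out: print("hook ran"))

x = torch.randn(1, 3)
m(x)          # prints "hook ran": __call__ runs forward plus hook processing
m.forward(x)  # silent: bypasses the hooks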

As such, there is no reason you cannot provide other entry points. I often have a separate predict method for sequence-generating models or a sample method for density estimation modules.
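
A minimal sketch of what I mean (the Density module, its loc parameter, and the Gaussian-style sample are made up for illustration):

import torch
import torch.nn as nn

class Density(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.loc = nn.Parameter(torch.zeros(dim))

    def forward(self, x):
        # training entry point: m(x) routes through here
        return ((x - self.loc) ** 2).sum(-1)

    def sample(self, n):
        # extra entry point: just an ordinary Python method
        return self.loc + torch.randn(n, self.loc.shape[0])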

Whether it is a good idea or not likely depends on your view of how closely they are coupled. I'm not entirely sure that is the case here. On the other hand, frameworks like fastai like to stick anything and everything related (models, loss functions, learning rate schedules, optimizers, callbacks…) into one class, but not an nn.Module one.

Best regards

Thomas

Thank you for your patient reply, Thomas. I think I got it.

Best wishes

Linda

Thank you for your reply. Well, my question was: if I get values from the subnetworks via self-defined functions instead of the forward function of nn.Module, will that affect the backprop process, e.g., will the subnetworks fail to backprop correctly? But it turns out that it is not necessary to use the forward function.

@tom Regarding your reply: even if you enter a module without using the forward function, at some point you will be calling an nn.Module, at least for the built-in nn.Modules, right?

As far as I understand autograd, to perform inference it would be necessary to disable it, don't you think?

autograd seems completely orthogonal to the headline issue (whether forward is special), but yes, for inference it probably makes sense to use the torch.no_grad() context manager or to set p.requires_grad_(False) for all parameters.
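
For example, a minimal sketch of the two options (model and obs as in the snippets above):

import torch

# Option 1: disable graph recording for a block of code
with torch.no_grad():
    value = model.get_value(obs)

# Option 2: freeze the parameters themselves (persists until re-enabled)
for p in model.parameters():
    p.requires_grad_(False)
action = model.get_action(obs)   # no gradients will flow into the parameters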

Best regards

Thomas