Is it correct to use a module's sub-modules outside of the class?

Suppose I have a Module:

class A(nn.Module):
    ...
    def __init__(...):
        self.word_embedding = nn.Embedding(size, hidden)
    ...

Can I use self.word_embedding outside of class A's forward function? Will this cause any problems with the gradients?

For instance,


if __name__ == '__main__':
    ...
    X = torch.ones(...)
    model = A(...)
    output = model(...)
    X = model.word_embedding(X)
    ...

I know I could move the embedding into a separate module, but I want to keep A intact, with most of the work done in A's forward.

Thanks!

Hi,

No, it will not cause any problem. The gradients will account for every place you use it.
Be careful, though, not to add its parameters to your optimizer twice. Not sure what would happen in that case :slight_smile:
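For instance, a minimal sketch of that pattern (the sizes, the extra Linear layer and the toy loss are made up just to show the gradient flowing through both usages):

import torch
import torch.nn as nn

class A(nn.Module):
    def __init__(self, size, hidden):
        super().__init__()
        self.word_embedding = nn.Embedding(size, hidden)
        self.linear = nn.Linear(hidden, 1)

    def forward(self, x):
        return self.linear(self.word_embedding(x))

model = A(size=10, hidden=4)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)  # embedding is added once

x = torch.randint(0, 10, (3,))
out = model(x)                    # embedding used inside forward
extra = model.word_embedding(x)   # same embedding used outside forward

loss = out.sum() + extra.sum()
loss.backward()
# model.word_embedding.weight.grad now reflects both usages
optimizer.step()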


What if I want to use that embedding in another class? I was thinking of doing something like:

class A(nn.Module):
    ...
    def __init__(...):
        self.word_embedding = nn.Embedding(size, hidden)
    ...
    def forward(...):
        ...

and there is another class:

class B(nn.Module):
    ...
    def __init__(...):
        ...
    def forward(self, word_embedding_layer_of_class_A):
        ...

but since I have defined my modules as:

a = A()
b = B()
a_optim = optim(a.parameters())
b_optim = optim(b.parameters())

I had some doubts about whether this would cause problems with the gradients.

You can do it that way, but B.parameters() won't contain the embedding. Is that what you want?
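Something like this sketch, for example (the layer sizes and the extra Linear inside B are made up; the standalone nn.Embedding stands in for a.word_embedding):

import torch
import torch.nn as nn

class B(nn.Module):
    def __init__(self, hidden):
        super().__init__()
        self.linear = nn.Linear(hidden, 1)

    def forward(self, x, word_embedding_layer):
        # The embedding layer is passed in from A, so it is not
        # registered inside B and won't appear in B.parameters().
        return self.linear(word_embedding_layer(x))

shared_embedding = nn.Embedding(10, 4)   # in practice this is a.word_embedding
b = B(hidden=4)

x = torch.randint(0, 10, (3,))
out = b(x, shared_embedding)

print(any(p is shared_embedding.weight for p in b.parameters()))  # False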


Yes, I want to share the embedding in class A with B. So I think in this case it is sufficient to run the backward once. Therefore I only need to register the embedding in either class A or class B. Am I right?

Yes, running the backward once will be enough.
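Putting it together, one training step could look like this sketch, assuming the A and B sketched above (A owns the embedding, B reuses it; the sizes and the toy loss are arbitrary):

import torch

a = A(size=10, hidden=4)
b = B(hidden=4)
a_optim = torch.optim.SGD(a.parameters(), lr=0.1)  # includes the embedding weight
b_optim = torch.optim.SGD(b.parameters(), lr=0.1)  # does not include it

x = torch.randint(0, 10, (3,))
loss = a(x).sum() + b(x, a.word_embedding).sum()   # toy loss, just for illustration

a_optim.zero_grad()
b_optim.zero_grad()
loss.backward()   # a single backward fills the grads of both a and b
a_optim.step()    # updates the embedding together with the rest of A
b_optim.step()    # updates only B's own parameters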