Consider one particular layer of my outer model. This layer is defined by a custom function that returns an nn.Sequential() made up of 3 blocks. These blocks are defined as separate classes and have their own layers and forward() functions.
I want the output of a layer inside one of these blocks. I can make the block return multiple values (by modifying its forward()), but then how can I make the original model return these values without changing them?
class Model(nn.Module):
    def __init__(self):
        ...
        self.layer = self.custom_layer(params)
        ...

    def custom_layer(self, params):
        return nn.Sequential(Block(a, b, c), Block(a, b, c), Block(a, b, c))

    def forward(self, x):
        ...
class Block(nn.Module):
    def __init__(self, a, b, c):
        ...
        self.conv1 = ...
        self.relu1 = ...

    def forward(self, x):
        ...
        out = self.conv1(out)
        out1 = out
        out = self.relu1(out)
        return out, out1
Basically I want out1, out, and the outputs of a few other layers in the forward() of the block.
Create a tensor in global scope whose shape is the same as out1's. Keep writing out1 into it with in-place operations from inside the block's forward(). This way you won't need to change your forward() function's inputs and outputs.
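A minimal sketch of this idea, assuming a concrete toy shape of (1, 8, 4, 4) and a hypothetical Conv2d/ReLU block (the original thread leaves these as `...`):

```python
import torch
import torch.nn as nn

# Global buffer; its shape must match the intermediate activation out1.
# (1, 8, 4, 4) is an assumption for this toy example.
block2out1 = torch.zeros(1, 8, 4, 4)

class Block(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv1 = nn.Conv2d(8, 8, kernel_size=3, padding=1)
        self.relu1 = nn.ReLU()

    def forward(self, x):
        out = self.conv1(x)
        # Write the pre-activation into the global buffer in place,
        # so forward() can keep returning a single tensor.
        block2out1.copy_(out.detach())
        return self.relu1(out)

model = nn.Sequential(Block(), Block(), Block())
y = model(torch.randn(1, 8, 4, 4))
```

Note that with a single buffer, each block overwrites it in turn, so after the forward pass `block2out1` holds the activation of the last block that ran; you would need one buffer per block (or per layer) you want to capture.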
Could you specify exactly what you mean by “part of the graph”? I will not modify out1 after finding it as shown above, if that is the concern.
Yeah that’s what I meant.
You can just create another global called block2out1 and write the values into that tensor.
Most torch operations have an out argument which specifies where the result should be written.
In your case above, you could write out1.data = out.data. Note that this actually rebinds out1 to out's underlying storage rather than copying the values; if you want an independent copy, use out1.data.copy_(out) instead.
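A small sketch illustrating the difference between rebinding .data and making a true copy (the tensor values here are arbitrary):

```python
import torch

out = torch.tensor([1.0, 2.0, 3.0])

out1 = torch.zeros(3)
out1.data = out.data       # rebinds out1 to out's storage: no copy is made
out[0] = 99.0
# out1[0] is now 99.0 as well, because the two tensors share storage

out2 = torch.zeros(3)
out2.data.copy_(out)       # element-wise copy into out2's own storage
out[1] = -1.0
# out2[1] is unaffected; it keeps the value copied earlier
```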