I don't have a good understanding of how PyTorch's autograd and computational-graph construction work, so I'm not sure whether it is fine to write code like the following:
def forward(self, x):
    x = self.conv(x)
    return x
I thought that, in order to track the gradient of each tensor, the output of the convolution layer would need a name different from "x", but I've seen a lot of code that overwrites the same variable name like this.
Is it still possible to backpropagate the gradient properly all the way to the front part of the model? I mean, what if the input 'x' of the forward function above actually came from another trainable network or something?
If it still works, how is that possible?
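For concreteness, here is a minimal sketch of the situation I mean (the names FrontNet and BackNet are just made up for illustration):

import torch
import torch.nn as nn

class FrontNet(nn.Module):
    # hypothetical "front" network that produces the input x
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(3, 8, kernel_size=3, padding=1)

    def forward(self, x):
        x = self.conv(x)  # reusing the name "x", as in my snippet above
        return x

class BackNet(nn.Module):
    # hypothetical "back" network that consumes x
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(8, 8, kernel_size=3, padding=1)

    def forward(self, x):
        x = self.conv(x)  # again overwriting "x"
        return x

front, back = FrontNet(), BackNet()
x = torch.randn(1, 3, 16, 16)
out = back(front(x))   # x passes through both networks
out.sum().backward()   # does the gradient reach front.conv?
print(front.conv.weight.grad is not None)  # this is what I'm asking about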