Expand Output of Network in Chained Neural Nets

I have a project in which I take the output of one network that I am training and feed it into a pretrained network in evaluation mode, optimizing the output of the second network by training the first. It’s amazing that PyTorch’s autograd allows me to do this; it really is a flexible platform. One problem I am having, however, is that the output of the first network is a 1x10 tensor, e.g. [1,2,3,4,5,6,7,8,9,10]. The input I would like to feed into the second network is a 1x10x13x26 tensor, where each of the 10 channels is a 13x26 “image” filled with the corresponding value from the 1x10 tensor. Of course I could use a for loop to iterate through the 1x10 tensor and build a new input tensor, but that would break autograd. How can I expand the output of my first network to match the required input of the second network while maintaining autograd, and without changing the architecture of either network?

See the usage of the functions `unsqueeze`, `expand`, and `repeat`.
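A minimal sketch of that approach, assuming a 1x10 output tensor (variable names are illustrative): add two trailing singleton dimensions with `unsqueeze`, then broadcast them out with `expand`. All three ops are tracked by autograd, so gradients flow back to the first network.

```python
import torch

# Stand-in for the first network's output: a 1x10 leaf tensor with grad enabled.
out = torch.arange(1.0, 11.0).reshape(1, 10).requires_grad_()

# 1x10 -> 1x10x1x1 -> 1x10x13x26.
# expand() returns a view (no data copy) and is fully differentiable.
expanded = out.unsqueeze(-1).unsqueeze(-1).expand(1, 10, 13, 26)
print(expanded.shape)  # torch.Size([1, 10, 13, 26])

# Sanity check: gradients accumulate from every broadcast copy,
# so each entry of out.grad is 13 * 26 = 338.
expanded.sum().backward()
print(out.grad)
```

`expanded` can then be passed straight into the second (pretrained) network. If that network mutates its input in place, swap `expand` for `repeat` (same shape arguments here), which materializes a real copy instead of a view, at the cost of extra memory.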