Combining Trained Models in PyTorch

The code looks generally OK, but I wouldn't recommend creating new models by passing the child modules to nn.Sequential.
With this approach each submodule is called sequentially, so you lose all functional calls used in the original forward.
For DenseNet, for example, you would lose the F.relu, F.adaptive_avg_pool2d, and torch.flatten calls in its forward.
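A minimal sketch of the problem, using a toy model in place of DenseNet (the module and attribute names here are made up for illustration): the functional F.relu in the forward is not a child module, so rewrapping the children in nn.Sequential silently drops it.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Toy stand-in for a model like DenseNet whose forward uses functional
# calls (here F.relu) that are not registered as child modules.
class Toy(nn.Module):
    def __init__(self):
        super().__init__()
        self.lin1 = nn.Linear(4, 4)
        self.lin2 = nn.Linear(4, 4)

    def forward(self, x):
        x = self.lin1(x)
        x = F.relu(x)  # functional call: invisible to .children()
        return self.lin2(x)

model = Toy()
# Rewrapping the children drops the F.relu from the computation.
seq = nn.Sequential(*model.children())

# Make lin1's output negative for a zero input, so the missing relu
# provably changes the result.
with torch.no_grad():
    model.lin1.bias.fill_(-10.0)

x = torch.zeros(2, 4)
print(torch.allclose(model(x), seq(x)))  # False: relu was silently dropped
```

The two models hold exactly the same parameters; only the forward logic differs, which is easy to miss because nn.Sequential raises no error.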

If you just want to remove the last layer, replace it with nn.Identity.
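For example, with a small toy classifier (for a torchvision ResNet the attribute would be model.fc, for DenseNet model.classifier; the names below are made up for illustration):

```python
import torch
import torch.nn as nn

class Net(nn.Module):
    def __init__(self):
        super().__init__()
        self.backbone = nn.Linear(8, 16)
        self.fc = nn.Linear(16, 10)  # final classification layer

    def forward(self, x):
        return self.fc(torch.relu(self.backbone(x)))

model = Net()
model.fc = nn.Identity()  # forward now returns the 16-dim features

feats = model(torch.randn(2, 8))
print(feats.shape)  # torch.Size([2, 16])
```

Since nn.Identity just returns its input, the rest of the forward stays untouched, including any functional calls.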
On the other hand, if you want to manipulate more layers or the forward pass in general, I would recommend creating a custom model that derives from the corresponding torchvision model and overrides the forward appropriately.
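A sketch of that pattern, with a small Base class standing in for the torchvision model (in practice you would subclass e.g. torchvision.models.DenseNet directly; Base and its attributes are assumptions for this example):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Stand-in for a torchvision model class.
class Base(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Linear(8, 16)
        self.classifier = nn.Linear(16, 10)

    def forward(self, x):
        out = F.relu(self.features(x))
        return self.classifier(out)

# Derived model: reuse the parent's layers and parameters,
# but change the forward, here to return features and logits.
class MyModel(Base):
    def forward(self, x):
        feats = F.relu(self.features(x))
        logits = self.classifier(feats)
        return feats, logits

model = MyModel()
feats, logits = model(torch.randn(2, 8))
print(feats.shape, logits.shape)  # torch.Size([2, 16]) torch.Size([2, 10])
```

Because only forward is overridden, the parent's __init__ (and, for a real torchvision model, any pretrained state_dict) still applies unchanged.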