I think the cleanest approach would be to write a custom nn.Module and pass these layers to it, or to re-wrap them into e.g. a new nn.Sequential container. Note that the latter approach would need some additional changes, as the functional API calls from the original model’s forward method would be missing. In particular, I expect a flattening operation (e.g. via x = x.view(x.size(0), -1)) was used in the forward between the MaxPool2d layer and the first Linear layer.
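To illustrate, here is a minimal sketch of the nn.Sequential re-wrap. The original model below is a made-up example (your actual layer shapes will differ): its forward uses a functional relu and view, both of which are lost when you collect modules via children(), so they have to be re-inserted explicitly (nn.ReLU and nn.Flatten):

```python
import torch
import torch.nn as nn

# Hypothetical original model; the exact layers/shapes are assumptions.
class Original(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(3, 8, 3, padding=1)
        self.pool = nn.MaxPool2d(2)
        self.fc = nn.Linear(8 * 16 * 16, 10)

    def forward(self, x):
        x = self.pool(torch.relu(self.conv(x)))  # functional relu: not a child module
        x = x.view(x.size(0), -1)                # functional flatten: also not a child
        return self.fc(x)

orig = Original()
layers = list(orig.children())  # [conv, pool, fc] -- relu and flatten are missing

# Re-insert the missing ops as modules when rebuilding the container.
new_model = nn.Sequential(layers[0], nn.ReLU(), layers[1], nn.Flatten(), layers[2])

x = torch.randn(2, 3, 32, 32)
out = new_model(x)
print(out.shape)  # torch.Size([2, 10])
```

Without the nn.Flatten, the 4-dimensional pooling output would hit the Linear layer directly and raise a shape mismatch error, which is the failure mode described below.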
Your approach should also work, assuming children() returns all modules and that they are indeed applied sequentially in your original forward method.
As previously mentioned, at least the flattening operation seems to be missing, so I would guess this is what's causing the error you are seeing.