Hi,
Maybe I’m missing something obvious, but there doesn’t seem to be an “append()” method for nn.Sequential. It would be handy when the layers of the Sequential cannot all be added at once.
Alternatively, it would be just as good if, after first adding all the layers I need to a ModuleList, there were a method for directly converting all the modules in a ModuleList to a Sequential.
Thanks in advance!
You can first construct a Python list of nn.Modules and unpack it into an nn.Sequential:
import torch.nn as nn
modules = []
modules.append(nn.Linear(10, 10))
modules.append(nn.Linear(10, 10))
sequential = nn.Sequential(*modules)
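The same unpacking trick also covers the ModuleList case from the original question, since a ModuleList is iterable. A small sketch (layer sizes are made up for illustration):

```python
import torch
import torch.nn as nn

# Collect layers in a ModuleList so they are registered as submodules
layers = nn.ModuleList([nn.Linear(10, 10), nn.ReLU(), nn.Linear(10, 5)])

# A ModuleList is iterable, so it can be unpacked into a Sequential
model = nn.Sequential(*layers)

x = torch.randn(2, 10)
out = model(x)
print(out.shape)  # torch.Size([2, 5])
```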
Ah that’s neat, thanks
Thanks a lot! That’s really helpful!
Thanks, worked like a charm.
Hi,
I’ve had a similar issue and this discussion was very helpful.
I do have another question. I would like to:
- extract a copy of the current state of the tensor to a “skip connection” list in a U-shaped CNN, and
- adapt the tensor’s (that is, the input’s) shape as I go (I cannot determine this ahead of time, because I can’t know the number of convolutions in advance).
How can I do this while I’m adding modules to the list? I would like to make the list as modular as I can, and I cannot skip the “append” stage in a for loop.
Thank you!
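One common pattern for this (a sketch only, with made-up depths and channel counts) is to build the conv blocks in a loop while tracking the current channel count in a plain variable, and to collect the skip tensors in the forward pass rather than at construction time:

```python
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self, in_channels=3, depth=4, base=16):
        super().__init__()
        self.blocks = nn.ModuleList()
        ch = in_channels
        # Append an arbitrary number of conv blocks, adapting the
        # channel count as we go (depth need not be known by the caller)
        for i in range(depth):
            out_ch = base * (2 ** i)
            self.blocks.append(nn.Sequential(
                nn.Conv2d(ch, out_ch, kernel_size=3, padding=1),
                nn.ReLU(),
            ))
            ch = out_ch
        self.pool = nn.MaxPool2d(2)

    def forward(self, x):
        skips = []
        for block in self.blocks:
            x = block(x)
            skips.append(x)  # saved for the decoder's skip connections
            x = self.pool(x)
        return x, skips

enc = Encoder()
x = torch.randn(1, 3, 32, 32)
out, skips = enc(x)
print(out.shape)                 # torch.Size([1, 128, 2, 2])
print([s.shape for s in skips])  # one skip tensor per block
```

The point is that the list of modules stays fully modular (everything goes through `append` in the loop), while the skip copies are a runtime concern handled in `forward`, so nothing about the tensor shapes has to be fixed up front.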
For people who don’t want to update their CUDA or PyTorch to a version (e.g. v1.13.0) where Sequential has an append() method: you can comment out your local Sequential code and copy the code from “pytorch/torch/nn/modules/container.py at v1.13.0 · pytorch/pytorch · GitHub” into its place, and then the append() method will work.
But remember, do not copy the Sequential code from the latest PyTorch version, because it needs a newer Python version to support some new features.
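For reference, on PyTorch 1.13 or newer no patching is needed and append() can be used directly (a quick sketch with arbitrary layer sizes):

```python
import torch
import torch.nn as nn

# Requires PyTorch >= 1.13, where nn.Sequential gained append()
model = nn.Sequential(nn.Linear(10, 10))
model.append(nn.ReLU())
model.append(nn.Linear(10, 5))

x = torch.randn(2, 10)
print(model(x).shape)  # torch.Size([2, 5])
print(len(model))      # 3
```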