I have a pretrained model:
ModuleList(
  (0): Sequential(
    (0): Conv1d(1, 512, kernel_size=(10,), stride=(5,), bias=False)
    (1): Dropout(p=0.0, inplace=False)
    (2): Fp32GroupNorm(1, 512, eps=1e-05, affine=True)
    (3): ReLU()
  )
  (1): Sequential(
    (0): Conv1d(512, 512, kernel_size=(3,), stride=(2,), bias=False)
    (1): Dropout(p=0.0, inplace=False)
    (2): Fp32GroupNorm(1, 512, eps=1e-05, affine=True)
    (3): ReLU()
  )
  (2): Sequential(
    (0): Conv1d(512, 512, kernel_size=(3,), stride=(2,), bias=False)
    (1): Dropout(p=0.0, inplace=False)
    (2): Fp32GroupNorm(1, 512, eps=1e-05, affine=True)
    (3): ReLU()
  )
  (3): Sequential(
    (0): Conv1d(512, 512, kernel_size=(3,), stride=(2,), bias=False)
    (1): Dropout(p=0.0, inplace=False)
    (2): Fp32GroupNorm(1, 512, eps=1e-05, affine=True)
    (3): ReLU()
  )
  (4): Sequential(
    (0): Conv1d(512, 512, kernel_size=(3,), stride=(2,), bias=False)
    (1): Dropout(p=0.0, inplace=False)
    (2): Fp32GroupNorm(1, 512, eps=1e-05, affine=True)
    (3): ReLU()
  )
  (5): Sequential(
    (0): Conv1d(512, 512, kernel_size=(2,), stride=(2,), bias=False)
    (1): Dropout(p=0.0, inplace=False)
    (2): Fp32GroupNorm(1, 512, eps=1e-05, affine=True)
    (3): ReLU()
  )
  (6): Sequential(
    (0): Conv1d(512, 512, kernel_size=(2,), stride=(2,), bias=False)
    (1): Dropout(p=0.0, inplace=False)
    (2): Fp32GroupNorm(1, 512, eps=1e-05, affine=True)
    (3): ReLU()
  )
)
Can I reuse the same weights to invert this network with ConvTranspose1d layers? Or do I need to create a separate inverse network and train it from scratch?
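Concretely, here is a sketch of what I mean by reusing the weights. It mirrors each Conv1d with a ConvTranspose1d (same kernel size and stride, in/out channels swapped, applied in reverse order) and copies the weight tensors over. This is only an illustration: the encoder below is a stand-in I built from the printed specs (plain nn.GroupNorm in place of fairseq's Fp32GroupNorm), not the actual pretrained model, and I'm not claiming the result is a true inverse, since ReLU and the strided convolutions discard information.

```python
import torch
import torch.nn as nn

# Layer specs read off the printed model: (in_ch, out_ch, kernel, stride)
specs = [(1, 512, 10, 5)] + [(512, 512, 3, 2)] * 4 + [(512, 512, 2, 2)] * 2

# Stand-in encoder matching the printout (nn.GroupNorm replaces Fp32GroupNorm).
encoder = nn.ModuleList(
    nn.Sequential(
        nn.Conv1d(i, o, k, stride=s, bias=False),
        nn.GroupNorm(1, o),
        nn.ReLU(),
    )
    for i, o, k, s in specs
)

# Mirror: one ConvTranspose1d per Conv1d, traversed in reverse order.
decoder = nn.ModuleList(
    nn.ConvTranspose1d(o, i, k, stride=s, bias=False)
    for i, o, k, s in reversed(specs)
)

# Conv1d stores weight as (out_ch, in_ch, k); ConvTranspose1d as (in_ch, out_ch, k).
# With the channel arguments swapped as above, the shapes line up exactly,
# so the pretrained tensors can be copied without any transpose.
with torch.no_grad():
    for dec, enc_block in zip(decoder, list(encoder)[::-1]):
        dec.weight.copy_(enc_block[0].weight)

# Round-trip shape check on 1 s of 16 kHz audio.
x = torch.randn(1, 1, 16000)
z = x
for block in encoder:
    z = block(z)
y = z
for dec in decoder:
    y = dec(y)
print(z.shape, y.shape)
```

Note that even the output length does not round-trip exactly (16000 samples in, 15760 out), because the strided convolutions floor their output lengths; `output_padding` on the transposed convolutions can compensate for that, but the reconstruction itself would presumably still need training.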