Hello everyone.
I want to create a transposed-convolution upsampling layer like this:
class TransposeX2(nn.Sequential):
    def __init__(self, in_channels, out_channels):
        layers = [
            nn.ConvTranspose2d(in_channels, out_channels, kernel_size=4, stride=2, padding=1, groups=out_channels),
            nn.BatchNorm2d(out_channels),
            nn.ReLU(),
        ]
        super().__init__(*layers)
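(For reference, with fixed channel counts the layer works as expected; the channel numbers below are just an example, and note that `in_channels` must be divisible by `groups=out_channels`:)

```python
import torch
import torch.nn as nn

class TransposeX2(nn.Sequential):
    def __init__(self, in_channels, out_channels):
        layers = [
            nn.ConvTranspose2d(in_channels, out_channels, kernel_size=4,
                               stride=2, padding=1, groups=out_channels),
            nn.BatchNorm2d(out_channels),
            nn.ReLU(),
        ]
        super().__init__(*layers)

up = TransposeX2(8, 4)  # 8 is divisible by groups=4, so this is valid
x = torch.randn(1, 8, 16, 16)
print(up(x).shape)  # torch.Size([1, 4, 32, 32]) -- spatial size doubled
```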
But I also want to use this layer multiple times. What should I do to avoid creating a new layer for every upsample operation, like this:
self.upsample1 = TransposeX2(4, 8)
self.upsample2 = TransposeX2(8, 8)

def forward(self, x):
    a = self.upsample1(x)
    b = self.upsample2(a)
Instead, I want to create the upsample layer once and then reuse it every time, like this:
self.upsample = TransposeX2()

def forward(self, x):
    a = self.upsample(x)
    b = self.upsample(a)
Is it possible to create the layer without specifying the input channels, so that it accepts any number of input channels?
I’ve tried to do this as shown below, but I think that’s not a proper way to do it, and the ONNX conversion also shows errors when I create layers like this:
def forward(self, x):
    ic = x.shape[1]
    oc = x.shape[1] // 2
    x = nn.ConvTranspose2d(ic, oc, kernel_size=4, stride=2, padding=1, groups=oc).to(x.device)(x)
    x = nn.BatchNorm2d(oc).to(x.device)(x)
    x = nn.ReLU().to(x.device)(x)
    return x
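A self-contained sketch of the same pattern, showing why I think it’s wrong: the modules are re-created inside the call, so they get fresh random weights every time, and nothing is ever reused or trained:

```python
import torch
import torch.nn as nn

def upsample_dynamic(x):
    # Layers are re-created on every call, each time with new random weights.
    ic = x.shape[1]
    oc = ic // 2
    conv = nn.ConvTranspose2d(ic, oc, kernel_size=4, stride=2, padding=1, groups=oc)
    return nn.ReLU()(nn.BatchNorm2d(oc)(conv(x)))

x = torch.randn(1, 8, 4, 4)
out1 = upsample_dynamic(x)
out2 = upsample_dynamic(x)
# Same input, but the weights differ between calls, so the outputs differ.
print(torch.allclose(out1, out2))
```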
I’ve also tried the functional API F.conv_transpose2d, but it does weird stuff (or I did something wrong), and the ONNX conversion does not accept it either.
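A minimal sketch of roughly what I tried with the functional API (the weight shape here is my reconstruction of the attempt, not my exact code):

```python
import torch
import torch.nn.functional as F

x = torch.randn(1, 8, 8, 8)
ic = x.shape[1]
oc = ic // 2
# ConvTranspose2d weight shape is (in_channels, out_channels // groups, kH, kW);
# with groups=oc that is (ic, 1, 4, 4).
weight = torch.randn(ic, 1, 4, 4)
out = F.conv_transpose2d(x, weight, stride=2, padding=1, groups=oc)
print(out.shape)  # torch.Size([1, 4, 16, 16])
```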
Thank you for your help!