Replacing nn::Sequential

Hi all,
even though I use C++ and libtorch, the question applies to PyTorch in the same way. I changed the GANomaly example in two ways.
First I changed the Conv2d layers to 3x3 filters and replaced ConvTranspose2d with nearest-neighbor upsampling followed by a 3x3 Conv2d. This gives good results.

// old: 4x4 conv, stride 2
//sq->push_back(nn::Conv2d(nn::Conv2dOptions(in_nc, out_nc, 4).stride(2).padding(1).bias(bias)));
// new: 3x3 conv, stride 2, replicate padding
sq->push_back(nn::Conv2d(nn::Conv2dOptions(in_nc, out_nc, 3).stride(2).padding(1).padding_mode(torch::kReplicate).bias(bias)));

// old: 4x4 transposed conv, stride 2
//sq->push_back(nn::ConvTranspose2d(nn::ConvTranspose2dOptions(in_nc, out_nc, 4).stride(2).padding(1).bias(bias)));
// new: nearest-neighbor x2 upsample followed by a 3x3 conv, stride 1
sq->push_back(nn::Upsample(torch::nn::UpsampleOptions().mode(torch::kNearest).scale_factor(std::vector<double>({ 2, 2 }))));
sq->push_back(nn::Conv2d(nn::Conv2dOptions(in_nc, out_nc, 3).stride(1).padding(1).padding_mode(torch::kReplicate).bias(bias)));
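
For context, here is what the two replacement blocks look like as a small self-contained sketch (the helper names make_down_block and make_up_block are mine, not from the GANomaly example):

#include <torch/torch.h>
using namespace torch;

// Downsampling block: 3x3 conv, stride 2, replicate padding
// (replaces the original 4x4 stride-2 Conv2d).
nn::Sequential make_down_block(int64_t in_nc, int64_t out_nc, bool bias = false) {
    return nn::Sequential(
        nn::Conv2d(nn::Conv2dOptions(in_nc, out_nc, 3)
                       .stride(2).padding(1)
                       .padding_mode(torch::kReplicate).bias(bias)));
}

// Upsampling block: nearest-neighbor x2 followed by a 3x3 stride-1 conv
// (replaces the original 4x4 stride-2 ConvTranspose2d).
nn::Sequential make_up_block(int64_t in_nc, int64_t out_nc, bool bias = false) {
    return nn::Sequential(
        nn::Upsample(nn::UpsampleOptions()
                         .mode(torch::kNearest)
                         .scale_factor(std::vector<double>({2, 2}))),
        nn::Conv2d(nn::Conv2dOptions(in_nc, out_nc, 3)
                       .stride(1).padding(1)
                       .padding_mode(torch::kReplicate).bias(bias)));
}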

Next I removed the nn::Sequential in EncoderImpl.
The constructor is now:

EncoderImpl::EncoderImpl(po::variables_map &vm)
    : conv1(nn::Conv2dOptions(1, NGEF, 3).stride(2).padding(1).padding_mode(torch::kReplicate).bias(false)),
      conv2(nn::Conv2dOptions(NGEF, NGEF * 2, 3).stride(2).padding(1).padding_mode(torch::kReplicate).bias(false)),
      bn2(torch::nn::BatchNorm2d(NGEF * 2)),
      conv3(nn::Conv2dOptions
....

And forward looks like:

torch::Tensor encConv1 = conv1->forward(x);
    torch::Tensor encLRelu1 = torch::leaky_relu(encConv1, 0.2);

    torch::Tensor encConv2 = conv2->forward(encLRelu1);
    torch::Tensor encBN2 = bn2->forward(encConv2);
    torch::Tensor encLRelu2 = torch::leaky_relu(encBN2, 0.2);
....

eval() and train() are set for every Conv2d and BatchNorm2d, and each layer is registered with register_module.
This works fine, converges, and gives good results.
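
One note on this: as far as I know, once every layer is registered via register_module, calling train()/eval() on the parent module already propagates to the registered children, so the per-layer calls shouldn't be strictly necessary. For reference, a minimal sketch of the unrolled encoder (NGEF = 64 is an assumed value; the vm argument is dropped for brevity):

#include <torch/torch.h>
using namespace torch;

constexpr int64_t NGEF = 64; // assumed; taken from vm in the real code

struct EncoderImpl : nn::Module {
    nn::Conv2d conv1, conv2;
    nn::BatchNorm2d bn2;

    EncoderImpl()
        : conv1(nn::Conv2dOptions(1, NGEF, 3).stride(2).padding(1)
                    .padding_mode(torch::kReplicate).bias(false)),
          conv2(nn::Conv2dOptions(NGEF, NGEF * 2, 3).stride(2).padding(1)
                    .padding_mode(torch::kReplicate).bias(false)),
          bn2(NGEF * 2) {
        // Registration tracks parameters and lets train()/eval() recurse.
        register_module("conv1", conv1);
        register_module("conv2", conv2);
        register_module("bn2", bn2);
    }

    torch::Tensor forward(torch::Tensor x) {
        x = torch::leaky_relu(conv1->forward(x), 0.2);
        x = torch::leaky_relu(bn2->forward(conv2->forward(x)), 0.2);
        return x; // remaining stages omitted
    }
};
TORCH_MODULE(Encoder);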

BUT!!

Doing this for DecoderImpl doesn’t work. Constructor:

DecoderImpl(po::variables_map &vm)
    : conv1(nn::Conv2dOptions(NGEF * 8, NGEF * 8, 3).stride(1).padding(1).padding_mode(torch::kReplicate).bias(false)),
      bn1(torch::nn::BatchNorm2d(NGEF * 8)),
      conv2(nn::Conv2dOptions(NGEF * 8, NGEF * 8, 3).stride(1).padding(1).padding_mode(torch::kReplicate).bias(false)),
      bn2(torch::nn::BatchNorm2d(NGEF * 8)),
...

Forward:

int64_t batch = z.sizes()[0];
    int64_t channels = z.sizes()[1];
    int64_t height = z.sizes()[2]; // NCHW: index 2 is height
    int64_t width = z.sizes()[3];  // NCHW: index 3 is width

    torch::Tensor up1 = torch::upsample_nearest2d(z, c10::IntArrayRef{ height * 2, width * 2 });
    torch::Tensor decConv1 = conv1->forward(up1);
    torch::Tensor decBN1 = bn1->forward(decConv1);
    torch::Tensor decRelu1 = torch::relu(decBN1); // ReLU on the batchnorm output, not the raw conv output
....
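
I assumed the functional call is equivalent to the nn::Upsample module as long as the target size really is double the current tensor, e.g.:

// Two equivalent ways to double H and W with nearest-neighbor upsampling:
torch::Tensor a = torch::upsample_nearest2d(z, {2 * z.size(2), 2 * z.size(3)});
torch::Tensor b = torch::nn::functional::interpolate(
    z, torch::nn::functional::InterpolateFuncOptions()
           .scale_factor(std::vector<double>({2, 2}))
           .mode(torch::kNearest));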

Has anybody replaced nn::Sequential and run into problems? Is there any code running inside the sequence that is not easily visible?

Many thanks for your help.

I got it.

sq->push_back(nn::Upsample(torch::nn::UpsampleOptions().mode(torch::kNearest).scale_factor(std::vector<double>({ 2,2 }))));

was replaced in forward by

torch::upsample_nearest2d(decRelu2, c10::IntArrayRef{ (int)width * 8,(int)height * 8 });

That was wrong. I made a class member for the upsampling, like for the conv and batchnorm layers, and initialized it in the constructor:

 : up1(torch::nn::UpsampleOptions().mode(torch::kNearest).scale_factor(std::vector<double>({ 2,2 }))),
    conv1(nn::Conv2dOptions(NGEF * 8, NGEF * 8, 3).stride(1).padding(1).padding_mode(torch::kReplicate).bias(false)),
    bn1(torch::nn::BatchNorm2d(NGEF * 8)),
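
and used it in forward instead of the functional call (a short sketch; decUp is my own name):

torch::Tensor decUp = up1->forward(z); // scale_factor 2 doubles the current H and W
torch::Tensor decConv1 = conv1->forward(decUp);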

I added it to the eval() and train() handling like the other layers, and it works.
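
Putting it together, a minimal version of one fixed decoder stage could look like this (a sketch reusing the assumed NGEF constant and using namespace torch from the encoder sketch above; the remaining stages are omitted):

struct DecoderImpl : nn::Module {
    nn::Upsample up1;
    nn::Conv2d conv1;
    nn::BatchNorm2d bn1;

    DecoderImpl()
        : up1(nn::UpsampleOptions().mode(torch::kNearest)
                  .scale_factor(std::vector<double>({2, 2}))),
          conv1(nn::Conv2dOptions(NGEF * 8, NGEF * 8, 3).stride(1).padding(1)
                    .padding_mode(torch::kReplicate).bias(false)),
          bn1(NGEF * 8) {
        register_module("up1", up1);
        register_module("conv1", conv1);
        register_module("bn1", bn1);
    }

    torch::Tensor forward(torch::Tensor z) {
        torch::Tensor up = up1->forward(z); // doubles H and W of the current tensor
        return torch::relu(bn1->forward(conv1->forward(up)));
    }
};
TORCH_MODULE(Decoder);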

Hope it helps.