PyTorch: Different inputs within a stacking ensemble & the Apex library

I will briefly describe my project:

+ 2 inputs for 2 networks: an image (3x256x256) and 20 metadata features (one tensor of 20 numbers)
+ Stacking ensemble
This is my model:

import timm
import torch
import torch.nn as nn


class Swish_Module(nn.Module):
    # Standard Swish activation: x * sigmoid(x)
    def forward(self, x):
        return x * torch.sigmoid(x)


class MetaMelanoma(nn.Module):
    def __init__(self, out_dim=9, n_meta_features=0, n_meta_dim=[512, 128]):
        super(MetaMelanoma, self).__init__()
        self.enet = timm.create_model('tf_efficientnet_b0_ns', pretrained=True)
        self.n_meta_features = n_meta_features
        self.dropouts = nn.ModuleList([nn.Dropout(0.5) for _ in range(5)])
        in_ch = self.enet.classifier.in_features
        if n_meta_features > 0:
            self.meta = nn.Sequential(
                nn.Linear(n_meta_features, n_meta_dim[0]),
                nn.BatchNorm1d(n_meta_dim[0]),
                Swish_Module(),
                nn.Dropout(p=0.3),
                nn.Linear(n_meta_dim[0], n_meta_dim[1]),
                nn.BatchNorm1d(n_meta_dim[1]),
                Swish_Module(),
                nn.Linear(n_meta_dim[1], 9),
            )
            # the meta head adds 9 features to the image features before the classifier
            in_ch += 9
        self.myfc = nn.Linear(in_ch, out_dim)
        self.enet.classifier = nn.Identity()

    def extract(self, x):
        return self.enet(x)

    def forward(self, x, x_meta):
        x = self.extract(x).squeeze(-1).squeeze(-1)
        if self.n_meta_features > 0:
            x_meta = self.meta(x_meta)
            x = torch.cat((x, x_meta), dim=1)
        # multi-sample dropout: average the classifier output over 5 dropout masks
        for i, dropout in enumerate(self.dropouts):
            if i == 0:
                out = self.myfc(dropout(x))
            else:
                out += self.myfc(dropout(x))
        out /= len(self.dropouts)

        return out
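For reference, here is a minimal sanity check of the two-input forward pass; the shapes follow the project description above, and the random tensors are just placeholders:

model = MetaMelanoma(out_dim=9, n_meta_features=20)
model.eval()

images = torch.randn(4, 3, 256, 256)  # batch of 4 images
meta = torch.randn(4, 20)             # batch of 4 metadata vectors

with torch.no_grad():
    logits = model(images, meta)      # shape: (4, 9)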

In order to train the network, I need NVIDIA's Apex library for the optimizer and the loss scaling.
Problem: to install Apex, we need an older CUDA version that matches the PyTorch build. However, I am using a server from my school, so I can't change the CUDA version. Does anyone have a solution for this, maybe an equivalent library or procedure?

apex.amp is deprecated, so use the native mixed-precision training utility via torch.cuda.amp instead. You can find the examples here.
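As a minimal sketch of a mixed-precision training step with torch.cuda.amp (the model, optimizer, criterion, and train_loader below are placeholders for your own objects):

import torch

scaler = torch.cuda.amp.GradScaler()

for images, meta, targets in train_loader:  # placeholder DataLoader
    images, meta, targets = images.cuda(), meta.cuda(), targets.cuda()
    optimizer.zero_grad()

    # run the forward pass and loss computation in mixed precision
    with torch.cuda.amp.autocast():
        logits = model(images, meta)
        loss = criterion(logits, targets)

    # scale the loss, backpropagate, and update the weights
    scaler.scale(loss).backward()
    scaler.step(optimizer)
    scaler.update()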


Thanks for sharing this info, it is useful. Keep it up.


Thank you very much, that was the solution for me. I searched for someone who had the same model, but unfortunately I am running into some problems with torch.cuda.amp. Could you suggest some projects using different inputs in the same model?

What kind of issues are you seeing with torch.cuda.amp?
Since this utility is built into PyTorch, you don't have to build any extensions manually with a locally installed CUDA toolkit, as described in your first post.

torch.amp provides convenience methods for mixed precision, where some operations use the torch.float32 (float) datatype and other operations use a lower-precision floating-point datatype (lower_precision_fp): torch.float16 (half) or torch.bfloat16.
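As a rough sketch (assuming a recent PyTorch where torch.autocast accepts device_type and dtype arguments), a multi-input forward pass such as yours runs under autocast without any changes to the model; the tensors below are placeholders:

import torch

model = MetaMelanoma(out_dim=9, n_meta_features=20).cuda()
images = torch.randn(8, 3, 256, 256, device='cuda')  # placeholder image batch
meta = torch.randn(8, 20, device='cuda')              # placeholder metadata batch

# float16 is the usual choice on CUDA; bfloat16 works on hardware that supports it
with torch.autocast(device_type='cuda', dtype=torch.float16):
    logits = model(images, meta)

print(logits.dtype)  # torch.float16 -- the final linear layer ran in half precision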

Thanks a lot. I will try it and let you know here how it went.

Did it work for you? If yes, could you also help me with this? Thanks in advance.

I can help you with this; let me know if you need my assistance. I would be more than happy to help you out.