I have a neural network (see the picture below) and a small dataset of 66 samples, which I split into 44 for training and 22 for testing. I trained with binary_cross_entropy, but after training the AUC is 0.5. I've tried other loss functions, and nothing changed.
The dataset is tiny, and I don't know which layers your model contains.
Could you post the model definition here so that we can check for obvious issues?
here is the model:

```python
n_feature = 9
n_genes = 691

def __init__(self, n_feature, n_genes, n_output):
    self.nn = torch.nn.Sequential(torch.nn.Linear(n_genes, 1))
    self.final = torch.nn.Sequential(torch.nn.Linear(n_feature, 1))
    self.predict1 = torch.nn.Sigmoid()

def forward(self, x):
    o1 = self.nn(x)
    o = self.final(o1.squeeze())
    w = self.predict1(o)
    return w
```
My case is binary classification. What kind of loss function is appropriate for the task?
Your model won't work as posted, because you are hitting a shape mismatch between the two linear layers: the out_features of one linear layer are used as the in_features of the next one (in the common use case). So remove the squeeze in forward and use:

```python
self.nn = torch.nn.Sequential(torch.nn.Linear(n_genes, n_feature))
```
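Putting that fix together, here is a minimal sketch of a working version (the class name `Net` and the dummy batch are my own assumptions, not from your post). It also addresses the loss question: for binary classification, `nn.BCEWithLogitsLoss` applied to a single raw output logit is the common choice, so the final sigmoid is dropped from the model:

```python
import torch

n_feature = 9
n_genes = 691

class Net(torch.nn.Module):  # hypothetical class name for illustration
    def __init__(self, n_genes, n_feature, n_output):
        super().__init__()
        # out_features of the first layer now match in_features of the next
        self.nn = torch.nn.Sequential(torch.nn.Linear(n_genes, n_feature))
        self.final = torch.nn.Sequential(torch.nn.Linear(n_feature, n_output))

    def forward(self, x):
        o1 = self.nn(x)
        # no squeeze needed; return raw logits for BCEWithLogitsLoss
        return self.final(o1)

model = Net(n_genes, n_feature, n_output=1)
criterion = torch.nn.BCEWithLogitsLoss()  # applies the sigmoid internally

x = torch.randn(4, n_genes)                    # dummy batch of 4 samples
target = torch.randint(0, 2, (4, 1)).float()   # binary targets, shape (4, 1)
loss = criterion(model(x), target)
loss.backward()
```

At evaluation time you can still apply `torch.sigmoid` to the model output to get probabilities for the AUC computation; only the loss needs the raw logits.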