The CNN problem

When I tested with softmax, these are the results of the “Fully-Connected” layer and the softmax:

with torch.no_grad():
    output = model(val_x.float(), val_x1.float())
    val_y = val_y.to(device)
    softmax = torch.exp(output).cpu()
    print("softmax", softmax)
    prob = list(softmax.numpy())
    print("softmax de test", prob)
    predictions = np.argmax(prob, axis=1)
    print("prediction test", predictions)
    print('Validation accuracy test: {:.4f}%'.format(float(accuracy_score(val_y, predictions)) * 100))

[screenshot of the printed “Fully-Connected” and softmax output]

But the softmax values are not between [0, 1]. Where is the problem? The accuracy is 70.52%.

Based on the output of “Fully-Connected” I would guess your model returns logits, so you should call F.softmax on the output to get the probabilities.
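For example (a minimal sketch, where a random logits tensor stands in for your model output):

import torch
import torch.nn.functional as F

logits = torch.randn(4, 2)        # stand-in for the raw model output (logits)
probs = F.softmax(logits, dim=1)  # normalize over the class dimension
print(probs)                      # every value lies in [0, 1]
print(probs.sum(dim=1))           # each row sums to 1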

@ptrblck it’s the same result:

with torch.no_grad():
    output = model(val_x.float(), val_x1.float())
    val_y = val_y.to(device)
    softmax = torch.exp(output).cpu()
    softmax = F.softmax(softmax, dim=1)
    print("softmax", softmax)
    prob = list(softmax.numpy())
    print("softmax de test", prob)
    predictions = np.argmax(prob, axis=1)
    print('Validation accuracy test: {:.4f}%'.format(float(accuracy_score(val_y, predictions)) * 100))

The softmax values are still not between [0, 1].

How did you check the min. and max. values of the softmax?
If you have used:

softmax= F.softmax(softmax, dim = 1) 
print(softmax.min(), softmax.max())

and the values are still out-of-bounds, please post an executable code snippet to reproduce it.

@ptrblck why are the results of the softmax in two columns?
Like this:
softmax de test [array([0.35895655, 0.6410435 ], dtype=float32), array([0.5, 0.5], dtype=float32), array([0.38063172, 0.61936826], dtype=float32), array([0.30245292, 0.6975471 ], dtype=float32), array([0.00381802, 0.99618196], dtype=float32),…, array([0.5064734 , 0.49352652], dtype=float32).

The softmax operation normalizes the specified dim so that it sums to 1, and the results can be interpreted as probabilities.
In your case the two values correspond to the probabilities for class0 and class1, and their sum is 1.
Take a look at the Softmax Wikipedia article (or any other resource) for more information.
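For example (a minimal sketch with two logits per sample):

import torch
import torch.nn.functional as F

logits = torch.tensor([[1.2, -0.3],
                       [0.0,  0.0]])
probs = F.softmax(logits, dim=1)  # one probability column per class
print(probs)
print(probs.sum(dim=1))           # tensor([1., 1.]) -- each row sums to 1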

@ptrblck thanks for your reply.

Just for information: when I tested the network with 8 convolutional layers and 2 pooling layers, the results are in this form:

[screenshot of the raw output values]

I haven’t shown all the values.
So I can’t display the results as an image?

I think visualizing tensors and arrays was already discussed in this thread.

I don’t know what shape the tensor in the current screenshot has, but as already described you will be able to visualize tensors using plt.imshow as long as they have a valid image shape.
I’m also unsure why the values are again negative, but assume you are not using the softmax operation anymore.
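For example (a minimal sketch, assuming the activation has a [channels, height, width] shape; a random tensor stands in for it here):

import torch
import matplotlib.pyplot as plt

act = torch.randn(16, 28, 28)            # stand-in activation map [channels, H, W]
plt.imshow(act[0].numpy(), cmap='gray')  # visualize the first channel as a 2D image
plt.colorbar()
plt.show()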

@ptrblck the values that I posted are just the outputs of the convolutional layers.
I want to subtract the outputs of the convolutional layers for x and x2:

out = self.cnn1(x)
out = self.batchnorm1(out)
out = self.relu(out)
out = self.maxpool1(out)
out = self.cnn2(out)
out = self.batchnorm2(out)
out = self.relu(out)
out = self.maxpool2(out)
out = self.cnn3(out)
out = self.batchnorm3(out)
out = self.relu(out)
out = self.cnn4(out)
out = self.batchnorm4(out)
out = self.relu(out)
out = self.cnn5(out)
out = self.batchnorm5(out)
out = self.relu(out)

out1 = self.cnn1(x2)
out1 = self.batchnorm1(out1)
out1 = self.relu(out1)
out1 = self.maxpool1(out1)
out1 = self.cnn2(out1)
out1 = self.batchnorm2(out1)
out1 = self.relu(out1)
out1 = self.maxpool2(out1)
out1 = self.cnn3(out1)
out1 = self.batchnorm3(out1)
out1 = self.relu(out1)
out1 = self.cnn4(out1)
out1 = self.batchnorm4(out1)
out1 = self.relu(out1)
out1 = self.cnn5(out1)
out1 = self.batchnorm5(out1)
out1 = self.relu(out1)

out2 = out - out1
print(out2)
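Since both branches apply the same layers in the same order, the duplicated code could also be factored into a shared helper. A minimal sketch (assuming these lines live inside the forward method of your nn.Module):

def extract_features(self, x):
    # shared convolutional branch, applied to both inputs
    out = self.maxpool1(self.relu(self.batchnorm1(self.cnn1(x))))
    out = self.maxpool2(self.relu(self.batchnorm2(self.cnn2(out))))
    out = self.relu(self.batchnorm3(self.cnn3(out)))
    out = self.relu(self.batchnorm4(self.cnn4(out)))
    out = self.relu(self.batchnorm5(self.cnn5(out)))
    return out

def forward(self, x, x2):
    out = self.extract_features(x)
    out1 = self.extract_features(x2)
    return out - out1  # element-wise difference of the two feature maps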