Output of sigmoid is outside of [0, 1]

Hi, I have a neural network whose last layer is a Sigmoid, but the output of model(input) contains values outside the [0, 1] range. Sorry, I am new to PyTorch, so I am not familiar with all the pipelines. Could you please tell me whether I should apply the sigmoid separately? And if so, why is it shown in the network architecture?

That sounds rather like a bug. Could you post your model architecture so that we can take a look at it?
Also, which PyTorch, CUDA, and cudnn versions are you using, and how did you build/install it?

I use the Unet3D model from the pytorch-unet3d repo. I am training/predicting on a machine with multiple GPUs, but only use a single GPU device. For training I use the code provided by the repository, but with my own dataloaders. For prediction I use the following code:

import numpy as np
import torch

# Select a single GPU (or fall back to CPU) and load the trained model
device = torch.device("cuda:3" if torch.cuda.is_available() else "cpu")
config['device'] = device
model = get_model(config)
utils.load_checkpoint(PATH, model)

with torch.no_grad():
    model.to(device)
    model.eval()
    arr = arr.to(device)
    pred = model(arr)
    # Move the prediction back to the CPU and inspect its value range
    pred = pred.cpu().numpy()
    print(np.unique(pred), pred.shape)

Result of print(model):

....
....
)
  (final_conv): Conv3d(32, 1, kernel_size=(1, 1, 1), stride=(1, 1, 1))
  (final_activation): Sigmoid()
)

Versions
PyTorch 1.2; Cuda compilation tools, release 10.0, V10.0.130.
PyTorch was installed via pip install torch==1.2.0 torchvision==0.4.0.
I don't know how to check the cudnn version, could you please help me with that?

Could you update to the latest PyTorch release with CUDA 10.2 or CUDA 11.0 and retry the code?
PyTorch 1.2.0 is quite old by now, and this might be an issue that has already been fixed.

Is there any way to check this without updating to the latest version? The thing is, I am renting a GPU, so the environment was set up for me, and installing another CUDA version to check this could take a while. But if you think it's better to use the latest release because there could be serious bugs in 1.2.0, I will update it asap.

You don't need to install a local CUDA toolkit if you are not planning to build PyTorch from source or any custom CUDA extensions, since the conda binaries and pip wheels ship with their own CUDA runtime.
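
To answer the earlier question: the CUDA and cuDNN versions bundled with the binaries can be checked directly from Python, e.g.:

import torch

print(torch.__version__)               # PyTorch version
print(torch.version.cuda)              # CUDA runtime shipped with the wheel
print(torch.backends.cudnn.version())  # cuDNN version used by PyTorch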

Thanks, I updated PyTorch to 1.7, but the result of the prediction is still outside [0, 1]. print(np.unique(pred)) gives:

[-38.52738  -38.197113 -37.780617 ...  10.451873  10.506324  10.691366]

Check of the PyTorch version (it was installed via pip install torch==1.7.1+cu101 torchvision==0.8.2+cu101 torchaudio==0.7.2 -f https://download.pytorch.org/whl/torch_stable.html):

torch.__version__
Out[2]: '1.7.1+cu101'

Thanks for the update.
Have you set the self.testing argument to True? The last activation wouldn't be used otherwise, as seen here.
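
For reference, here is a minimal sketch of that pattern, not the actual repo code: TinyUNet3D is a made-up stand-in, and only the self.testing flag and the final_conv / final_activation names come from your model printout.

import torch
import torch.nn as nn

class TinyUNet3D(nn.Module):
    # Simplified stand-in for the repo's UNet3D; the real model also has encoder/decoder blocks.
    def __init__(self, testing=False):
        super().__init__()
        self.testing = testing
        self.final_conv = nn.Conv3d(32, 1, kernel_size=1)
        self.final_activation = nn.Sigmoid()

    def forward(self, x):
        x = self.final_conv(x)
        # The final activation is only applied in "testing" mode;
        # during training the raw logits are returned (e.g. for a loss that expects logits).
        if self.testing:
            x = self.final_activation(x)
        return x

model = TinyUNet3D()
model.testing = True          # enable the final Sigmoid for inference
model.eval()
with torch.no_grad():
    out = model(torch.randn(1, 32, 8, 8, 8))
print(out.min().item(), out.max().item())  # now within [0, 1]

Alternatively, you could leave testing at its default and apply torch.sigmoid to the raw logits yourself after the forward pass.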

Thanks for the help, with model.testing=True the output is correct. Sorry for the confusion, I should've checked it myself.