Can using torch.clamp change the output of my model?

Hi,
Can torch.clamp(), which I assume acts as some form of ReLU, change the output of my NN classification?
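
For what it's worth, my working assumption about clamp vs. ReLU is the following (a quick sketch with made-up values):

        import torch
        import torch.nn.functional as F

        x = torch.tensor([-2.0, 0.5, 3.0])
        # clamp with only a lower bound of 0 behaves like ReLU
        print(torch.clamp(x, min=0))   # tensor([0.0000, 0.5000, 3.0000])
        print(F.relu(x))               # tensor([0.0000, 0.5000, 3.0000])
        # clamp with both bounds additionally caps values from above
        print(torch.clamp(x, 0, 1))    # tensor([0.0000, 0.5000, 1.0000])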

I am running a celebrity face image through a pretrained VGG Face model. I set the model to eval mode and make a forward pass.

        image, label = image.to(device), label.to(device)  
        preds = F.softmax(vgg_model_instance(image.view(1, 3, 224, 224)), dim=1)
        values, indices = preds.max(-1)

I get a result: 87% accuracy.

Then I apply clamp to the input image.

        image, label = image.to(device), label.to(device) 
        image = torch.clamp(image, 0, 1)  # the only change: clamp the input
        preds = F.softmax(vgg_model_instance(image.view(1, 3, 224, 224)), dim=1)
        values, indices = preds.max(-1)

The accuracy crashes completely. Any idea what is going on?

With all other settings constant, I have tried the ranges (-1, 1) and (0, 1) for clamp. I also tried passing the model output through softmax, max, and log_softmax.

The reason I am trying this experiment is to test an FGSM attack. While I am not sure if clamping the tensor is necessary for the attack, all the examples I follow seem to do it. I would really appreciate it if someone could explain why this is done too.
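
This is roughly the pattern I keep seeing in FGSM examples (a minimal sketch; `model`, `loss_fn`, and `epsilon` are placeholders, and it assumes the image holds raw pixel values in [0, 1]):

        import torch

        def fgsm_attack(model, loss_fn, image, label, epsilon):
            # assumes `image` holds raw pixel values in [0, 1]
            image = image.clone().detach().requires_grad_(True)
            loss = loss_fn(model(image), label)
            loss.backward()
            # perturb in the direction of the gradient sign
            perturbed = image + epsilon * image.grad.sign()
            # clamp keeps the adversarial example a valid image,
            # i.e. still inside the original pixel range [0, 1]
            return torch.clamp(perturbed, 0, 1).detach()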

If you’ve normalized the input data, it’ll have roughly zero mean and unit variance, so many of its values lie outside [0, 1].
Clamping it to [0, 1] might therefore delete a lot of important features, which could explain the bad accuracy.
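
A quick way to see this (a sketch assuming the standard ImageNet normalization constants; adjust them to whatever your VGG Face preprocessing uses):

        import torch
        from torchvision import transforms

        normalize = transforms.Normalize(mean=[0.485, 0.456, 0.406],
                                         std=[0.229, 0.224, 0.225])

        pixels = torch.rand(3, 224, 224)           # raw image in [0, 1]
        normalized = normalize(pixels)             # roughly zero mean, unit variance
        print(normalized.min(), normalized.max())  # values well outside [0, 1]

        clipped = torch.clamp(normalized, 0, 1)
        print((clipped != normalized).float().mean())  # large fraction of values changed

        # For FGSM, clamp in pixel space *before* normalizing:
        # perturbed = torch.clamp(pixels + epsilon * grad.sign(), 0, 1)
        # model_input = normalize(perturbed)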