Why did I get negative values for the features?

I got bad values for the validation loss; the first epoch has:

ep001 : loss: 4.473 val_loss : 5.635

After that, val_loss kept increasing, which suggests overfitting. I think the problem is the way I extract the features from the images, as I got negative values like these:


1.06231242e-01, -4.80975091e-01, -2.29298934e-01, -2.22610545e+00,
       -1.70528257e+00,  1.00546718e+00, -3.37958717e+00,  8.62520158e-01,
        1.09218419e+00,  1.02676439e+00, -5.19405723e-01, -1.97264120e-01,
        6.29661739e-01,  8.84239256e-01, -2.18081331e+00,  1.23091125e+00,
       -1.20287977e-01,  7.21974909e-01,  2.16648507e+00,  4.63137746e-01,
        4.28042859e-01, -7.97591209e-01, -7.74370611e-01, -4.94653493e-01,
        1.33503842e+00,  5.22300005e-01, -2.49824524e+00, -4.03152406e-01,
       -1.78791010e+00, -1.03124011e+00,  8.80928874e-01, -1.29657283e-01,
       -2.61071831e-01, -3.11868429e-01,  5.31454265e-01,  4.07434464e-01,
       -3.46822053e-01,  7.70217240e-01,  1.25887111e-01, -1.67335427e+00, ....

The code is:

import torch
import torch.nn as nn
# create_model is assumed to be provided by the model library being used (e.g. timm)

class Identity(nn.Module):
    def __init__(self):
        super().__init__()
    def forward(self, x):
        return x

File = 'file.pth.tar'
model = create_model('My_model', pretrained=False)
model.load_state_dict(torch.load(File))
model.head = Identity()
model.eval()

I found this approach in this forum to freeze the last layer, as I don't need to do classification.

Your code wouldn’t freeze the last layer but would replace it with an Identity layer, so the output activations of the penultimate layer will be returned.
If you don’t use an activation function such as nn.ReLU, the activations could certainly be negative, so I’m unsure why negative values would be a concern.
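
For reference, a minimal sketch of this idea, using a small hypothetical stand-in model since your create_model definition isn't shown:

import torch
import torch.nn as nn

class Identity(nn.Module):
    def forward(self, x):
        return x

# Hypothetical stand-in for your model: a backbone followed by a classification head.
class TinyModel(nn.Module):
    def __init__(self, num_classes=10):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Flatten(),
            nn.Linear(3 * 32 * 32, 64),
            nn.ReLU(),
            nn.Linear(64, 128),  # no activation afterwards, so values may be negative
        )
        self.head = nn.Linear(128, num_classes)

    def forward(self, x):
        return self.head(self.backbone(x))

model = TinyModel()
model.head = Identity()  # replace the head; the penultimate activations are returned
model.eval()

with torch.no_grad():
    features = model(torch.randn(2, 3, 32, 32))

# Since no ReLU follows the last backbone layer, negative feature values are expected.
print(features.shape, features.min().item())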

Thanks a lot for this clarification. Another question, if you please: the model should have two methods, forward_feature and forward. I need to get the features from forward_feature using the Identity layer. I tried to implement it like this:

> class Identity(nn.Module):
>     def __init__(self):
>         super().__init__()
>     def forward_feature(self, x):
>         return x
>
> File = 'file.pth.tar'
> model = create_model(model_name, pretrained=False)
> model.load_state_dict(torch.load(File), strict=False)
> model.head = Identity()
> model.eval()

but I got:

in forward
    out = self.head(x)
  File "/home/user/.local/lib/python3.6/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/home/user/.local/lib/python3.6/site-packages/torch/nn/modules/module.py", line 175, in _forward_unimplemented
    raise NotImplementedError
NotImplementedError

I don’t know why it calls the forward method when I used the forward_feature method.

If you pass the input to an instance of nn.Module via:

output = model(input)

the __call__ method will be called, will register hooks (and potentially perform other bookkeeping), and will then call forward, which is why you need to define this method.
If you want to call a custom method, either call it in forward or call it directly via output = model.forward_feature(input). The latter will of course skip the hook registration etc., so don’t rely on these features.
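
To illustrate the difference with a toy module (standing in for your actual model):

import torch
import torch.nn as nn

class FeatureModule(nn.Module):
    def __init__(self):
        super().__init__()
        self.linear = nn.Linear(4, 4)

    def forward_feature(self, x):
        return self.linear(x)

    def forward(self, x):
        # model(x) dispatches to __call__, which calls forward, so delegate here
        return self.forward_feature(x)

model = FeatureModule()
x = torch.randn(1, 4)

out1 = model(x)                  # goes through __call__ (hooks etc.) and then forward
out2 = model.forward_feature(x)  # calls the custom method directly, skipping the hooks

print(torch.equal(out1, out2))  # True for this toy module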

Yes, I get it, thanks. But why do I get different numbers in the image tensors every time I run the code? I tried to print the tensor result and got different values every time. I used model.eval(), but it doesn't make sense.

I don’t know what might be causing it, so feel free to post a minimal, executable code snippet reproducing the issue.

I will, but just to make sure about the pretrained=False option: if I have a .pth.tar file, should I set pretrained=False, and if I don't, should I set it to True?

Your code is unfortunately not executable, as neither the file nor all methods are defined.

Do you mean you need the model file too?

I would need a script, which I could copy/paste and execute locally to see and debug the issue.
For this, the model definition would be needed as well as (random) inputs, which would show that the model outputs different values in eval() mode.

Thanks, but excuse me, how can the input be random when I load the pre-trained .pth.tar file? Or do I need to make another change in the code?

I don’t know what’s causing the issue as I don’t have any code for debugging.

Can I upload the file here, please?

Check if the file is even needed, or if a randomly initialized model with static inputs could also reproduce the issue. In the latter case, just post the executable code here, please.

Excuse me, how can I check?
I tried to remove the path of the file and found it runs with random numbers.
If the model initializes the weights randomly, will it be a problem when training or using the model?
Which is best: the model initialized randomly, or using the file?

If I understand the issue correctly, you are using a model, call model.eval(), feed static input into it, and are seeing different results.
This is unexpected and we would need to have an executable code snippet in order to debug it.
To do so, I don’t believe that loading a specific state_dict or the real dataset is necessary and would claim that the issue should also be visible when just initializing the model and feeding it with a static (random) input.
Write a script which creates a model instance, creates a single random input tensor, calls model.eval(), and runs multiple forward passes. If the outputs differ by more than the known limited floating point precision, post the code here.
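
A minimal sketch of such a repro script, using a toy model as a stand-in (swap in your own create_model call):

import torch
import torch.nn as nn

# Hypothetical stand-in model; replace with your own model creation.
model = nn.Sequential(
    nn.Conv2d(3, 8, 3, padding=1),
    nn.BatchNorm2d(8),
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(8, 16),
)
model.eval()  # disables dropout and uses running stats for batchnorm

x = torch.randn(1, 3, 32, 32)  # one static input, reused for every pass

with torch.no_grad():
    outputs = [model(x) for _ in range(5)]

# In eval() mode with the same input, all passes should match (up to float precision).
for out in outputs[1:]:
    print(torch.allclose(outputs[0], out))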