Output of pretrained VGG19 net gives a different value each time for the same input

I have some code that loads VGG19 and a preloaded image:

from torch.autograd import Variable
from torchvision import models

vgg = models.vgg19(pretrained=True)
inp = Variable(img_arr)  # img_arr is the preloaded image tensor
vgg_output = vgg(inp) ; vgg_output

Here are three outputs from running the same input, and each one is different:

I was just wondering if this is normal behavior?

VGG uses dropout, which is active while nn.Module.training is True; it stays True until you call nn.Module.eval(), like so:

In [1]: import torch

In [2]: from torch.autograd import Variable

In [3]: from torchvision import models

In [4]: vgg = models.vgg19(pretrained=True)

In [5]: img_arr = torch.randn(1, 3, 255, 255)

In [6]: inp = Variable(img_arr)

In [7]: vgg_output = vgg(inp); vgg_output
Out[7]: 
Variable containing:
 0.3886  1.7391  0.6889  ...  -0.7818 -0.6299  1.4617
[torch.FloatTensor of size 1x1000]

In [8]: vgg_output = vgg(inp); vgg_output
Out[8]: 
Variable containing:
 0.2152  0.8930 -0.2694  ...  -1.3448 -1.0411  2.2298
[torch.FloatTensor of size 1x1000]

In [9]: vgg = vgg.eval()

In [10]: vgg_output = vgg(inp); vgg_output
Out[10]: 
Variable containing:
-0.1476  1.2013  0.4434  ...  -1.2543 -0.6353  1.5407
[torch.FloatTensor of size 1x1000]

In [11]: vgg_output = vgg(inp); vgg_output
Out[11]: 
Variable containing:
-0.1476  1.2013  0.4434  ...  -1.2543 -0.6353  1.5407
[torch.FloatTensor of size 1x1000]
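
As a side note, here is a minimal sketch of the same check in more recent PyTorch style, where Variable is no longer needed and inference is usually wrapped in torch.no_grad() (the random tensor is just a stand-in for a real preprocessed image):

import torch
from torchvision import models

vgg = models.vgg19(pretrained=True)
vgg.eval()  # disable dropout for inference

img_arr = torch.randn(1, 3, 224, 224)  # placeholder input at the standard ImageNet size
with torch.no_grad():  # no autograd graph needed for inference
    out1 = vgg(img_arr)
    out2 = vgg(img_arr)

print(torch.equal(out1, out2))  # True: outputs are identical once eval() is set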

Ah, such an obvious oversight. Thanks for your reply!