Model prediction slightly different at each run

Hi,
I noticed that when I run the following piece of code, the model outputs are slightly different each time. What is going on?

import random
import os
import numpy as np
from PIL import Image
import torch
import torchvision
import torchvision.transforms as transforms
from torch.utils.data.sampler import SubsetRandomSampler
from torch.autograd import Variable
import utils
import model.resnet as teacher
from torchsummary import summary

normalize = transforms.Normalize(mean=[0.4554, 0.4384, 0.4112],
                                 std=[0.2373, 0.2321, 0.2346])

train_transformer = transforms.Compose([
    transforms.RandomResizedCrop(224),  # randomly crop and resize to 224x224
    transforms.RandomHorizontalFlip(),  # randomly flip image horizontally
    transforms.ToTensor(),
    normalize])

trainset = torchvision.datasets.ImageFolder('/cluster1/train/', transform=train_transformer)
trainloader = torch.utils.data.DataLoader(trainset, batch_size=8,
                                          shuffle=False, num_workers=4,
                                          pin_memory=True)


base_model = '/base_resnet152/best.pth.tar'

other_model = teacher.ResNet(teacher.Bottleneck, [3, 8, 36, 3], num_classes=1000).cuda()
# load parameters trained on the other cluster
utils.load_checkpoint(base_model, other_model)
other_model.eval()  # inference mode: fixed batch-norm statistics, dropout disabled


predicted = np.array([])
legit = np.array([])


teacher_outputs = []
for i, (data_batch, labels_batch) in enumerate(trainloader):
    # `async` became a reserved word in Python 3.7; non_blocking=True is the replacement
    data_batch = Variable(data_batch.cuda(non_blocking=True))
    labels_batch = Variable(labels_batch.cuda(non_blocking=True))

    # store the raw logits and the argmax class index for each sample
    output_teacher_batch = other_model(data_batch).data.cpu().numpy()
    teacher_outputs.append(output_teacher_batch)
    legit = np.append(legit, np.argmax(output_teacher_batch, axis=1))

np.savetxt('/base_resnet152/legit.txt', legit)

This is part of the legit file from the first run:
3.390000000000000000e+02
3.390000000000000000e+02
2.740000000000000000e+02
3.390000000000000000e+02
3.390000000000000000e+02
3.390000000000000000e+02
3.390000000000000000e+02
3.390000000000000000e+02
3.390000000000000000e+02
3.390000000000000000e+02
3.390000000000000000e+02
3.390000000000000000e+02
3.390000000000000000e+02
3.390000000000000000e+02
3.390000000000000000e+02
3.390000000000000000e+02
3.390000000000000000e+02
3.390000000000000000e+02
3.390000000000000000e+02
3.390000000000000000e+02
3.390000000000000000e+02
3.450000000000000000e+02
6.450000000000000000e+02
...

And this is from the second run:

3.390000000000000000e+02
3.390000000000000000e+02
**3.460000000000000000e+02**
3.390000000000000000e+02
3.390000000000000000e+02
3.390000000000000000e+02
3.390000000000000000e+02
3.390000000000000000e+02
**9.120000000000000000e+02**
3.390000000000000000e+02
3.390000000000000000e+02
3.390000000000000000e+02
3.390000000000000000e+02
3.390000000000000000e+02
**2.110000000000000000e+02**
3.390000000000000000e+02
3.390000000000000000e+02
3.390000000000000000e+02
3.390000000000000000e+02
3.390000000000000000e+02
3.390000000000000000e+02
3.450000000000000000e+02
6.450000000000000000e+02
...

As you can see, some values are different, like the ones I bolded. I thought this might happen because the calculations are done in floating point, but I'm not sure whether that's the case. I don't have any explicit source of randomness in the code, so what could be wrong?

Apparently the dataloader is not deterministic, and this post shows how to make it deterministic:
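
In case it helps others, here is a minimal sketch of the kind of seeding that makes the runs reproducible, assuming the variation comes from RandomResizedCrop and RandomHorizontalFlip drawing from unseeded RNGs in the DataLoader worker processes. The SEED value and the seed_worker helper are my own naming, not something from the linked post:

import random
import numpy as np
import torch

SEED = 0  # any fixed value works

# seed every RNG the random transforms might draw from
random.seed(SEED)
np.random.seed(SEED)
torch.manual_seed(SEED)
torch.cuda.manual_seed_all(SEED)

# force cuDNN to pick deterministic kernels (slower, but reproducible)
torch.backends.cudnn.deterministic = True
torch.backends.cudnn.benchmark = False

def seed_worker(worker_id):
    # give each DataLoader worker its own fixed seed
    worker_seed = SEED + worker_id
    random.seed(worker_seed)
    np.random.seed(worker_seed)
    torch.manual_seed(worker_seed)

trainloader = torch.utils.data.DataLoader(trainset, batch_size=8,
                                          shuffle=False, num_workers=4,
                                          pin_memory=True,
                                          worker_init_fn=seed_worker)

Alternatively, since this is pure inference, replacing train_transformer with a deterministic evaluation transform (transforms.Resize plus transforms.CenterCrop instead of the random crop and flip) removes the randomness from the pipeline entirely.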