Model keeps training non-stop

Hi, I'm trying to train on my data and made the batches small for time efficiency, but now my model won't stop training: the iteration counter keeps going past the number of batches in the dataloader. Can anyone take a look?

```python
def train111(dataloader, net):
    net = load_net(net, 'gpu')
    net = net.cuda()
    model_name = args.model_name

    features = None

    epoch = 1
    criterion = nn.CrossEntropyLoss()
    optimizer = optim.SGD(net.parameters(), lr=0.001, momentum=0.9)
    train_loss = list()
    for i in range(epoch):
        # set_trace()
        for i, data in enumerate(test):
            inps, labs = data
            inps, labs = inps.cuda(args['device']), labs.cuda(args['device'])

            inps = Variable(inps).cuda(args['device'])
            labs = Variable(labs).cuda(args['device'])
            optimizer.zero_grad()
            outs = net(inps.permute(0, 3, 1, 2).float())
            soft_outs = F.softmax(outs, dim=1)
            prds = soft_outs.data.max(1)[1]
            loss = criterion(outs, labs)
            loss.backward()
            optimizer.step()
            prds = prds.cpu().numpy()
            inps_np = inps.detach().cpu().numpy()
            labs_np = labs.detach().cpu().numpy()
            train_loss.append(loss.data.item())

            print('[epoch %d], [iter %d / %d], [train loss %.5f]' % (
                epoch, i + 1, len(dataloader), np.asarray(train_loss).mean()))
        return net

model_trained111 = train111(dataloader, net='mobilefacenet')
```

```
/content/drive/My Drive/recfaces13/recfaces/preprocessing/mtcnn_network/first_stage.py:32: UserWarning: volatile was removed and now has no effect. Use with torch.no_grad(): instead.
  img = Variable(torch.FloatTensor(_preprocess(img)), volatile=True)
/content/drive/My Drive/recfaces13/recfaces/preprocessing/mtcnn_network/get_nets.py:74: UserWarning: Implicit dimension choice for softmax has been deprecated. Change the call to include dim=X as an argument.
  a = F.softmax(a)
/content/drive/My Drive/recfaces13/recfaces/preprocessing/mtcnn_network/detector.py:79: UserWarning: volatile was removed and now has no effect. Use with torch.no_grad(): instead.
  img_boxes = Variable(torch.FloatTensor(img_boxes), volatile=True)
/content/drive/My Drive/recfaces13/recfaces/preprocessing/mtcnn_network/get_nets.py:120: UserWarning: Implicit dimension choice for softmax has been deprecated. Change the call to include dim=X as an argument.
  a = F.softmax(a)
/content/drive/My Drive/recfaces13/recfaces/preprocessing/mtcnn_network/detector.py:100: UserWarning: volatile was removed and now has no effect. Use with torch.no_grad(): instead.
  img_boxes = Variable(torch.FloatTensor(img_boxes), volatile=True)
/content/drive/My Drive/recfaces13/recfaces/preprocessing/mtcnn_network/get_nets.py:174: UserWarning: Implicit dimension choice for softmax has been deprecated. Change the call to include dim=X as an argument.
  a = F.softmax(a)
/content/drive/My Drive/recfaces13/recfaces/preprocessing/matlab_cp2tform.py:312: FutureWarning: rcond parameter will change to the default of machine precision times max(M, N) where M and N are the input matrix dimensions.
To use the future default and silence this warning we advise to pass rcond=None, to keep using the old, explicitly pass rcond=-1.
  r, _, _, _ = lstsq(X, U)
[epoch 1], [iter 1 / 46], [train loss 5.15882]
[epoch 1], [iter 2 / 46], [train loss 5.64264]
[epoch 1], [iter 3 / 46], [train loss 5.46005]
...
[epoch 1], [iter 45 / 46], [train loss 5.79483]
[epoch 1], [iter 46 / 46], [train loss 5.78104]
[epoch 1], [iter 47 / 46], [train loss 5.78869]
[epoch 1], [iter 48 / 46], [train loss 5.79372]
...
[epoch 1], [iter 257 / 46], [train loss 5.61872]
[epoch 1], [iter 258 / 46], [train loss 5.61727]
```

In your code snippet you are iterating over `test` to fetch the data instead of the `dataloader` you pass in:

```python
for i, data in enumerate(test):
```

`test` seems to be globally defined and apparently holds more samples than `dataloader`. The `iter x / 46` counter uses `len(dataloader)` as its denominator while the loop is actually walking through `test`, which is why the iteration count runs past 46.
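
A minimal sketch of the fixed loop, assuming `dataloader` is the loader you actually want to train on (`load_net`, `args`, and the NHWC input layout are taken from your snippet); it also drops the deprecated `Variable` wrapper and uses the epoch index in the log line:

```python
import numpy as np
import torch.nn as nn
import torch.optim as optim

def train111(dataloader, net):
    net = load_net(net, 'gpu').cuda()

    num_epochs = 1
    criterion = nn.CrossEntropyLoss()
    optimizer = optim.SGD(net.parameters(), lr=0.001, momentum=0.9)
    train_loss = []

    for epoch in range(num_epochs):
        # Iterate over the dataloader argument, not the global `test`.
        for i, (inps, labs) in enumerate(dataloader):
            inps = inps.cuda(args['device'])
            labs = labs.cuda(args['device'])

            optimizer.zero_grad()
            outs = net(inps.permute(0, 3, 1, 2).float())  # NHWC -> NCHW
            loss = criterion(outs, labs)
            loss.backward()
            optimizer.step()

            train_loss.append(loss.item())
            print('[epoch %d], [iter %d / %d], [train loss %.5f]' % (
                epoch + 1, i + 1, len(dataloader),
                np.asarray(train_loss).mean()))

    return net
```

With this, each epoch performs exactly `len(dataloader)` iterations (46 in your output) and the function returns once all epochs are done.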