Why does my training not continue?

I'm training a model, but the output doesn't look like it moves on to a new epoch the way it does with my other models. Or am I just being paranoid?

def train16(dataloader, net):
  if __name__ == "__main__":
      input = dataloadertrain2 
      net = net()

      net = net.cuda()   
      epoch = 2
      criterion = nn.CrossEntropyLoss()
      optimizer = optim.SGD(net.parameters(), lr=0.001, momentum=0.9)
      train_loss = list()
      #set_trace()
      for i in range(epoch):
        for i, data in enumerate(dataloader):
        
          inps, labs = data
          inps, labs = inps.cuda(args['device']), labs.cuda(args['device'])

          inps = Variable(inps).cuda(args['device'])
          labs = Variable(labs).cuda(args['device'])
          optimizer.zero_grad()
          outs = net(inps.permute(0, 3, 1, 2).float())
          soft_outs = F.softmax(outs, dim=1)
          prds = soft_outs.data.max(1)[1]
          loss = criterion(outs, labs)
          loss.backward()
          optimizer.step()
          prds = prds.cpu().numpy()
          inps_np = inps.detach().cpu().numpy()
          labs_np = labs.detach().cpu().numpy()
          train_loss.append(loss.data.item())

          print('[epoch %d], [iter %d / %d], [train loss %.5f]' % (epoch, i + 1, len(dataloader), np.asarray(train_loss).mean()))

model_trained11 = train16(dataloader, net=gface3)
/content/drive/My Drive/recfaces13/recfaces/preprocessing/mtcnn_network/first_stage.py:32: UserWarning: volatile was removed and now has no effect. Use `with torch.no_grad():` instead.
  img = Variable(torch.FloatTensor(_preprocess(img)), volatile=True)
/content/drive/My Drive/recfaces13/recfaces/preprocessing/mtcnn_network/first_stage.py:32: UserWarning: volatile was removed and now has no effect. Use `with torch.no_grad():` instead.
  img = Variable(torch.FloatTensor(_preprocess(img)), volatile=True)
/content/drive/My Drive/recfaces13/recfaces/preprocessing/mtcnn_network/get_nets.py:74: UserWarning: Implicit dimension choice for softmax has been deprecated. Change the call to include dim=X as an argument.
  a = F.softmax(a)
/content/drive/My Drive/recfaces13/recfaces/preprocessing/mtcnn_network/get_nets.py:74: UserWarning: Implicit dimension choice for softmax has been deprecated. Change the call to include dim=X as an argument.
  a = F.softmax(a)
/content/drive/My Drive/recfaces13/recfaces/preprocessing/mtcnn_network/detector.py:79: UserWarning: volatile was removed and now has no effect. Use `with torch.no_grad():` instead.
  img_boxes = Variable(torch.FloatTensor(img_boxes), volatile=True)
/content/drive/My Drive/recfaces13/recfaces/preprocessing/mtcnn_network/get_nets.py:120: UserWarning: Implicit dimension choice for softmax has been deprecated. Change the call to include dim=X as an argument.
  a = F.softmax(a)
/content/drive/My Drive/recfaces13/recfaces/preprocessing/mtcnn_network/detector.py:100: UserWarning: volatile was removed and now has no effect. Use `with torch.no_grad():` instead.
  img_boxes = Variable(torch.FloatTensor(img_boxes), volatile=True)
/content/drive/My Drive/recfaces13/recfaces/preprocessing/mtcnn_network/get_nets.py:174: UserWarning: Implicit dimension choice for softmax has been deprecated. Change the call to include dim=X as an argument.
  a = F.softmax(a)
/content/drive/My Drive/recfaces13/recfaces/preprocessing/mtcnn_network/detector.py:79: UserWarning: volatile was removed and now has no effect. Use `with torch.no_grad():` instead.
  img_boxes = Variable(torch.FloatTensor(img_boxes), volatile=True)
/content/drive/My Drive/recfaces13/recfaces/preprocessing/matlab_cp2tform.py:312: FutureWarning: `rcond` parameter will change to the default of machine precision times ``max(M, N)`` where M and N are the input matrix dimensions.
To use the future default and silence this warning we advise to pass `rcond=None`, to keep using the old, explicitly pass `rcond=-1`.
  r, _, _, _ = lstsq(X, U)
/content/drive/My Drive/recfaces13/recfaces/preprocessing/mtcnn_network/get_nets.py:120: UserWarning: Implicit dimension choice for softmax has been deprecated. Change the call to include dim=X as an argument.
  a = F.softmax(a)
/content/drive/My Drive/recfaces13/recfaces/preprocessing/mtcnn_network/detector.py:100: UserWarning: volatile was removed and now has no effect. Use `with torch.no_grad():` instead.
  img_boxes = Variable(torch.FloatTensor(img_boxes), volatile=True)
/content/drive/My Drive/recfaces13/recfaces/preprocessing/mtcnn_network/get_nets.py:174: UserWarning: Implicit dimension choice for softmax has been deprecated. Change the call to include dim=X as an argument.
  a = F.softmax(a)
/content/drive/My Drive/recfaces13/recfaces/preprocessing/matlab_cp2tform.py:312: FutureWarning: `rcond` parameter will change to the default of machine precision times ``max(M, N)`` where M and N are the input matrix dimensions.
To use the future default and silence this warning we advise to pass `rcond=None`, to keep using the old, explicitly pass `rcond=-1`.
  r, _, _, _ = lstsq(X, U)
[epoch 2], [iter 1 / 46], [train loss 5.71282]
[epoch 2], [iter 2 / 46], [train loss 5.83997]
[epoch 2], [iter 3 / 46], [train loss 6.46412]
[epoch 2], [iter 4 / 46], [train loss 6.46225]
[epoch 2], [iter 5 / 46], [train loss 6.62279]
[epoch 2], [iter 6 / 46], [train loss 6.44779]
[epoch 2], [iter 7 / 46], [train loss 6.69935]
[epoch 2], [iter 8 / 46], [train loss 6.68123]
[epoch 2], [iter 9 / 46], [train loss 6.54707]
[epoch 2], [iter 10 / 46], [train loss 6.57934]
[epoch 2], [iter 11 / 46], [train loss 6.71634]
[epoch 2], [iter 12 / 46], [train loss 6.78498]
[epoch 2], [iter 13 / 46], [train loss 6.64164]
[epoch 2], [iter 14 / 46], [train loss 6.64319]
[epoch 2], [iter 15 / 46], [train loss 6.61378]
[epoch 2], [iter 16 / 46], [train loss 6.51260]
[epoch 2], [iter 17 / 46], [train loss 6.49001]
[epoch 2], [iter 18 / 46], [train loss 6.43324]
[epoch 2], [iter 19 / 46], [train loss 6.38120]
[epoch 2], [iter 20 / 46], [train loss 6.30628]
[epoch 2], [iter 21 / 46], [train loss 6.24700]
[epoch 2], [iter 22 / 46], [train loss 6.21136]
[epoch 2], [iter 23 / 46], [train loss 6.29801]
[epoch 2], [iter 24 / 46], [train loss 6.24169]
[epoch 2], [iter 25 / 46], [train loss 6.20410]
[epoch 2], [iter 26 / 46], [train loss 6.26529]
[epoch 2], [iter 27 / 46], [train loss 6.21770]
[epoch 2], [iter 28 / 46], [train loss 6.24064]
[epoch 2], [iter 29 / 46], [train loss 6.28098]
[epoch 2], [iter 30 / 46], [train loss 6.24097]
[epoch 2], [iter 31 / 46], [train loss 6.21935]
[epoch 2], [iter 32 / 46], [train loss 6.23223]
[epoch 2], [iter 33 / 46], [train loss 6.19074]
[epoch 2], [iter 34 / 46], [train loss 6.15338]
[epoch 2], [iter 35 / 46], [train loss 6.12803]
[epoch 2], [iter 36 / 46], [train loss 6.09015]
[epoch 2], [iter 37 / 46], [train loss 6.11180]
[epoch 2], [iter 38 / 46], [train loss 6.07576]
[epoch 2], [iter 39 / 46], [train loss 6.04043]
[epoch 2], [iter 40 / 46], [train loss 6.03682]
[epoch 2], [iter 41 / 46], [train loss 6.04288]
[epoch 2], [iter 42 / 46], [train loss 6.01584]
[epoch 2], [iter 43 / 46], [train loss 5.99869]
[epoch 2], [iter 44 / 46], [train loss 5.96833]
[epoch 2], [iter 45 / 46], [train loss 5.99789]
[epoch 2], [iter 46 / 46], [train loss 6.03659]
(the same MTCNN preprocessing warnings as above are printed again here before the second pass over the dataloader)
[epoch 2], [iter 1 / 46], [train loss 6.01779]
[epoch 2], [iter 2 / 46], [train loss 5.98528]
[epoch 2], [iter 3 / 46], [train loss 5.97187]
[epoch 2], [iter 4 / 46], [train loss 5.96652]
[epoch 2], [iter 5 / 46], [train loss 5.96503]
[epoch 2], [iter 6 / 46], [train loss 5.93626]
[epoch 2], [iter 7 / 46], [train loss 5.97201]
[epoch 2], [iter 8 / 46], [train loss 5.94719]
[epoch 2], [iter 9 / 46], [train loss 5.93310]
[epoch 2], [iter 10 / 46], [train loss 5.94464]
[epoch 2], [iter 11 / 46], [train loss 5.95638]
[epoch 2], [iter 12 / 46], [train loss 5.96451]
[epoch 2], [iter 13 / 46], [train loss 5.93696]
[epoch 2], [iter 14 / 46], [train loss 5.93060]
[epoch 2], [iter 15 / 46], [train loss 5.90198]
[epoch 2], [iter 16 / 46], [train loss 5.88471]
[epoch 2], [iter 17 / 46], [train loss 5.86594]
[epoch 2], [iter 18 / 46], [train loss 5.84602]
[epoch 2], [iter 19 / 46], [train loss 5.83361]
[epoch 2], [iter 20 / 46], [train loss 5.80993]
[epoch 2], [iter 21 / 46], [train loss 5.78846]
[epoch 2], [iter 22 / 46], [train loss 5.76547]
[epoch 2], [iter 23 / 46], [train loss 5.79945]
[epoch 2], [iter 24 / 46], [train loss 5.77750]
[epoch 2], [iter 25 / 46], [train loss 5.75367]
[epoch 2], [iter 26 / 46], [train loss 5.75575]
[epoch 2], [iter 27 / 46], [train loss 5.75985]
[epoch 2], [iter 28 / 46], [train loss 5.75256]
[epoch 2], [iter 29 / 46], [train loss 5.76749]
[epoch 2], [iter 30 / 46], [train loss 5.74745]
[epoch 2], [iter 31 / 46], [train loss 5.72347]
[epoch 2], [iter 32 / 46], [train loss 5.70199]
[epoch 2], [iter 33 / 46], [train loss 5.67859]
[epoch 2], [iter 34 / 46], [train loss 5.65563]
[epoch 2], [iter 35 / 46], [train loss 5.63281]
[epoch 2], [iter 36 / 46], [train loss 5.61013]
[epoch 2], [iter 37 / 46], [train loss 5.60708]
[epoch 2], [iter 38 / 46], [train loss 5.58510]
[epoch 2], [iter 39 / 46], [train loss 5.56367]
[epoch 2], [iter 40 / 46], [train loss 5.54706]
[epoch 2], [iter 41 / 46], [train loss 5.53094]
[epoch 2], [iter 42 / 46], [train loss 5.51055]
[epoch 2], [iter 43 / 46], [train loss 5.50077]
[epoch 2], [iter 44 / 46], [train loss 5.48100]
[epoch 2], [iter 45 / 46], [train loss 5.48837]
[epoch 2], [iter 46 / 46], [train loss 5.48579]

It just feels like the model restarts and trains again from the beginning. Am I wrong, or am I right?

The output doesn't necessarily indicate that the training is restarting. Every line says `[epoch 2]` because your print statement uses the `epoch` variable, which holds the total number of epochs (2), rather than the current epoch index from the outer loop, and the iteration counter naturally resets to 1 at the start of each pass over the dataloader. Note also that the printed loss is a running average over `train_loss`, which is never cleared: it carries over smoothly from 6.03659 at the end of the first pass to 6.01779 at the start of the second, which is exactly what you'd expect if the model keeps training rather than restarting. That said, the loss is coming down only slowly (the running average is still around 5.5 after two epochs), so you might want to play around with some of your training hyperparameters, such as the learning rate.
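If it helps, here is a minimal sketch of how the loop could be restructured so the log shows the real epoch number and a per-epoch average loss. The names train_sketch and net_cls are mine, it assumes your dataloader yields (image, label) batches in NHWC layout on GPU device index 0 (adjust to your args['device']), and it drops the deprecated Variable wrapper, since plain tensors track gradients on their own:

import numpy as np
import torch.nn as nn
import torch.optim as optim

def train_sketch(dataloader, net_cls, num_epochs=2, device=0):
    # Build the model once and move it to the GPU.
    net = net_cls().cuda(device)
    criterion = nn.CrossEntropyLoss()
    optimizer = optim.SGD(net.parameters(), lr=0.001, momentum=0.9)

    for epoch_idx in range(num_epochs):       # outer loop variable is not reused below
        epoch_losses = []                     # reset the running average every epoch
        for it, (inps, labs) in enumerate(dataloader):
            inps, labs = inps.cuda(device), labs.cuda(device)

            optimizer.zero_grad()
            outs = net(inps.permute(0, 3, 1, 2).float())  # NHWC -> NCHW, as in your code
            loss = criterion(outs, labs)
            loss.backward()
            optimizer.step()

            epoch_losses.append(loss.item())
            print('[epoch %d / %d], [iter %d / %d], [train loss %.5f]'
                  % (epoch_idx + 1, num_epochs, it + 1, len(dataloader),
                     np.asarray(epoch_losses).mean()))
    return net

Called as model_trained = train_sketch(dataloader, gface3), the log would read [epoch 1 / 2] and then [epoch 2 / 2], making it obvious that the second block of 46 iterations is a new epoch and not a restart. Note also that train16 has no return statement, so model_trained11 ends up as None; returning the network lets you actually keep the trained weights.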