Overfitting problem

Hi! I wrote a network and I want to sanity-check it. I train on one batch to see whether it can overfit or not… I plot the loss during training… is this right?

import time

import numpy as np
import torch

model.to(device)
optimizer = torch.optim.SGD(model.parameters(), lr=1e-5)

losses = []          # store plain floats, not tensors with attached graphs
start = time.time()  # must be initialized before the first timing call below

for epoch in range(50):
    print("***********", epoch)
    for iteration, (x, r) in enumerate(Train_Loader):
        optimizer.zero_grad()
        y = Block(x, PHI)
        output, out = model(r.to(device), y.to(device), PHI)
        loss = my_loss(output.to(device), y.to(device), 1, out.to(device))
        losses.append(loss.item())  # .item() detaches the scalar from the graph
        loss.backward()
        optimizer.step()
        period = time.time() - start
        print('period', period)
        start = time.time()
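Note that the loop above iterates over the entire Train_Loader each epoch, so it actually trains on the whole dataset, not a single batch. For the one-batch sanity check you describe, something like the sketch below grabs one fixed batch and trains on it repeatedly (it reuses your model, Block, my_loss, PHI, and device; the step count and learning rate are just illustrative, and 1e-5 may be too small to see the loss move):

import torch

# Single-batch overfit check: train on ONE fixed batch over and over.
x, r = next(iter(Train_Loader))   # one fixed batch
y = Block(x, PHI)
r, y = r.to(device), y.to(device)

optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)  # illustrative lr
losses = []

for step in range(500):
    optimizer.zero_grad()
    output, out = model(r, y, PHI)
    loss = my_loss(output, y, 1, out)
    losses.append(loss.item())
    loss.backward()
    optimizer.step()

# If the model has enough capacity and the training code is correct,
# `losses` should fall toward ~0 on this one batch; if it plateaus high,
# something in the model, loss, or optimizer setup is off.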

[image: plot of training loss]
The final loss is around 100000… I think something is wrong…

Correct me if I am wrong here. I think overfitting means that your model produces a small loss on the training data, but once you move to the test data, the loss is much higher. That means your model has learned a lot of details specific to the training data, so it is "over"-fit to them and does poorly on general cases, such as the test data.
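To make that check concrete, here is a minimal sketch that compares the mean loss on training data versus held-out test data. The names test_loader and criterion, and the simple model(x) call, are illustrative assumptions (your model takes more arguments), not something from the original post:

import torch

def average_loss(model, loader, criterion, device):
    # Mean per-sample loss over a whole loader, without updating the model.
    model.eval()                # evaluation mode (affects dropout/batch norm)
    total, n = 0.0, 0
    with torch.no_grad():       # gradients are not needed for evaluation
        for x, target in loader:
            pred = model(x.to(device))
            total += criterion(pred, target.to(device)).item() * x.size(0)
            n += x.size(0)
    return total / n

criterion = torch.nn.MSELoss()  # illustrative; use the loss you trained with
train_loss = average_loss(model, Train_Loader, criterion, device)
test_loss = average_loss(model, test_loader, criterion, device)
print(f"train: {train_loss:.4f}  test: {test_loss:.4f}")
# Overfitting shows up as train_loss being much lower than test_loss.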