ZeroDivisionError: division by zero

Hi everyone, I have an error with this code:

dataset = pd.read_csv('out.csv', delimiter=',', skiprows=1, squeeze=True)
x = dataset.iloc[:, 0:4]
y = dataset.iloc[:, [4]]

x_train, x_test, y_train, y_test = sklearn.model_selection.train_test_split(x, y,
                                                                             test_size=0.2,
                                                                             random_state=42)
train_set = x_train, y_train
test_set = x_test, y_test

print(train_set)
print(test_set)

train_loader = DataLoader(dataset=train_set, batch_size=batch_size, drop_last=True, shuffle=False)

test_loader = DataLoader(dataset=test_set, batch_size=batch_size, drop_last=True, shuffle=False)

t = Transformer(dim_val, dim_attn, input_size, dec_seq_len, out_seq_len, n_decoder_layers, n_encoder_layers, n_heads)
model = t
model.to(device)
optimizer = torch.optim.Adam(model.parameters(), lr=lr)

losses=[]

for b, (inputs, labels) in enumerate(train_loader):

    inputs = inputs.to(device=device)
    inputs = torch.tensor(inputs)
    scores = model(inputs)
    criterion = nn.MSELoss()

    loss = criterion(scores, labels)

    losses.append(loss.item())

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

The error is raised in the line below:
print(f"Cost at epoch {epochs} is {sum(losses)/len(losses)}")

def check_accuracy(loader, model):
    num_correct = 0
    num_samples = 0
    model.eval()

    with torch.no_grad():
        for z, y in loader:
            z = z.to(device=device)
            y = y.to(device=device)
            scores = model(z)
            _, predictions = scores.max(1)
            num_correct += (predictions == y).sum()
            num_samples += predictions.size(0)

        print(
            f"Got {num_correct} / {num_samples} with accuracy {float(num_correct)/float(num_samples)*100:.2f}"
        )

    model.train()

print("Checking accuracy on Training Set")
check_accuracy(train_loader, model)

print("Checking accuracy on Test Set")
check_accuracy(test_loader, model)

Hi Imad!

This suggests that losses is of zero length. Try printing out len(losses)
just before the division to check.

If this is the case, it would suggest that your train_loader is "empty" for
some reason.

Try adding a print statement, e.g., print(b), at the top of your
train_loader loop, just after the for b ... line. Is the body of the loop
ever executed?

If train_loader is empty, you will have to work backward to figure out why.
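
For example, a minimal diagnostic version of the loop (just a sketch, reusing the names from your snippet) could look like this:

print(len(train_loader))    # how many batches does the loader report?

losses = []
for b, (inputs, labels) in enumerate(train_loader):
    print(b)                # is the loop body ever reached?
    ...                     # rest of the training step

print(len(losses))          # check this just before the division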

Best.

K. Frank

Thanks for replying. Yes, I checked; losses is empty. I have two for loops: the first one works, but I forgot to include it in the code above. The second one is not working; it jumps directly to the print statement and gives a zero result.
for epoch in range(epochs):
    losses = []
    for inputs, labels in train_loader:
        inputs = inputs.to(device=device)
        labels = labels.to(device=device)
        inputs = inputs.clone().detach()
        labels = labels.clone().detach()
        scores = model(inputs)
        loss = criterion(scores, labels)

        losses.append(loss.item())

        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

    print(f"Cost at epoch {epoch} is {sum(losses)/len(losses)}")

The superclass of ZeroDivisionError is ArithmeticError. This exception is raised when the second argument of a division or modulo operation is zero. In mathematics, dividing by zero has no defined result, so there is no value the interpreter could return; Python raises "ZeroDivisionError: division by zero" instead. Whenever your program logic contains a division or modulo operation, handle ArithmeticError or ZeroDivisionError (or check the divisor first) so the program does not terminate.

try:
    z = x / y
except ZeroDivisionError:
    z = 0

Or check before you do the division:

if y == 0:
    z = 0
else:
    z = x / y

The latter can be reduced to:

z = 0 if y == 0 else (x / y)
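
Applied to the training loop earlier in this thread, the same guard keeps the epoch print from raising while you debug why losses stays empty (a sketch reusing the names from the snippet above):

avg_cost = sum(losses) / len(losses) if losses else 0.0
print(f"Cost at epoch {epoch} is {avg_cost}")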

Hello, I've tried this and it won't work for me. I'm getting the zero-division error at pred_acc. I've attached the code below.

correct_pred = 0
almost_correct_pred = 0
total_predict = 0

pred = np.array([1, 3])

# set test folder path
test_folder = dataset + '/Test'

# loop over each .bmp file in the test folder and pass it through the network
for test_im in glob(test_folder + '/*.bmp'):

    # get actual temp from the filename
    temp = os.path.basename(test_im).split('_')[2]

    # pass the image through the model to get predicted temp and confidence
    prediction, conf = predict(trained_model, test_im, 3)

    # change the indices back to class labels
    prediction = [idx_to_class[i] for i in prediction]

    # create array of actual temp, predicted temp and confidence
    for i in range(len(prediction)):

        row = np.transpose(np.array([temp, prediction[i], conf[i]]))

        pred = np.vstack((pred, row))

    # print actual temp, predicted temp and confidence
    print('True Label: {true_label}, Predicted Label: {predicted_label} with confidence of {confidence}\n'.format(true_label=temp, predicted_label=prediction[0], confidence=conf[0]))

    # calculate ratio of correct and almost correct (within 50C) predictions
    total_predict += 1

    if temp == prediction[0]:

        correct_pred += 1

    try:

        if int(temp) == int(prediction[0]) + 50 or int(temp) == int(prediction[0]) - 50:

            almost_correct_pred += 1

    except ValueError:

        temp = temp[:-1]

        if int(temp) == int(prediction[0][:-1]) + 50 or int(temp) == int(prediction[0][:-1]) - 50:

            almost_correct_pred += 1

pred_acc = correct_pred / total_predict
almost_correct_acc = (almost_correct_pred + correct_pred) / total_predict

print('{correct}/{total} images predicted correctly ({percentage}%)'.format(correct=correct_pred, total=total_predict, percentage=str(pred_acc * 100)[:5]))
print('{correct}/{total} images almost predicted correctly ({percentage}%)'.format(correct=almost_correct_pred + correct_pred, total=total_predict, percentage=str(almost_correct_acc * 100)[:5]))

pred = np.delete(pred, 0, axis=0)
pred = pred.astype('float')


Can someone help me with this? I am really confused haha.

Could you check which line of code is raising the error? I would guess total_predict might be set to 0, so could you check if that's indeed the case?

This code works perfectly fine with a pretrained AlexNet model. The issue arises when I use my own small CNN model: training works fine and I'm getting satisfactory accuracy and losses, but for some reason it fails to compute the predictions at pred_acc, where the total count is initialised with correction = 0 and then full_correction += 1 inside an if block. I'm not sure what's going on; it's as if my statements in the loop are never executed when I'm computing the predictions.

I'm unsure where the correction variable is used, but check which variable is set to zero and raises the error. Once you've narrowed it down, check why it isn't being updated as you expect.
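
For reference, a minimal guarded version of the final ratios (just a sketch reusing the counter names from the snippet above) might look like:

if total_predict == 0:
    # the glob() loop never ran, so there is nothing to divide by
    print('No test images were processed - check the test folder path and file pattern.')
else:
    pred_acc = correct_pred / total_predict
    almost_correct_acc = (almost_correct_pred + correct_pred) / total_predict
    print('{}/{} images predicted correctly ({:.2f}%)'.format(correct_pred, total_predict, pred_acc * 100))
    print('{}/{} images almost predicted correctly ({:.2f}%)'.format(almost_correct_pred + correct_pred, total_predict, almost_correct_acc * 100))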