Why is my RNN erroring out when allocating the loss-storage arrays?

Hello,

I’m getting the below error and I’m not quite sure why.

Traceback (most recent call last):
  File "korea_lstm.py", line 134, in <module>
    y_test)
  File "korea_lstm.py", line 105, in full_gd
    train_losses = np.zeros(epochs)
ValueError: maximum supported dimension for an ndarray is 32, found 373482

I thought it might be a GPU thing, but I get the same error when I run it with the GPU disabled.

Below is my training function:

def full_gd(model,
            X_train,
            y_train,
            X_test, 
            y_test,
            epochs=200):
    
    train_losses = np.zeros(epochs)
    test_losses = np.zeros(epochs)
    
    for it in range(epochs):
        optimizer.zero_grad()
        
        outputs = model(X_train)
        loss = mape_loss(outputs, y_train)
        
        loss.backward()
        optimizer.step()
        
        train_losses[it] = loss.item()
        
        test_outputs = model(X_test)
        test_loss = mape_loss(test_outputs, y_test)
        test_losses[it] = test_loss.item()
        
        if (it+1) % 5 == 0:
            print(f"Epoch {it+1}/{epochs}, Train Loss: {loss.item():.4f}, Test Loss: {test_loss.item():.4f}")

    return train_losses, test_losses

I’m not sure why it’s getting held up at the np.zeros call. Before trying an RNN, I ran this as an autoregressive model and had no issues. Is there anything that stands out as glaringly wrong?
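For what it’s worth, np.zeros(epochs) only raises that particular ValueError when epochs is a long sequence rather than a plain int: NumPy then interprets each element as a separate dimension and hits its dimension limit. I can reproduce the same error like this (the bad_epochs array here is just a stand-in for whatever might be landing in the epochs slot of the call on line 134):

```python
import numpy as np

def full_gd_stub(epochs=200):
    # Same allocation pattern as in full_gd
    return np.zeros(epochs)

# Works as intended when epochs is an int
losses = full_gd_stub(200)
print(losses.shape)  # (200,)

# Fails the same way if a whole array slips into the epochs parameter,
# e.g. via a misaligned positional argument
bad_epochs = np.arange(100)
try:
    full_gd_stub(bad_epochs)
except ValueError as e:
    print(e)  # complains about the maximum supported number of dimensions
```

So it may be worth printing type(epochs) (and its length, if it has one) right before the np.zeros line to see what is actually being passed in.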