Hitting Recursion Limit while doing a linear regression problem

Before I start, I should mention that I’ve just started learning PyTorch, so while I understand that I don’t need PyTorch for regression, I thought it would be a good idea to get some practice.

So, I have been using this dataset. What I’m trying to do is to use the columns excluding the serial number and ‘Chance of Admit’ to determine the likelihood of getting into grad school.

I created a model with a single linear layer and a functional sigmoid output. I split the data into 400 training and 100 test samples. When I start training the model, I keep hitting a RecursionError and I can't figure out why.


# In[1]:


import pandas as pd
import torch
from torch import optim
from torchvision import datasets, models, transforms
import numpy as np
import scipy
from matplotlib import pyplot as plt
import torch.nn.functional as F

from torch import nn


# In[3]:


data = pd.read_csv('Admission_Predict_Ver1_1.csv')
data.drop(columns='Serial No.',inplace = True)


# In[30]:

# Converting to a numpy array
data = np.array(data)


# In[31]:


class Net(nn.Module):
  def __init__(self, inputs, outputs):
    super().__init__()
    self.forward = nn.Linear(inputs,outputs)
  def forward(self,x):
    x = F.sigmoid(self.forward(x))
    return x


# In[8]:

#Shuffling data manually instead of using Data Loader
index = np.arange(500)
np.random.shuffle(index)


# In[14]:

#Constructing training and testing datasets from shuffled indices
train_data = np.array([data[i,:-1] for i in index[:400]])
train_decision = np.array([data[i,-1] for i in index[:400]])

test_data = np.array([data[i,:-1] for i in index[-100:]])
test_decision = np.array([data[i,-1] for i in index[-100:]])


# In[15]:

#Creating Model object and loss functions
dime = np.shape(train_data)
model = Net(dime[1],1)

optimiser = torch.optim.SGD(model.parameters(), lr = 0.001)
criterion = nn.CrossEntropyLoss()


# In[16]:

#Converting to torch tensors
train_data = torch.from_numpy(train_data)
train_decision = torch.from_numpy(train_decision)


# In[32]:

#Training
epochs = 50
for d in range(epochs):
  print(d)
  optimiser.zero_grad()
  outputs = model.forward(train_data)
  loss = criterion(outputs, train_decision)
  loss.backward()# back props
  optimiser.step()# update the parameters

Any help is appreciated.

You cannot name the layer self.forward — that attribute collides with the forward method you define, so when forward calls self.forward(x) it ends up calling itself over and over until Python hits the recursion limit.
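For illustration, a minimal sketch of the module with the layer renamed (to self.linear here, but any name other than forward works), so the method no longer calls itself:

```python
import torch
from torch import nn


class Net(nn.Module):
    def __init__(self, inputs, outputs):
        super().__init__()
        # Renamed from `forward` so it no longer shadows the method below.
        self.linear = nn.Linear(inputs, outputs)

    def forward(self, x):
        # Calls the renamed layer attribute, not this method itself.
        return torch.sigmoid(self.linear(x))


model = Net(7, 1)               # 7 input features, 1 output, as in the question
out = model(torch.randn(5, 7))  # runs without a RecursionError
print(out.shape)                # torch.Size([5, 1])
```

(torch.sigmoid is used here instead of F.sigmoid, which is deprecated; the behaviour is the same.)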

Best regards

Thomas

Hi Tom, I changed that bit, but now it says the function expects a Float rather than a Double.

Your model's parameters are float32, but your data is float64 (torch.from_numpy keeps NumPy's default double precision). Convert the tensors with .float() and it should work.
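A quick sketch of the mismatch and the fix (the array shape is arbitrary; any float64 NumPy array triggers it):

```python
import numpy as np
import torch
from torch import nn

layer = nn.Linear(7, 1)                       # parameters are float32 by default
x64 = torch.from_numpy(np.random.rand(4, 7))  # NumPy float64 -> torch.float64
print(x64.dtype)                              # torch.float64

# layer(x64) would raise a RuntimeError about Float vs Double here.
x32 = x64.float()                             # cast to float32
out = layer(x32)                              # now dtypes match
print(out.dtype)                              # torch.float32
```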