Error: forward() missing 1 required positional argument: 'x'

Hello, I am quite new to PyTorch. This is my first try and I am getting an error that says forward() missing 1 required positional argument: 'x', and I don't know why. If somebody could help me, that would be appreciated. Here is the code:

First I load the data in, and afterwards I create a directory and save the data as .pt tensor files:

import os
import torch

try:
    os.mkdir('data')
except FileExistsError:
    print('Directory exists')

# Save the files with leading zeros (e.g., ID00001.pt) and save in each file all three frequencies
# Also, create a text file that lists all the files available (all data). This is used to sort the dataset in the different sets
decimal_places = 6
if os.path.exists('ID_list.txt'):
    os.remove('ID_list.txt')  # start with a fresh list (originally a bare os.remove() call stood here)
ID_list = open('ID_list.txt', 'a')
for sample in range(T20MHz.shape[1]):
    name = 'ID' + str(sample+1).zfill(decimal_places) +'.pt'
    ID_list.write(name + '\n')
    torch.save([T20MHz[:,sample], T100MHz[:,sample], T250MHz[:,sample]], 'data/' + name)
ID_list.close()
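To sanity-check this step end to end, here is a self-contained sketch of the same save loop with made-up stand-in data (the real T20MHz/T100MHz/T250MHz arrays are assumed to be indexed [sample_point, trace]; the shapes here are invented), using os.makedirs(exist_ok=True) instead of the try/except and 'w' mode so each run starts with a fresh list:

```python
import os
import torch

# Hypothetical stand-ins for the real data: 128 points per trace, 4 traces
T20MHz = torch.randn(128, 4)
T100MHz = torch.randn(128, 4)
T250MHz = torch.randn(128, 4)

os.makedirs('data', exist_ok=True)  # no-op if the directory already exists

decimal_places = 6
with open('ID_list.txt', 'w') as ID_list:  # 'w' overwrites any stale list
    for sample in range(T20MHz.shape[1]):
        name = 'ID' + str(sample + 1).zfill(decimal_places) + '.pt'
        ID_list.write(name + '\n')
        torch.save([T20MHz[:, sample], T100MHz[:, sample], T250MHz[:, sample]],
                   os.path.join('data', name))

# Round-trip check: the first saved file should load back as 3 tensors
loaded = torch.load(os.path.join('data', 'ID000001.pt'))
```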

Then I load the data for training and validation:

import torch
from torch.utils import data

class Dataset(data.Dataset):
    'Characterizes a dataset for PyTorch'

    def __init__(self, list_IDs):
        'Initialization'
        self.list_IDs = list_IDs

    def __len__(self):
        'Denotes the total number of samples'
        return len(self.list_IDs)

    def __getitem__(self, index):
        'Generates one sample of data'
        # Select sample
        ID = self.list_IDs[index]

        # Load data and get label
        sample = torch.load('data/' + ID)
        T20MHz = torch.DoubleTensor(sample[0]).detach()
        T100MHz = torch.DoubleTensor(sample[1]).detach()
        T250MHz = torch.DoubleTensor(sample[2]).detach()
        return T20MHz, T100MHz, T250MHz
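A quick way to check a dataset class like this is to write one dummy file and index it. A minimal self-contained sketch (the file contents are made up; .double() is used here as the equivalent of the torch.DoubleTensor(...) construction above):

```python
import os
import torch
from torch.utils import data

class Dataset(data.Dataset):
    'Condensed version of the dataset class above'
    def __init__(self, list_IDs):
        self.list_IDs = list_IDs

    def __len__(self):
        return len(self.list_IDs)

    def __getitem__(self, index):
        ID = self.list_IDs[index]
        sample = torch.load('data/' + ID)
        # .double() casts to float64, like torch.DoubleTensor(...) above
        return tuple(s.double().detach() for s in sample)

# One hypothetical file so the example is self-contained
os.makedirs('data', exist_ok=True)
torch.save([torch.zeros(8), torch.ones(8), torch.full((8,), 2.0)],
           'data/ID000001.pt')

ds = Dataset(['ID000001.pt'])
t20, t100, t250 = ds[0]  # each item is a float64 tensor of length 8
```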
      

Then I define my CNN:

import torch.nn as nn
import torch.nn.functional as F

class model(nn.Module):
    def __init__(self):
        super(model, self).__init__()
        # 1 input trace channel, 3 output channels, kernel size 3
        self.conv1 = nn.Conv1d(1, 3, kernel_size=3)
        self.conv2 = nn.Conv1d(3, 6, kernel_size=6)
        self.conv3 = nn.Conv1d(6, 12, kernel_size=12)
        self.conv4 = nn.Conv1d(12, 24, kernel_size=24)
        self.conv5 = nn.Conv1d(24, 48, kernel_size=48)
        self.conv_drop = nn.Dropout2d()
        self.conv6_transpose = nn.ConvTranspose1d(48, 24, kernel_size=48)
        self.conv7_transpose = nn.ConvTranspose1d(24, 12, kernel_size=24)
        self.conv8_transpose = nn.ConvTranspose1d(12, 6, kernel_size=12)
        self.conv9_transpose = nn.ConvTranspose1d(6, 3, kernel_size=6)
        self.conv10_transpose = nn.ConvTranspose1d(3, 1, kernel_size=3)

        #self.fc1 = nn.Linear(1536, 72) # fully-connected classifier layer
        #self.fc2 = nn.Linear(72, 19)   # fully-connected classifier layer
        self.bn1 = nn.BatchNorm1d(3)
        self.bn2 = nn.BatchNorm1d(6)
        self.bn3 = nn.BatchNorm1d(12)
        self.bn4 = nn.BatchNorm1d(24)
        self.bn5 = nn.BatchNorm1d(48)
        self.bn6 = nn.BatchNorm1d(24)
        self.bn7 = nn.BatchNorm1d(12)
        self.bn8 = nn.BatchNorm1d(6)
        self.bn9 = nn.BatchNorm1d(3)
        self.bn10 = nn.BatchNorm1d(1)

    def forward(self, x):
        # x = x.unsqueeze(1) # would add a channel dimension for a single unbatched trace
        # downsample
        out = self.conv1(x)
        out = F.relu(out)
        out = self.bn1(out)

        out = self.conv2(out)
        out = F.relu(out)
        out = self.bn2(out)

        out = self.conv3(out)
        out = F.relu(out)
        out = self.bn3(out)

        out = self.conv4(out)
        out = F.relu(out)
        out = self.bn4(out)

        # bottleneck
        out = self.conv5(out)
        out = F.relu(out)
        out = self.bn5(out)
        out = self.conv_drop(out)

        # upsample (no unsqueeze between layers: BatchNorm1d expects (N, C, L))
        out = self.conv6_transpose(out)
        out = self.conv_drop(out)  # was self.conv6_drop, which is never defined
        out = F.relu(out)
        out = self.bn6(out)

        out = self.conv7_transpose(out)
        out = self.conv_drop(out)
        out = F.relu(out)
        out = self.bn7(out)

        out = self.conv8_transpose(out)
        out = self.conv_drop(out)
        out = F.relu(out)
        out = self.bn8(out)

        out = self.conv9_transpose(out)
        out = self.conv_drop(out)
        out = F.relu(out)
        out = self.bn9(out)

        out = self.conv10_transpose(out)
        out = self.conv_drop(out)
        out = F.relu(out)
        out = self.bn10(out)

        return out


model = model()

print(model)
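The down/upsampling kernel sizes mirror each other so that each ConvTranspose1d restores the length its Conv1d counterpart removed. A minimal sketch with one such pair (shapes are illustrative, not taken from the real data):

```python
import torch
import torch.nn as nn

conv = nn.Conv1d(1, 3, kernel_size=3)             # length L -> L - 2
deconv = nn.ConvTranspose1d(3, 1, kernel_size=3)  # length L - 2 -> L

x = torch.randn(2, 1, 100)  # (batch, channels, length)
mid = conv(x)               # (2, 3, 98)
y = deconv(mid)             # (2, 1, 100): original length restored
```

Note that all 1d layers here, including BatchNorm1d, work on (batch, channels, length) tensors, so no extra dimension needs to be added between layers.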

I set my parameters and optimizer:

import torch
from torch.utils import data
import numpy as np
import torch.optim as optim


# Check if CUDA is available
use_cuda = torch.cuda.is_available()
device = torch.device("cuda:0" if use_cuda else "cpu")
# cudnn.benchmark = True


# Set training parameters
params = {'batch_size': 64,
          'shuffle': True,
          'num_workers': 6}
max_epochs = 100

# Load all the data from the txt file
file_IDs = open('ID_list.txt','r').read().split('\n')
file_IDs = file_IDs[:-1] # drop the empty entry after the final newline
complete_dataset = Dataset(file_IDs)


#%% Here we define a loss function and the optimizer
# create your loss function
def rmse(y, y_hat):
    """Compute root mean squared error"""
    return torch.sqrt(torch.mean((y - y_hat).pow(2)))

# create your optimizer
optimizer = optim.SGD(model.parameters(), lr=0.0003, momentum = 0.1)
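For reference, a tiny worked example of the rmse loss defined above, with hand-picked numbers so the expected value is easy to verify:

```python
import torch

def rmse(y, y_hat):
    """Root mean squared error, as defined above"""
    return torch.sqrt(torch.mean((y - y_hat).pow(2)))

y = torch.tensor([1.0, 2.0, 3.0])
y_hat = torch.tensor([1.0, 2.0, 5.0])
err = rmse(y, y_hat)  # sqrt((0 + 0 + 4) / 3) ~= 1.1547
```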

And try to train it / test whether it works in the first place:

# Divide the dataset into the training and validation set
lengths = [int(np.ceil(len(complete_dataset)*0.8)), int(np.floor(len(complete_dataset)*0.1)), int(np.floor(len(complete_dataset)*0.1))]
training_set, validation_set, evaluation_set = torch.utils.data.random_split(complete_dataset, lengths)
training_generator = data.DataLoader(training_set, **params)
validation_generator = data.DataLoader(validation_set, **params)
evaluation_generator = data.DataLoader(evaluation_set, **params)


# instantiate the model and cast its parameters to double
forward_model = model().double()
t20, t100 ,t250 = next(iter(training_generator))
one_prediction = forward_model(t20)
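As a side note on the split above: the ceil/floor length computation does not always sum to the dataset size, and random_split raises an error when the lengths do not add up. A sketch of the failure mode with a hypothetical dataset size, and a safer computation:

```python
import numpy as np

n = 17  # hypothetical dataset size
lengths = [int(np.ceil(n * 0.8)), int(np.floor(n * 0.1)), int(np.floor(n * 0.1))]
# here sum(lengths) == 16, not 17, so random_split would complain

# safer: derive the training length from the remainder
n_val = int(n * 0.1)
n_test = int(n * 0.1)
n_train = n - n_val - n_test
safe_lengths = [n_train, n_val, n_test]  # always sums to n
```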

The error appears in the last step, when calling forward_model(t20).

I don’t know what to fix, so if anyone can help it would be lovely.
Thanks!

Hi,

I edited your post to make the code a bit more readable :slight_smile:

I think the main confusion comes from the fact that your class is named model and the instance you create is named model as well. By the time you call forward_model = model().double(), model is already an instance, not the class.
You can rename the class MyModel or something similar and it should make this clearer.
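A minimal sketch of the renaming suggestion, with a hypothetical one-layer model just to show the naming (MyModel is an invented name):

```python
import torch.nn as nn

class MyModel(nn.Module):
    def __init__(self):
        super(MyModel, self).__init__()
        self.conv1 = nn.Conv1d(1, 3, kernel_size=3)

    def forward(self, x):
        return self.conv1(x)

model = MyModel()                   # 'model' is now clearly the instance
forward_model = MyModel().double()  # and MyModel() still refers to the class
```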


Hello!

Thank you for making it look nice and answering :slight_smile:
That was indeed the main confusion!
However, the last two lines shown in the code I posted here still give an error, something with a broken pipe :stuck_out_tongue:
Do you also have any idea on that one?

Thanks so much.


Broken pipe? Do you use multiprocessing?

What is the full error message?

The idea is to eventually run the code on a cluster, yes. This was the error:

ForkingPickler(file, protocol).dump(obj)
BrokenPipeError: [Errno 32] Broken pipe

Not sure if it is PyTorch related…
Maybe try using the spawn start method?
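On platforms that start DataLoader workers with spawn (e.g. Windows), the main script is re-imported by each worker, and without an `if __name__ == '__main__':` guard that re-import commonly crashes with exactly this BrokenPipeError. A minimal sketch of the guard (the dataset and `main` helper here are made up; setting num_workers=0 also sidesteps the issue by loading in the main process):

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# Hypothetical tiny dataset just to illustrate the pattern
dataset = TensorDataset(torch.arange(10.0).unsqueeze(1))

def main():
    # num_workers > 0 spawns worker processes; on Windows that requires the
    # __main__ guard below. num_workers=0 avoids worker processes entirely.
    loader = DataLoader(dataset, batch_size=4, shuffle=False, num_workers=0)
    return next(iter(loader))

if __name__ == '__main__':
    (batch,) = main()  # first batch: shape (4, 1)
```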

Maybe I will try running it on the cluster now since the error is solved. And then see what happens.
Thanks for the tips at least; I’m learning about PyTorch, so it’s good to know about the naming of classes etc. :wink:
Thanks!


Thanks, your answer is very helpful. Maybe I’ll really slam my laptop if I do not find the answer :joy::relieved: