max_pool2d(): argument 'input' (position 1) must be Tensor, not ReLU

I am running this neural network:

```python
class MultiLabelNN(nn.Module):
    def __init__(self):
        super(MultiLabelNN, self).__init__()
        self.conv1 = nn.Conv2d(3, 64, 5)
        self.pool = nn.MaxPool2d(2, 2)
        self.conv2 = nn.Conv2d(64, 128, 5)
        self.conv3 = nn.Conv2d(128, 256, 5)
        self.conv4 = nn.Conv2d(256, 320, 5)
        self.fc1 = nn.Linear(250880, 2048)
        self.fc2 = nn.Linear(2048, 1024)
        self.fc3 = nn.Linear(1024, 512)
        self.fc4 = nn.Linear(512, 6)

    def forward(self, x):
        x = self.conv1(x)
        x = nn.ReLU(x)
        x = self.pool(x)
        x = self.conv2(x)
        x = nn.ReLU(x)
        x = self.pool(x)
        x = self.conv3(x)
        x = nn.ReLU(x)
        x = self.pool(x)
        x = self.conv4(x)
        x = nn.ReLU(x)
        x = self.pool(x)
        x = x.view(-1, 250880)
        x = self.fc1(x)
        x = self.fc2(x)
        x = self.fc3(x)
        x = self.fc4(x)
        return x
```

but I am getting this error:

```
TypeError: max_pool2d(): argument 'input' (position 1) must be Tensor, not ReLU
```

Please help me. I am not able to understand what is wrong here.


`nn.ReLU` is a class, not a function.
When you write `x = nn.ReLU(x)` you are instantiating the class `nn.ReLU`, not computing a ReLU. You can either replace `nn.ReLU(x)` with its functional counterpart `nn.functional.relu(x)`, or instantiate the activation once in `__init__` with `self.relu = nn.ReLU()` and replace `x = nn.ReLU(x)` with `x = self.relu(x)`.
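Both the bug and the two fixes can be demonstrated on a scaled-down model. This is a sketch assuming PyTorch is installed; `TinyNet` and its layer sizes are made up for illustration, they are not the poster's model:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# The bug: nn.ReLU(x) constructs a ReLU module (x is silently taken as
# the `inplace` constructor argument) instead of applying ReLU to x.
buggy = nn.ReLU(torch.randn(1, 3, 8, 8))
print(type(buggy))  # <class 'torch.nn.modules.activation.ReLU'>

class TinyNet(nn.Module):
    """Scaled-down model showing both correct ways to apply ReLU."""
    def __init__(self):
        super().__init__()
        self.conv1 = nn.Conv2d(3, 4, 3)
        self.pool = nn.MaxPool2d(2, 2)
        self.relu = nn.ReLU()             # fix 1: instantiate once in __init__
        self.fc1 = nn.Linear(4 * 3 * 3, 6)

    def forward(self, x):
        x = self.relu(self.conv1(x))      # fix 1: call the stored module
        x = self.pool(x)                  # pooling now receives a Tensor
        x = F.relu(x)                     # fix 2: the functional form
        x = x.view(-1, 4 * 3 * 3)
        return self.fc1(x)

out = TinyNet()(torch.randn(2, 3, 8, 8))
print(out.shape)  # torch.Size([2, 6])
```

Either style works; the module form plays nicely with `nn.Sequential`, while the functional form saves an attribute for stateless activations.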


Thank you so much for your reply :blush:
