AttributeError: module 'torch.nn' has no attribute 'Conv2D'

I am getting the following error while trying to use Conv2D from torch.nn:
AttributeError: module 'torch.nn' has no attribute 'Conv2D'

I am wondering why this is happening.

This is the code that I’m running:

model = pretrainedmodels.__dict__['squeezenet1_1'](num_classes=1000, pretrained='imagenet')
model.last_conv = nn.Conv2D(512, 3, 1, 1)

This is my environment information:
PyTorch version: 1.5.1
Is debug build: No
CUDA used to build PyTorch: 10.2

OS: Ubuntu 16.04.6 LTS
GCC version: (Ubuntu 5.4.0-6ubuntu1~16.04.12) 5.4.0 20160609
CMake version: version 3.5.1

Python version: 3.7
Is CUDA available: Yes
CUDA runtime version: 10.0.130
GPU models and configuration: Could not collect
Nvidia driver version: Could not collect
cuDNN version: /usr/lib/x86_64-linux-gnu/libcudnn.so.7.6.5

Versions of relevant libraries:
[pip3] numpy==1.19.0
[pip3] torch==1.5.1
[pip3] torchvision==0.6.1

Hi,
PyTorch layer names use a lowercase d for the dimension (Conv1d, Conv2d, Conv3d), so there is no nn.Conv2D.

Use nn.Conv2d instead:
https://pytorch.org/docs/master/generated/torch.nn.Conv2d.html
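
For reference, the assignment from your snippet with the corrected class name (assuming the positional arguments were meant as in_channels, out_channels, kernel_size, stride):

# Same call, correct class name; args map to
# (in_channels=512, out_channels=3, kernel_size=1, stride=1).
model.last_conv = nn.Conv2d(512, 3, 1, 1)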

Best


Hey, I am getting this error and I don’t know what to do. Could anyone help me resolve it?

AttributeError: module 'torch.nn' has no attribute 'module'

This is the code that produces the above error:

import torch
import torch.nn as nn
import torch.nn.functional as F

class Net(nn.module):
    def __init__(self):
        super().__init__()
        self.conv1 = nn.Conv2d(1, 32, 5)  # here 1=input, 32=output i.e. 32 conv-features, 5=kernel size
        self.conv2 = nn.Conv2d(32, 64, 5)
        self.conv3 = nn.Conv2d(64, 128, 5)

        x = torch.randn(50, 50).view(-1, 1, 50, 50)
        self._to_linear = None
        self.convs(x)

        self.fc1 = nn.Linear(self._to_linear, 512)
        self.fc2 = nn.Linear(512, 2)

    def convs(self, x):
        x = F.max_pool2d(F.relu(self.conv1(x)), (2, 2))
        x = F.max_pool2d(F.relu(self.conv2(x)), (2, 2))
        x = F.max_pool2d(F.relu(self.conv3(x)), (2, 2))

        print(x[0].shape)

        if self._to_linear is None:
            self._to_linear = x[0].shape[0] * x[0].shape[1] * x[0].shape[2]
        return x

    def forward(self, x):
        x = self.convs(x)
        x = x.view(-1, self._to_linear)
        x = F.relu(self.fc1(x))
        x = self.fc2(x)
        return F.softmax(x, dim=1)

net = Net()

You would have to use an uppercase M, i.e. subclass nn.Module, not nn.module.
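
Here is a minimal sketch of the same pattern with the corrected base class name (TinyNet and its layer sizes are illustrative, not your exact model):

import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyNet(nn.Module):  # nn.Module with uppercase M is the base class
    def __init__(self):
        super().__init__()
        self.conv1 = nn.Conv2d(1, 8, 5)
        # 50x50 input -> conv(5, no padding) -> 46x46 -> max_pool(2) -> 23x23
        self.fc1 = nn.Linear(8 * 23 * 23, 2)

    def forward(self, x):
        x = F.max_pool2d(F.relu(self.conv1(x)), (2, 2))
        x = x.view(x.size(0), -1)
        return F.softmax(self.fc1(x), dim=1)

net = TinyNet()
out = net(torch.randn(1, 1, 50, 50))  # runs once the base class name is correct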