AttributeError: 'tuple' object has no attribute 't'

Hi all,
I am using a feature extraction module as given below:

class FeatureExtractionModule(nn.Module):
    def __init__(self, feature_dimension, input_channels, kernel_size=5, dropout_p=0.3, leakiness=0.01):
        super(FeatureExtractionModule,self).__init__()
        # Defining the hyperparameters of the convolutional feature extraction network
        self.feature_dimension = feature_dimension
        self.input_channels = input_channels
        self.kernel_size  = kernel_size
        self.dropout_p = dropout_p 
        self.leakiness = leakiness

        # Below defined are the smaller modules (individual operations) in the convolutional network using the hyperparameters defined above
        self.conv1 = nn.Conv2d(self.input_channels,64,kernel_size = self.kernel_size)
        self.bn1 = nn.BatchNorm2d(64)
        self.max_pool1  = nn.MaxPool2d(2)
        self.leaky_relu1 = nn.LeakyReLU(negative_slope = self.leakiness, inplace=True)
        self.conv2 = nn.Conv2d(64,50,kernel_size = self.kernel_size)
        self.bn2 = nn.BatchNorm2d(50)
        self.drop1 = nn.Dropout2d(p=self.dropout_p)
        self.max_pool2 = nn.MaxPool2d(2)
        self.leaky_relu2 = nn.LeakyReLU(negative_slope = self.leakiness, inplace=True)
        self.flatten = nn.Flatten()

    # Defining the forward function which does the forward pass through the network - this function returns the features extracted from an image when it is passed through the feature extraction CNN
    def forward(self,x):
        x = self.conv1(x)
        x = self.bn1(x)
        x = self.max_pool1(x)
        x = self.leaky_relu1(x)
        x = self.conv2(x)
        x = self.bn2(x)
        x = self.drop1(x)
        x = self.max_pool2(x)
        x = self.leaky_relu2(x)
        x = self.flatten(x)
        x = F.linear(x,(self.feature_dimension,x.shape[1]))
        return x

But whenever I do a forward pass through this model, I get the following error on the second-to-last line
(x = F.linear(x,(self.feature_dimension,x.shape[1]))):

  File "/Users/megh/Work/github-repos/iisc_project/deep-domain-adaptation/dann/mnist/scripts/train/cnn_modules.py", line 45, in forward
    x = F.linear(x,(self.feature_dimension,x.shape[1]))
  File "/Users/megh/anaconda3/envs/pytorch/lib/python3.8/site-packages/torch/nn/functional.py", line 1612, in linear
    output = input.matmul(weight.t())
AttributeError: 'tuple' object has no attribute 't'

I have been trying to find an answer to this but have not been successful so far; I am not sure why the linear layer throws this error. Please help if anyone has any ideas.

Thanks

Hi,

You can check the doc for F.linear here: the weight and bias should be passed one after the other as separate Tensors, not as a tuple of two Tensors.
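
For reference, a minimal sketch of the expected call (the shapes here are made up for illustration):

import torch
import torch.nn.functional as F

x = torch.randn(8, 120)          # batch of 8 flattened feature vectors
weight = torch.randn(30, 120)    # an (out_features, in_features) weight Tensor
bias = torch.randn(30)           # optional bias Tensor, passed separately
out = F.linear(x, weight, bias)  # out has shape (8, 30)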

Thanks for your suggestion @albanD,
When I changed it to the following, it worked:
x = F.linear(x,nn.Parameter(torch.randn(self.feature_dimension, x.shape[1])))
Also, just a small question: the parameters here are learnable, right (asking in general about parameters passed to torch.nn.functional)?
Thanks

Few things:

  • The value of your bias is the size of dimension 1 of x? Is that expected?
  • Wrapping Tensors into nn.Parameter does nothing if it is not set as an attribute of an nn.Module, so you should remove it here.
  • If you want learnable parameters, you should check intros to PyTorch. Namely, you need to define the parameters on the nn.Module so that you can pass them to your optimizer; a minimal sketch is below.
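
For illustration, a minimal sketch of registering the weight so the optimizer sees it (Head is a hypothetical module name, and in_features=800 is a made-up placeholder for the flattened feature size):

import torch
import torch.nn as nn
import torch.nn.functional as F

class Head(nn.Module):
    def __init__(self, feature_dimension, in_features):
        super().__init__()
        # Set as an attribute of the nn.Module, so it is registered
        # and shows up in model.parameters()
        self.weight = nn.Parameter(torch.randn(feature_dimension, in_features))

    def forward(self, x):
        return F.linear(x, self.weight)

model = Head(feature_dimension=30, in_features=800)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)  # weight is learnable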

Hi,

  1. I am ignoring the bias term here; I just want a weight term. So, according to the docs, the second parameter of F.linear must be a matrix W of shape (out_dim, in_dim), right?
  2. Oh, so it is of no use if I don’t define self.w = nn.Parameter(torch.randn(row, col))?
  3. Actually, my whole idea behind doing this was that I do not want to manually set a number for the output of this model, since that is determined anyway by the number of pixels in my feature map just before I flatten it. That is why I wanted to make my model code agnostic of that. Please let me know if I am not being clear, and whether there is some way to handle this. I could just forget all this hassle and type in the numbers for whatever dimensions I need, but I wanted a slightly better way to do it.
  1. Oh sorry, I misread the parentheses; yes, this is right.
  2. Yes
  3. I am not sure I understand the issue here; is it that you don’t know x.shape[1] during the init function?
    The simplest option is to precompute it and define the weights in the init (this makes sure the optimizer sees them properly).
    Otherwise you can do something like this in the forward:
if not hasattr(self, "weight"):
    # First forward: create the weight lazily, now that x.shape[1] is known
    self.weight = nn.Parameter(torch.randn(self.feature_dimension, x.shape[1]))
x = F.linear(x, self.weight)

But the problem is that you will need to wait until after the first forward to create the optimizer. And you might need to move self.weight to a different device.
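
For illustration, one way to sidestep the device issue is to create the parameter directly on the input's device (a sketch under the same lazy-init assumption as above):

if not hasattr(self, "weight"):
    # Create the parameter on the same device and dtype as the incoming batch
    self.weight = nn.Parameter(
        torch.randn(self.feature_dimension, x.shape[1],
                    device=x.device, dtype=x.dtype))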

Thanks for the info.
Yes, that’s exactly right - I do not know (or at least I want to assume that I do not know) the dimension x.shape[1]. By precompute, do you mean manually calculate the dimension and then just enter the appropriate dimensions in the __init__ function (i.e., when I am defining the individual modules like the convolutional or FC layers)?

When you define all the other modules (conv/pool/etc.), that determines the size that your last layer needs to take. It is not unknown, it is just tricky to compute :smiley:
Note that you could also feed a dummy input in the init and run part of the model to get the size of the last layer; a sketch is below.
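
For instance, a minimal sketch of that dummy-input trick, assuming the same conv stack as above (only the size-changing layers are kept for brevity, and the 28x28 input size is a made-up placeholder):

import torch
import torch.nn as nn
import torch.nn.functional as F

class FeatureExtractionModule(nn.Module):
    def __init__(self, feature_dimension, input_channels, input_size=28):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(input_channels, 64, kernel_size=5),
            nn.MaxPool2d(2),
            nn.Conv2d(64, 50, kernel_size=5),
            nn.MaxPool2d(2),
            nn.Flatten(),
        )
        # Run a dummy input through the conv stack to discover the flattened size
        with torch.no_grad():
            dummy = torch.zeros(1, input_channels, input_size, input_size)
            flat_dim = self.features(dummy).shape[1]
        # The final weight can now be defined in __init__ with the right shape,
        # so the optimizer sees it from the start
        self.weight = nn.Parameter(torch.randn(feature_dimension, flat_dim))

    def forward(self, x):
        return F.linear(self.features(x), self.weight)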

Yes, that’s correct. Thanks for all your help! :grin: