Having trouble getting the correct dimensions/shape to feed into a neural network in PyTorch

I am trying to feed a tensor of a certain size into my linear layer, but I am struggling to get the dimensions right (I am not too experienced with this).

I realize that I have to change the dimensions of my linear layer (self.linear1), but I don't know what dimensions to set it to. I am currently feeding in a tensor of shape [32, 1164].

This is a summary of the print output as well as the error I get:

torch.Size([32, 1152])
torch.Size([32, 1])
x.shape:  torch.Size([32, 1164])


 File "/home/vanstorm/Documents/Programming/reinforcement-learning/curiousModule.py", line 113, in forward
    y = F.relu(self.linear1(x))
  File "/home/vanstorm/.local/lib/python3.8/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/home/vanstorm/.local/lib/python3.8/site-packages/torch/nn/modules/linear.py", line 93, in forward
    return F.linear(input, self.weight, self.bias)
  File "/home/vanstorm/.local/lib/python3.8/site-packages/torch/nn/functional.py", line 1690, in linear
    ret = torch.addmm(bias, input, weight.t())

RuntimeError: mat1 dim 1 must match mat2 dim 0

Apparently my dimensions are incorrect, and I am not too experienced with handling different tensor sizes in PyTorch. What would be suitable dimensions for my self.linear1 and self.linear2?

Can anyone also explain the logic behind the correct dimensions so that I can avoid this in the future? How would I calculate the desired dimensions of a linear layer?

Code I am using:

class Fnet(nn.Module):  # forward-model network
    def __init__(self):
        super(Fnet, self).__init__()
        self.linear1 = nn.Linear(300, 256)
        self.linear2 = nn.Linear(256, 288)

    def forward(self, state, action):
        # build a one-hot encoding of the action (12 possible actions)
        action = action.unsqueeze(1)
        action_ = torch.zeros(action.to(device).shape[0], 12).to(device)
        indices = torch.stack((torch.arange(action.shape[0]).to(device), action.squeeze().to(device)), dim=0)
        indices = indices.tolist()
        action_[indices] = 1.

        # concatenate the state features with the one-hot action along dim 1
        x = torch.cat((state, action_), dim=1)
        print(state.shape)
        print(action.shape)
        x = torch.flatten(x, start_dim=1)
        print('x.shape: ', x.shape)

        y = F.relu(self.linear1(x))
        y = self.linear2(y)
        return y

Can anyone help me?

It looks like your two linear layers match each other (linear1 outputs 256 features and linear2 takes 256 as input), but the first layer does not match the size of the input tensor. The error message reflects the matrix multiply inside nn.Linear: mat1 is your input of shape [32, 1164], while mat2 is the layer's weight matrix, which was built for 300 input features, and 1164 != 300. To match your input, the first layer should have an input dimension of 1164:

self.linear1 = nn.Linear(1164, 256)
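Here is where 1164 comes from: nn.Linear(in_features, out_features) requires in_features to equal the last dimension of the tensor you pass in. Your state has 1152 features and the one-hot action_ has 12, so after torch.cat((state, action_), dim=1) each sample has 1152 + 12 = 1164 features. Below is a minimal sketch of the shape arithmetic with dummy tensors (the 32 batch size, 256 hidden size, and 288 output size are taken from your code):

import torch
import torch.nn as nn
import torch.nn.functional as F

state_features = 1152  # from print(state.shape) -> torch.Size([32, 1152])
num_actions = 12       # width of the one-hot action_ tensor
in_features = state_features + num_actions  # 1164, the width after torch.cat

linear1 = nn.Linear(in_features, 256)  # was nn.Linear(300, 256)
linear2 = nn.Linear(256, 288)

x = torch.randn(32, in_features)  # dummy batch shaped like your x
y = F.relu(linear1(x))            # works: 1164 input features match
y = linear2(y)                    # 256 -> 288
print(y.shape)                    # torch.Size([32, 288])

The general rule: in_features of a linear layer must equal the last dimension of its input, and out_features of one layer must equal in_features of the next. If you would rather not work the input size out by hand, newer PyTorch versions also provide nn.LazyLinear(out_features), which infers in_features from the first batch it sees.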