RuntimeError: mat2 must be a matrix, got 1-D tensor

new_RBM.py in v_to_h(self, v)
     37 
     38         p_h = F.sigmoid(
---> 39             F.linear((v), (w).squeeze(), bias=h_bias)
     40         ).cuda()
     41 
RuntimeError: mat2 must be a matrix, got 1-D tensor

I'm having trouble with the input tensor to F.linear. When I remove the .squeeze() call, I get this error instead:

     37 
     38         p_h = F.sigmoid(
---> 39             F.linear((v), (w), bias=h_bias)
     40         ).cuda()
     41 

RuntimeError: The expanded size of the tensor (1) must match the existing size (13000000) at non-singleton dimension 1.  Target sizes: [1, 1].  Tensor sizes: [13000000]

Same part of the code, a different error. I don't know how to solve this problem.

Can you share a minimal reproducible example of this error? (Or at least some of the source code?)

More info on the sizes of v and w would help solve this issue, as it's most likely due to the input tensors not being compatible with the F.linear operation.
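For reference, F.linear(input, weight, bias) computes input @ weight.T + bias, so the shapes have to line up like this (a minimal sketch with made-up sizes):

    import torch
    import torch.nn.functional as F

    batch, in_features, out_features = 2, 5, 3

    x = torch.randn(batch, in_features)              # (2, 5)
    weight = torch.randn(out_features, in_features)  # (3, 5) -- must be 2-D
    bias = torch.randn(out_features)                 # (3,) -- one value per output feature

    out = F.linear(x, weight, bias)  # (2, 3)
    print(out.shape)                 # torch.Size([2, 3])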

I'm not sure if this is helpful, but here it is:

    def v_to_h(self, v):
        v = v.clone().detach().reshape(-1, 13000000)  # v: (1, 13000000)
        h_bias = self.h_bias.clone()                  # h_bias: (13000000,)
        w = self.W.clone()                            # W: (1, 13000000)

        size_msg = '''
        {} {} {}
        '''.format(torch.flatten(v).size(), torch.flatten(w).size(), h_bias.size())
        print(size_msg)

        # line 39 from the traceback: both v and w are flattened to 1-D here
        p_h = F.sigmoid(
            F.linear(torch.flatten(v), torch.flatten(w), bias=h_bias)
        ).cuda()

        sample_h = self.sample_from_p(p_h)
        return p_h, sample_h

Yeah, so the issue is that you're using torch.flatten() on v and w. These two tensors need to be matrices, not vectors; the expected sizes are defined in the documentation here.
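Here's a scaled-down sketch of what I think is happening (stand-in sizes; the shapes mirror yours). With a 2-D input and a bias, F.linear runs a matrix multiply that requires a 2-D weight, so squeezing or flattening the weight down to 1-D triggers exactly that error:

    import torch
    import torch.nn.functional as F

    v = torch.randn(1, 8)    # stand-in for your reshaped v: (1, 13000000)
    w = torch.randn(1, 8)    # stand-in for your W: (1, 13000000)
    h_bias = torch.randn(8)  # stand-in for your h_bias: (13000000,)

    F.linear(v, w.squeeze(), bias=h_bias)
    # RuntimeError: mat2 must be a matrix, got 1-D tensor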

Hmm, okay.
I'm wondering why the output size can't be 13000000. torch.flatten(v) and the bias both have size 13000000, and the original weight has size (1, 13000000). That's the reason I changed the sizes.

Whatever output feature size I set, I get an error.

So the issue is the size of the bias tensor:

    import torch
    import torch.nn.functional as F

    v = torch.randn(1300)          # v.shape returns torch.Size([1300])
    weight = torch.randn(1, 1300)  # weight.shape returns torch.Size([1, 1300])
    bias = torch.randn(1300)       # bias.shape returns torch.Size([1300])

    out = F.linear(v.unsqueeze(0), weight)  # linear without bias
    out.shape  # returns torch.Size([1, 1])

The bias needs to be the same shape as out (so just a single value here). This is the issue.
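For completeness, here's a minimal sketch of a call that works, assuming the usual RBM layout where W is (n_hidden, n_visible) and h_bias is (n_hidden,); the names and sizes are hypothetical. If you really wanted an output of size 13000000, the weight would have to be (13000000, n_visible) instead:

    import torch
    import torch.nn.functional as F

    n_visible, n_hidden = 8, 4  # hypothetical; yours would be n_visible = 13000000

    v = torch.randn(1, n_visible)         # one visible vector, kept 2-D
    W = torch.randn(n_hidden, n_visible)  # weight: (out_features, in_features)
    h_bias = torch.randn(n_hidden)        # bias: one value per hidden unit

    p_h = torch.sigmoid(F.linear(v, W, bias=h_bias))
    print(p_h.shape)  # torch.Size([1, 4])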

Really, thank you… I got another issue with overflow… I will try to figure it out. Thanks a lot.