nn.MSELoss not working

import torch.nn as nn
import torch

class basicLSTM(nn.Module):
    def __init__(self,args):
        super(basicLSTM, self).__init__()
        self.rnn = nn.LSTM(1, 10, 1, batch_first=True)  # input_size, hidden_size, num_layers
        self.regressor = nn.Linear(10, 1)
    
    def forward(self, x, hidden):
        output, (_, _) = self.rnn(x,hidden)
        output = self.regressor(output)
        return output.squeeze(2)  # shape: (batch, seq_len) = (1, N)
    
    def init_hidden(self,device,bsz=1):
        return (torch.zeros((1,bsz, 10),device=device),torch.zeros((1,bsz, 10),device=device))
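
For completeness, the surrounding setup looks roughly like this (a minimal sketch; device, model_, hidden, and criterion are not shown in the original script, so these lines are assumptions):

device = torch.device("cpu")           # assumption: CPU; replace with CUDA if available
model_ = basicLSTM(None).to(device)    # args is accepted but unused in __init__
hidden = model_.init_hidden(device)
criterion = nn.MSELoss()               # the loss under discussion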

for batchNum, (feature1, feature2) in enumerate(zip(Ldata.train, Ldata.trainTarget)):
    # shape: (1, N, 1) -- batch, sequence, feature
    f_ = torch.tensor(feature1 + feature2, dtype=torch.float32, device=device).unsqueeze(0).unsqueeze(2)
    target = torch.clone(f_.squeeze(2)[1:])
    out = model_(f_, hidden)
    print(out)
    print(target)
    loss = criterion(out[:-1], target)
    # print(out[:-1] - target)
    print(loss)

When I print loss, it shows an empty tensor.

When I print (out - target), it shows an empty tensor too.

I think the problem lies in grad_fn, but I'm not sure…

Can anyone help me with this?

out[:-1] is an empty tensor, hence you are getting an empty tensor as the loss.
Pass out instead of out[:-1] to criterion.
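
If the intent was a one-step-ahead target (as the target = f_.squeeze(2)[1:] line suggests), another option is to shift along dim 1, the time axis, so both tensors keep their batch dimension. A minimal sketch based on the code above:

target = f_.squeeze(2)[:, 1:]            # drop the first time step, keep the batch dim
loss = criterion(out[:, :-1], target)    # drop the last prediction, keep the batch dim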

They are not empty tensors, as you can see in the picture.
Their shapes are (1, N).

@torchMaster: As per the output, you have printed the out tensor. So if out has shape (1, N), shouldn't out[:-1] be an empty tensor? Also check whether requires_grad=True is enabled for the tensor. Hope this helps!

>>> x = torch.ones(1,30)
>>> x
tensor([[1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1.,
         1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1.]])
>>> x[:-1]
tensor([], size=(0, 30))
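
For comparison, slicing along dim 1 keeps the batch dimension and drops the last time step instead:

>>> x[:, :-1].shape
torch.Size([1, 29])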

Yeah @sukhoi @bhushans23, you are right.
I got the data shape wrong. Thank you!