torch.Size([]) (target size != input size)

Hi, I’m trying to create a linear regression neural network. It’s my first time using PyTorch, and I’m using multiple inputs. However, I keep stumbling into a problem where my target size differs from my input size at the criterion function. My output has size [1] and the target size [], which is where I got stuck: I don’t understand how the target can be that size, since it contains a number (I’m using this dataset: https://www.kaggle.com/uciml/pima-indians-diabetes-database ). I tried searching for this problem but nothing really helped. Sorry if this is a stupid question or I’m doing something wrong; I’ve never worked with PyTorch before.
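For illustration, a minimal standalone snippet that reproduces the same size mismatch (the sizes are taken from the message above):

import torch
import torch.nn as nn

criterion = nn.MSELoss()
outputs = torch.randn(1)           # size [1], like my model's output
target = torch.tensor(0.0)         # size [] -- a 0-dim scalar
loss = criterion(outputs, target)  # warns: target size != input size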
Here is the code:

import pandas as pd
import torch
import torch.nn as nn

# Load the data: 8 feature columns plus 1 target column, no header row
pdTrain = pd.read_csv("train.csv", header=None)
pdTest = pd.read_csv("test.csv", header=None)
tmpTrain = pdTrain.values
tmpTest = pdTest.values
trainDataset = torch.from_numpy(tmpTrain).float()
testDataset = torch.from_numpy(tmpTest).float()

batch_size = 100
n_iters = 30
# Derive the number of epochs from the desired total iteration count
epochs = int(n_iters / (len(trainDataset) / batch_size))

train_loader = torch.utils.data.DataLoader(dataset=trainDataset,
                                           batch_size=batch_size,
                                           shuffle=True)
test_loader = torch.utils.data.DataLoader(dataset=testDataset,
                                          batch_size=batch_size,
                                          shuffle=False)

class LinearRegression(nn.Module):
    def __init__(self, input_size, output_size):
        super(LinearRegression, self).__init__()
        self.linear = nn.Linear(input_size, output_size)

    def forward(self, x):
        return self.linear(x)

input_size = 8
output_size = 1
learning_rate = 0.002
model = LinearRegression(input_size, output_size)
criterion = nn.MSELoss()
optimizer = torch.optim.SGD(model.parameters(), lr=learning_rate)

for epoch in range(1, epochs + 1):
    for batch in train_loader:
        for row in batch:        # one sample at a time
            inputs = row[0:8]    # first 8 columns are the features
            result = row[8]      # last column is the target
            optimizer.zero_grad()
            outputs = model(inputs)
            loss = criterion(outputs, result)  # problem
            loss.backward()
            optimizer.step()
    print('epoch {}, loss {}'.format(epoch, loss))

Hi,

A tensor with no dimensions (size []) is a scalar.
You can make the sizes match by calling .squeeze() on the output.
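For illustration, a minimal sketch of the shapes involved and the fix (names follow the code above; calling .unsqueeze(0) on the target instead would work just as well):

import torch
import torch.nn.functional as F

row = torch.randn(9)       # one sample: 8 features plus the target
result = row[8]            # indexing a 1-D tensor yields a 0-dim scalar
print(result.shape)        # torch.Size([])

outputs = torch.randn(1)   # the model's output for one sample
print(outputs.shape)       # torch.Size([1])

# Squeeze the output down to a scalar so both sides have size []:
loss = F.mse_loss(outputs.squeeze(), result)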

Thanks, that helped with the size issue. However, the loss shrinks for a few iterations, then jumps to infinity and then to NaN, where it stays for the rest of the run.

EDIT: changing the learning rate helped with that. However, the loss still jumps around without decreasing much, and it gets way outside the 0-1 range.
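For reference, two things that often tame a jumpy loss in a setup like this are standardizing the raw feature columns and updating on whole mini-batches instead of single rows. A minimal sketch, assuming (as in the code above) that columns 0-7 are the features and column 8 is the target:

# Standardize features using statistics from the training set only:
mean = trainDataset[:, :8].mean(dim=0)
std = trainDataset[:, :8].std(dim=0)
trainDataset[:, :8] = (trainDataset[:, :8] - mean) / std
testDataset[:, :8] = (testDataset[:, :8] - mean) / std

# Update on the whole mini-batch at once instead of row by row:
for epoch in range(1, epochs + 1):
    for batch in train_loader:
        inputs = batch[:, :8]
        targets = batch[:, 8:9]  # slicing keeps shape [batch, 1], matching the output
        optimizer.zero_grad()
        outputs = model(inputs)
        loss = criterion(outputs, targets)
        loss.backward()
        optimizer.step()
    print('epoch {}, loss {}'.format(epoch, loss.item()))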