Output dimensions for Logistic regression

I am trying to build a logistic regression model using the PyTorch framework…
I want binary logistic regression with “m” parameters.

Hence I thought the dimension of the network would be [m, 2] (2 for the two binary classes).
But it is giving 2*m parameters or weights.

When I changed the dimension to [m, 1], I get the following error while running the training program:

RuntimeError                              Traceback (most recent call last)
in ()
     26 outputs = model(data_X)
     27 labels = labels.squeeze_()
---> 28 loss = criterion(outputs, labels)
     29 loss.backward()
     30 optimizer.step()

3 frames
/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)
    545             result = self._slow_forward(*input, **kwargs)
    546         else:
--> 547             result = self.forward(*input, **kwargs)
    548         for hook in self._forward_hooks.values():
    549             hook_result = hook(self, input, result)

/usr/local/lib/python3.6/dist-packages/torch/nn/modules/loss.py in forward(self, input, target)
    914     def forward(self, input, target):
    915         return F.cross_entropy(input, target, weight=self.weight,
--> 916                                ignore_index=self.ignore_index, reduction=self.reduction)

/usr/local/lib/python3.6/dist-packages/torch/nn/functional.py in cross_entropy(input, target, weight, size_average, ignore_index, reduce, reduction)
   1993     if size_average is not None or reduce is not None:
   1994         reduction = _Reduction.legacy_get_string(size_average, reduce)
-> 1995     return nll_loss(log_softmax(input, 1), target, weight, None, ignore_index, None, reduction)

/usr/local/lib/python3.6/dist-packages/torch/nn/functional.py in nll_loss(input, target, weight, size_average, ignore_index, reduce, reduction)
   1822                         .format(input.size(0), target.size(0)))
   1823     if dim == 2:
-> 1824         ret = torch._C._nn.nll_loss(input, target, weight, _Reduction.get_enum(reduction), ignore_index)
   1825     elif dim == 4:
   1826         ret = torch._C._nn.nll_loss2d(input, target, weight, _Reduction.get_enum(reduction), ignore_index)

RuntimeError: Assertion `cur_target >= 0 && cur_target < n_classes' failed. at /pytorch/aten/src/THNN/generic/ClassNLLCriterion.c:94
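The error can be reproduced with a minimal snippet (the feature count and data below are made up; the labels contain the class index 1, which is out of range when the model has only one output column):

```python
import torch
import torch.nn as nn

m = 4                                 # hypothetical number of input features
model = nn.Linear(m, 1)               # output_dim = 1, i.e. a single output column
criterion = nn.CrossEntropyLoss()

data_X = torch.randn(8, m)
labels = torch.tensor([0, 1, 0, 1, 0, 1, 0, 1])  # class index 1 appears

outputs = model(data_X)               # shape [8, 1] -> the loss sees only 1 class
try:
    loss = criterion(outputs, labels)
except Exception as e:                # target index 1 is out of bounds for 1 class
    print(type(e).__name__, e)
```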

When using neural networks for classification, you usually output as many values as there are classes and use a softmax-like layer to map them to values between 0 and 1, which you then feed into a CrossEntropy-like loss function.

Indeed, cross entropy expects as many outputs (and hence sets of weights) as there are classes.
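For example, the usual two-class setup keeps two outputs and lets nn.CrossEntropyLoss apply the softmax internally (the feature count m = 4 below is just a placeholder):

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
m = 4                               # hypothetical number of input features
model = nn.Linear(m, 2)             # one output column per class -> 2*m weights
criterion = nn.CrossEntropyLoss()   # log-softmax + NLL in one call

data_X = torch.randn(8, m)
labels = torch.randint(0, 2, (8,))  # class indices 0 or 1, shape [8]

loss = criterion(model(data_X), labels)
loss.backward()
```

Note that the targets here are integer class indices, not one-hot vectors.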

If you want to do regression to a single value, then you might want to use a different loss function.
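If the goal is a binary model with a single output, and hence only m weights plus a bias, one common option is nn.BCEWithLogitsLoss, which combines a sigmoid with binary cross entropy. A minimal sketch with a made-up feature count:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
m = 4                                  # hypothetical number of input features
model = nn.Linear(m, 1)                # m weights + 1 bias
criterion = nn.BCEWithLogitsLoss()     # sigmoid + binary cross entropy, numerically stable

data_X = torch.randn(8, m)
labels = torch.randint(0, 2, (8, 1)).float()  # float targets, same shape as the output

loss = criterion(model(data_X), labels)
loss.backward()
```

Unlike CrossEntropyLoss, this loss takes float targets with the same shape as the model output rather than integer class indices.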

Thank you, Sir.
But I am building a logistic regression model with “m” parameters using the PyTorch “nn” module…

If we use output_dim=2, we get 2*m weights, which is not required.

If I change output_dim=1, I get an error…
Is there any other way to develop a logistic regression model in PyTorch without the nn module?

A quick Google search led to this, for example. But there are many people doing logistic regression with PyTorch; I’m sure you can find an example that suits your needs.

And as I mentioned above, the problem you see arises because in the first case you do classification with a negative log-likelihood loss, so 2*m weights are expected.
In the second case, you change the number of weights while keeping the same loss, so it is expected not to work.
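For completeness, logistic regression can also be written without the nn module, using plain tensors and autograd. A sketch with made-up data (the toy labels are derived from the first feature so that training can actually drive the loss down):

```python
import torch

torch.manual_seed(0)
m = 4                                        # hypothetical number of input features
data_X = torch.randn(64, m)
labels = (data_X[:, :1] > 0).float()         # toy rule: class 1 iff first feature > 0

w = torch.zeros(m, 1, requires_grad=True)    # exactly m weights, as desired
b = torch.zeros(1, requires_grad=True)

lr = 0.5
for _ in range(200):
    # clamp keeps log() away from 0 for numerical safety
    p = torch.sigmoid(data_X @ w + b).clamp(1e-6, 1 - 1e-6)
    loss = -(labels * torch.log(p) + (1 - labels) * torch.log(1 - p)).mean()
    loss.backward()
    with torch.no_grad():                    # plain gradient-descent update
        w -= lr * w.grad
        b -= lr * b.grad
    w.grad.zero_()
    b.grad.zero_()
```

This is the same model as nn.Linear(m, 1) with BCEWithLogitsLoss, just written out by hand.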