Loss criterions error

input = torch.randn(3, 5, requires_grad=True)

Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: torch.randn received an invalid combination of arguments - got (int, int, requires_grad=bool), but expected one of:

  • (int … size)
    didn’t match because some of the keywords were incorrect: requires_grad
  • (torch.Size size)
  • (torch.Generator generator, int … size)
    didn’t match because some of the keywords were incorrect: requires_grad
  • (torch.Generator generator, torch.Size size)

I'm getting the above error for both MSELoss and CrossEntropyLoss. Could you please suggest a way to fix it?

torch.randn has no “requires_grad” parameter. It returns "a tensor filled with random numbers from a normal distribution with zero mean and variance of one."
You can then wrap that Tensor in a Variable:

input = torch.autograd.Variable(torch.randn((3,5)), requires_grad=True)

Edit: updated link to docs from version 0.3.1

@Happy_NewYear torch.randn has requires_grad on the master branch. If you're not building from source, you should go with @apsvieira's suggestion.
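For reference, on PyTorch 0.4 and later the requires_grad keyword is accepted directly by factory functions, so no Variable wrapper is needed (a sketch assuming a recent release):

```python
import torch

# On PyTorch >= 0.4, factory functions like torch.randn accept
# requires_grad directly; the Variable wrapper is no longer needed.
input = torch.randn(3, 5, requires_grad=True)
print(input.requires_grad)  # True
```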


I'm still getting an error with this code.
Can you please tell me why?

loss = nn.MSELoss()
input = torch.autograd.Variable(torch.randn((3,5)), requires_grad=True)
target = torch.randn(3, 5)
output = loss(input, target)
output.backward()

Are you getting the same error? Could you post the error message?
I believe you should also wrap your target in a Variable. That may be the problem.

Tried that as well.
AssertionError: nn criterions don’t compute the gradient w.r.t. targets - please mark these variables as volatile or not requiring gradients

That indicates that you are using requires_grad=True for your target Variable as well, but that should be set to False, as the gradients will be computed w.r.t. the inputs.
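To illustrate, the snippet runs cleanly when only the input requires gradients; the target is created without requires_grad, which defaults to False (a sketch for a recent PyTorch version; on 0.3.x keep the Variable wrapper around the input):

```python
import torch
import torch.nn as nn

loss = nn.MSELoss()
input = torch.randn(3, 5, requires_grad=True)  # gradients flow to the input
target = torch.randn(3, 5)                     # requires_grad defaults to False
output = loss(input, target)
output.backward()
print(input.grad.shape)   # gradient w.r.t. the input exists
print(target.grad)        # None: no gradient was computed for the target
```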

Okay, so we need to set that explicitly. :slight_smile: Thanks

I haven't really tried that, but this reply indicates how you can differentiate w.r.t. the targets.
Hope it helps.
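One way to differentiate w.r.t. the target is to write the loss out manually instead of using the nn criterion, so autograd can track both tensors (a minimal sketch; this is an assumption about the approach, not necessarily what the linked reply does):

```python
import torch

input = torch.randn(3, 5, requires_grad=True)
target = torch.randn(3, 5, requires_grad=True)

# nn.MSELoss rejects targets that require grad (on older versions),
# but writing the mean-squared error by hand lets autograd
# differentiate with respect to both input and target.
loss = ((input - target) ** 2).mean()
loss.backward()
print(input.grad.shape)   # gradient w.r.t. the input
print(target.grad.shape)  # gradient w.r.t. the target
```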