[solved] Apply a `nn.Linear()` layer only to a part of a Variable

Hi Simon. I have one more doubt regarding the `.data` access. Suppose I have a network which outputs probabilities of the form:

>>> probs
Variable containing:
-0.1406  0.3101
[torch.FloatTensor of size 1x2]

Now, I want to choose the loss based on the values of the elements of probs, i.e.

if probs[0, 0] > probs[0, 1]:
    # use loss function 1
else:
    # use loss function 2

Directly using this results in
*** RuntimeError: bool value of Variable objects containing non-empty torch.ByteTensor is ambiguous.

So, instead as suggested here: How to use condition flow?, we can use something like:

if probs.data[0, 0] > probs.data[0, 1]:
    # use loss function 1
else:
    # use loss function 2
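
For concreteness, here is a minimal, self-contained sketch of that branching pattern. It uses a recent PyTorch version (where Variable has merged into Tensor), and the two loss functions are hypothetical placeholders standing in for "loss function 1" and "loss function 2":

```python
import torch

# Stand-in for the network output from the post; in newer PyTorch a
# plain tensor with requires_grad=True plays the role of a Variable.
probs = torch.tensor([[-0.1406, 0.3101]], requires_grad=True)
target = torch.tensor([[0.0, 1.0]])  # hypothetical target

# The comparison only selects which branch runs; it is not itself
# recorded in the autograd graph.
if probs[0, 0].item() > probs[0, 1].item():
    loss = ((probs - target) ** 2).mean()  # "loss function 1" (MSE)
else:
    loss = (probs - target).abs().mean()   # "loss function 2" (L1)

loss.backward()
# Gradients reach probs through whichever loss was actually computed.
print(probs.grad)
```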

So my question is: does this not interfere with autograd? That is, how can the history be traced back to the actual values at the first and second index of probs?

  • If it does not interfere with autograd: please explain why not, and
  • if it does interfere with autograd: what is the correct way to do this?

Note: I am using an earlier version of PyTorch, which is why Variable objects appear here.

Thanks.