# What is +1/-1 encoding?

Hi, I found this in a paper about loss functions, but I can't seem to find anything about it elsewhere.
The paper is this one: https://arxiv.org/pdf/1702.05659.pdf
On page 2 you can see that, in order to define the hinge loss and its variants, it uses a -1/+1-encoded label instead of the one-hot-encoded label used for the cross-entropy loss.
Can anyone help me understand how this works, and maybe suggest a way of implementing it in PyTorch?

One-hot encoding is `0/1` encoding (0 for negative, 1 for positive); `-1/+1` encoding works the same way, except the 0s are replaced with -1s. That is why the docs say: "Measures the loss given an input tensor `x` and a labels tensor `y` (containing 1 or -1)".
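To make the relationship concrete, here is a minimal sketch (labels and class count are made up for illustration) that builds a one-hot encoding and derives the -1/+1 encoding from it:

```python
import torch

# Hypothetical example: 3 samples, 3 classes
labels = torch.tensor([0, 2, 1])

# One-hot (0/1) encoding via scatter_ along dim 1
one_hot = torch.zeros(3, 3).scatter_(1, labels.unsqueeze(1), 1)

# -1/+1 encoding: same positions, but the 0s become -1s
plus_minus = one_hot * 2 - 1

print(one_hot)
# tensor([[1., 0., 0.],
#         [0., 0., 1.],
#         [0., 1., 0.]])
print(plus_minus)
# tensor([[ 1., -1., -1.],
#         [-1., -1.,  1.],
#         [-1.,  1., -1.]])
```

The `one_hot * 2 - 1` trick is just one way to do it; you can also start from a tensor of -1s and scatter the +1s in directly, as shown further down in this thread.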

Thank you! Is there a way to perform this encoding in PyTorch?

Not directly. But this post should help. Use `ones` instead of `zeros`, multiply with `-1`, and then `scatter`. Let me know if this is unclear.

Thank you for the link. I do not understand where I need to use `ones` instead of `zeros`.

My apologies! I should have been more specific. `zero_` and not `zeros`.

```python
import torch

batch_size = 5
nb_digits = 10

# Dummy labels; the index tensor HAS to be 2-D for scatter_ (use view(-1, 1) if needed)
y = torch.LongTensor(batch_size, 1).random_() % nb_digits

# For one-hot encoding you would create the buffer out of the loop and keep reusing it:
# y_onehot = torch.FloatTensor(batch_size, nb_digits)
# y_onehot.zero_()

# For -1/+1 encoding, fill the buffer with -1s instead, then scatter the 1s:
y_onehot = -1 * torch.ones(batch_size, nb_digits).long()
y_onehot.scatter_(1, y, 1)

print(y)
print(y_onehot)
```

Thank you! I thought there was a `tensor.ones` that I didn't know about.

Hi, I have tried your implementation on its own and it works perfectly.
But now that I have integrated it into my program, I get the following error:

```
--> 707         y_plusone.scatter_(1, targets, 1)
    708
    709         return y_plusone.to(self.device)

RuntimeError: invalid argument 3: Index tensor must be either empty or have same dimensions as output tensor at /pytorch/aten/src/THC/generic/THCTensorScatterGather.cu:318
```

The function is the following:

```python
def to_plusone(self, targets):
    """Convert targets to +1/-1 encoding for hinge loss."""
    num_classes = self.net.fc.out_features
    batch_size = targets.size()
    y_plusone = -1 * torch.ones(batch_size, num_classes).long().cuda()
    y_plusone.scatter_(1, targets, 1)

    return y_plusone.to(self.device)
```

The `targets` argument is a batch of labels from the CIFAR-100 dataloader.

I also checked the size of the targets:

```
torch.Size()
```

while the size of the `y_plusone` tensor is:

```
torch.Size([64, 10])
```

Can you see anything wrong? Thank you, and sorry for bothering you.

The problem is the shape of your `targets`: `scatter_` requires the index tensor to have the same number of dimensions as the output, but `targets` is 1-D. `targets = targets.unsqueeze(1)` will do. (Note also that `targets.size()` returns a `torch.Size` object, not an integer; use `targets.size(0)` to get the batch size.)
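For reference, a minimal standalone sketch of the corrected conversion (written as a free function so it runs outside your class; the `device` parameter stands in for `self.device`, and `num_classes` for `self.net.fc.out_features`):

```python
import torch

def to_plusone(targets, num_classes, device="cpu"):
    """Convert a 1-D LongTensor of class indices to +1/-1 encoding for hinge loss."""
    batch_size = targets.size(0)  # .size(0), not .size(), to get an integer
    y_plusone = -torch.ones(batch_size, num_classes, dtype=torch.long)
    # scatter_ needs the index tensor to match the output's dimensionality,
    # hence the unsqueeze(1) turning shape [batch_size] into [batch_size, 1]
    y_plusone.scatter_(1, targets.unsqueeze(1), 1)
    return y_plusone.to(device)

targets = torch.tensor([3, 0, 2])
print(to_plusone(targets, num_classes=4))
# tensor([[-1, -1, -1,  1],
#         [ 1, -1, -1, -1],
#         [-1, -1,  1, -1]])
```

Each row has a single +1 in the column of the true class and -1 everywhere else, which is exactly the label format the hinge-loss variants in the paper expect.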
