Hey,
I’ve been trying to implement the Weighted Approximate-Rank Pairwise loss (WARP loss) from https://arxiv.org/pdf/1312.4894.pdf and wanted to check with folks here whether my implementation is correct, since I can’t seem to find a solid resource on writing custom loss layers in PyTorch.
Here’s the code:
import numpy as np
import torch
import torch.nn as nn

class WARPLoss(nn.Module):
    def __init__(self, num_labels=204):
        super(WARPLoss, self).__init__()
        # Precompute the rank weights L(k) = 1 + 1/2 + ... + 1/k.
        self.rank_weights = [1.0 / 1]
        for i in range(1, num_labels):
            # Parenthesize the denominator: 1.0/(i+1), not (1.0/i)+1.
            self.rank_weights.append(self.rank_weights[i - 1] + 1.0 / (i + 1))

    def forward(self, input, target):
        """
        :param input: Score tensor of size batch x n_labels.
        :param target: Ground truth multi-hot tensor of size batch x n_labels.
        :return: WARP loss summed over the batch.
        """
        batch_size = target.size()[0]
        n_labels = target.size()[1]
        max_num_trials = n_labels - 1
        loss = 0.0
        for i in range(batch_size):
            for j in range(n_labels):
                if target[i, j] == 1:
                    # Indices of this sample's negative labels.
                    neg_labels_idx = np.array([idx for idx, v in enumerate(target[i, :]) if v == 0])
                    # Sample negatives until one violates the margin; the first
                    # draw counts as trial 1, so num_trials can never be 0.
                    neg_idx = np.random.choice(neg_labels_idx)
                    sample_score_margin = 1 - input[i, j] + input[i, neg_idx]
                    num_trials = 1
                    while sample_score_margin < 0 and num_trials < max_num_trials:
                        neg_idx = np.random.choice(neg_labels_idx)
                        num_trials += 1
                        sample_score_margin = 1 - input[i, j] + input[i, neg_idx]
                    # Estimated rank of the positive label; cast to int so it
                    # can index rank_weights (np.floor returns a float).
                    r_j = int(np.floor(max_num_trials / num_trials))
                    weight = self.rank_weights[r_j]
                    for k in range(n_labels):
                        if target[i, k] == 0:
                            score_margin = 1 - input[i, j] + input[i, k]
                            loss += weight * torch.clamp(score_margin, min=0.0)
        return loss
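For context, I’m instantiating and calling it like this; the shapes and targets below are just a made-up toy example with 5 labels, not my real data:

criterion = WARPLoss(num_labels=5)              # toy label count for the example
scores = torch.randn(2, 5, requires_grad=True)  # fake scores: batch of 2, 5 labels
labels = torch.tensor([[1., 0., 0., 1., 0.],
                       [0., 1., 0., 0., 0.]])   # made-up multi-hot targets
loss = criterion(scores, labels)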
Would autograd work correctly on this code, or am I doing something wrong? I could also write the backward pass myself if that makes more sense.
UPDATE
Edited the code so that PyTorch computes the right value without complaining.
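For reference, this is the quick smoke test I used to convince myself a gradient actually flows (same toy shapes as above; seeded because forward() samples negatives through numpy):

torch.manual_seed(0)
np.random.seed(0)  # forward() draws negatives with np.random.choice
criterion = WARPLoss(num_labels=5)
scores = torch.randn(2, 5, requires_grad=True)
labels = torch.tensor([[1., 0., 0., 1., 0.],
                       [0., 1., 0., 0., 0.]])
loss = criterion(scores, labels)
loss.backward()                  # autograd differentiates the clamp/indexing ops
print(loss.item(), scores.grad)  # grad should be populated if autograd handled it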