Two questions about CosineEmbeddingLoss

From the source code:

Tensor cosine_embedding_loss(const Tensor& input1, const Tensor& input2, const Tensor& target, double margin, int64_t reduction) {
  auto prod_sum = (input1 * input2).sum(1);               // row-wise dot product <x1, x2>
  auto mag_square1 = (input1 * input1).sum(1) + EPSILON;  // ||x1||^2 per row, EPSILON for numerical stability
  auto mag_square2 = (input2 * input2).sum(1) + EPSILON;  // ||x2||^2 per row
  auto denom = (mag_square1 * mag_square2).sqrt_();       // ||x1|| * ||x2||
  auto cos = prod_sum / denom;                            // cosine similarity per row

  auto zeros = at::zeros_like(cos, LEGACY_CONTIGUOUS_MEMORY_FORMAT);
  auto pos = 1 - cos;                                     // loss term selected where target == 1
  auto neg = (cos - margin).clamp_min_(0);                // loss term selected where target == -1
  auto output_pos = at::where(target == 1, pos, zeros);
  auto output_neg = at::where(target == -1, neg, zeros);
  auto output = output_pos + output_neg;                  // any other target value contributes zero
  return apply_loss_reduction(output, reduction);
}
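
For context, here is an equivalent Python sketch of what this computes (my own transcription, not the actual implementation; the function name and the eps default are illustrative):

import torch

def cosine_embedding_loss_sketch(input1, input2, target, margin=0.0, eps=1e-8):
    prod_sum = (input1 * input2).sum(1)           # <x1, x2> per row
    mag_square1 = (input1 * input1).sum(1) + eps  # ||x1||^2 per row
    mag_square2 = (input2 * input2).sum(1) + eps  # ||x2||^2 per row
    cos = prod_sum / (mag_square1 * mag_square2).sqrt()

    zeros = torch.zeros_like(cos)
    pos = 1 - cos                                 # selected where target == 1
    neg = (cos - margin).clamp(min=0)             # selected where target == -1
    output = torch.where(target == 1, pos, zeros) + torch.where(target == -1, neg, zeros)
    return output.mean()                          # 'mean' reduction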

I have two questions:

  1. Is the computation duplicated across the three steps that calculate prod_sum, mag_square1, and mag_square2? They all compute an expression of the form (x * y).sum(1). Or does some mechanism like object referencing avoid the repeated work?
  2. It doesn’t check the range of target at all. I passed in a target of 0.5 and it didn’t raise an error (see the repro after this list).
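
To make question 2 concrete, here is a minimal repro (the shapes are arbitrary). Given the where() branches in the source above, a target of 0.5 matches neither 1 nor -1, so every row silently contributes zero loss instead of raising:

import torch
import torch.nn as nn

loss_fn = nn.CosineEmbeddingLoss()
x1 = torch.randn(3, 5)
x2 = torch.randn(3, 5)

# target is documented to hold 1 or -1, but nothing enforces it
target = torch.full((3,), 0.5)
print(loss_fn(x1, x2, target))  # prints tensor(0.), no error raised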

@111437
Please file an issue at https://github.com/pytorch/pytorch/issues; we will have someone look into it. Thanks a lot.