Hi. I’m working on some NLP code where I need to create specific masks. However, when I do this operation:
a[a == 1] = 5, it tells me that the value tensor and the indexing result are not the same shape. This happens after the 8th epoch, meaning it works for 8 epochs and then fails. Moreover, it happens randomly: sometimes halfway through or at the beginning of the 8th epoch, and sometimes not at all. If I clear the output and rerun that epoch, the error doesn’t occur, but it then appears in the following epoch. Here is a showcase of my problem:
import torch

batch_size, max_sen_len = 32, 50  # placeholder values; in my code these come from the data loader

original_tensor = torch.randn(batch_size, max_sen_len)
tensor_c = original_tensor.detach()
max_value, _ = torch.max(tensor_c, 1)
max_value = max_value.unsqueeze(1)
mask = torch.ones_like(tensor_c)
mask[tensor_c < max_value] = 0
mask[tensor_c >= max_value] = 1
mask2 = mask.clone()
values_batch = max_value.clone().squeeze(1)
mask2[mask2 == 1] = 5
When I get the error, it looks like this:
shape mismatch: value tensor of shape  cannot be broadcast to indexing result of shape 
Even if I add
assert mask2.shape == (mask2 == 1).shape, the assertion never fires, since the shapes really do match, yet at some particular point PyTorch thinks they don’t. So I have no clue what’s wrong. Can anybody advise on this strange phenomenon?
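For reference, here is a minimal standalone version of the same masking pattern with hypothetical fixed values (not my real data), which works every time I run it in isolation, which is why the intermittent failure confuses me:

```python
import torch

# Hypothetical small input standing in for a (batch_size, max_sen_len) tensor
t = torch.tensor([[0.1, 0.9, 0.3],
                  [0.5, 0.2, 0.8]])

# Mark each row's maximum with 1, everything else with 0
max_value, _ = torch.max(t, 1)
mask = (t >= max_value.unsqueeze(1)).float()

# Assigning a scalar through a boolean mask should always broadcast,
# regardless of how many entries are True
mask2 = mask.clone()
mask2[mask2 == 1] = 5
print(mask2.tolist())  # → [[0.0, 5.0, 0.0], [0.0, 0.0, 5.0]]
```

As far as I understand, a scalar on the right-hand side broadcasts to however many elements the boolean mask selects, so the shape-mismatch error shouldn’t be possible here at all.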