I am getting the following error in a custom layer:
RuntimeError: in-place operations can be only used on variables that don't share storage with any other variables, but detected that there are 2 objects sharing it
Here is the forward function of the nn layer:
def forward(self, batch_sentence1, batch_sentence2):
    """Defines the forward computation of the matching layer."""
    sequence_length = batch_sentence1.size(1)
    output_variable = Variable(
        torch.zeros(self.config.batch_size, sequence_length, self.num_directions, self.length))
    for word_idx in range(sequence_length):
        for batch_idx in range(self.config.batch_size):
            v1 = batch_sentence1[batch_idx][word_idx]
            v2 = batch_sentence2[batch_idx][-1]
            for matching_idx in range(self.length):
                weighted_v1 = torch.mul(self.weight_forward[matching_idx], v1)
                weighted_v2 = torch.mul(self.weight_forward[matching_idx], v2)
                cosine = weighted_v1.dot(weighted_v2)
                cosine = cosine / (torch.norm(weighted_v1, 2) * torch.norm(weighted_v2, 2))
                output_variable[batch_idx][word_idx][0][matching_idx] = cosine
I am getting the error on the last line. I have checked whether output_variable shares storage with any other object, but couldn't find anything.
Can anyone point me to the problem in my code?
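For context, here is a sketch of what I think the fix might look like: instead of assigning into a pre-allocated Variable (an in-place write that autograd complains about), accumulate the cosine values in Python lists and combine them with torch.stack at the end. The names weight and num_matching below are illustrative stand-ins for self.weight_forward and self.length in my layer. Would this be the right approach?

```python
import torch

def matching_scores(batch_sentence1, batch_sentence2, weight, num_matching):
    """Same computation as the layer above, but without in-place writes.

    Cosine similarities are collected in Python lists and combined with
    torch.stack at the end, so autograd never sees an assignment into a
    tensor that shares storage with other tensors.
    """
    batch_size, seq_len, _ = batch_sentence1.size()
    per_batch = []
    for batch_idx in range(batch_size):
        per_word = []
        for word_idx in range(seq_len):
            v1 = batch_sentence1[batch_idx, word_idx]
            v2 = batch_sentence2[batch_idx, -1]  # last word of sentence 2
            cosines = []
            for m in range(num_matching):
                w1 = weight[m] * v1
                w2 = weight[m] * v2
                # cosine similarity between the two weighted vectors
                c = w1.dot(w2) / (w1.norm(2) * w2.norm(2))
                cosines.append(c)
            per_word.append(torch.stack(cosines))      # (num_matching,)
        per_batch.append(torch.stack(per_word))         # (seq_len, num_matching)
    # (batch, seq_len, num_matching) -> add a singleton direction dimension
    return torch.stack(per_batch).unsqueeze(2)
```

Since the output is built purely out-of-place, gradients should flow back to the weight parameter without triggering the shared-storage error.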